On unaccountable algorithms that watch and profile us all

“Digital star chamber”, featured in Aeon magazine – by Frank Pasquale:

“The infancy of the internet is over. As online spaces mature, Facebook, Google, Apple, Amazon, and other powerful corporations are setting the rules that govern competition among journalists, writers, coders, and e-commerce firms. (…)

Algorithms are increasingly important because businesses rarely thought of as high tech (…) are collecting data from both workers and customers, using algorithmic tools to make decisions, to sort the desirable from the disposable. (…)

For wines or films, the stakes are not terribly high. But when algorithms start affecting critical opportunities for employment, career advancement, health, credit and education, they deserve more scrutiny. (…)

Such controversies have given rise to a movement for algorithmic accountability. At ‘Governing Algorithms’, a 2013 conference at New York University, a community of scholars and activists coalesced to analyse the outputs of algorithmic processes critically. Today these scholars and activists are pushing a robust dialogue on algorithmic accountability, or #algacc for short. Like the ‘access to knowledge’ (A2K) mobilisation did in the 2000s, #algacc turns a spotlight on a key social justice issue of the 2010s.

(…) When false, damaging information can instantly spread between databases, but take months or years of legwork and advocacy to correct, the data architecture is defective by design. (…) Data collection problems go beyond inaccuracy. Some data methods are just too invasive to be permitted in a civilised society.

(…) While most privacy activists focus on the collection issue, the threat posed by reckless, bad, or discriminatory analysis may well be more potent. (…) Consider racism first. There is a long and troubling history of discrimination against minorities. Extant employment discrimination laws already ban bias, and can result in hefty penalties. So, many advocates of algorithmic decision-making say, why worry about our new technology? Discrimination in any form – personal, technological, what have you – is already banned. This is naïve at best. Algorithmic decision-making processes collect personal and social data from a society with a discrimination problem. Society abounds with data that are often simple proxies for discrimination – zip or postal codes, for example. [A short sketch after this excerpt illustrates the proxy effect.]

(…) Protected by trade secrecy, many algorithms remain impenetrable to outside observers. When they try to unveil them, litigants can face a Catch-22. Legitimately concerned to stop ‘fishing expeditions’, courts are likely to grant discovery requests only if a plaintiff has accumulated some quantity of evidence of discrimination. But if the key entity making a decision was a faceless ‘black boxed’ algorithm, what’s the basis for an initial suspicion of discrimination?

(…) When the problems with algorithmic decision-making come to light, big firms tend to play a game of musical expertise. Lawyers say, and are told, they don’t understand the code. Coders say, and are told, they don’t understand the law. Economists, sociologists, and ethicists hear variations on both stonewalling stances. (…)”
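The zip-code proxy Pasquale mentions is concrete enough to sketch. Below is a minimal, hypothetical Python illustration (not from the essay): a hiring model is “blinded” to a protected group attribute, yet a zip-code feature that correlates with group membership lets it reproduce the historical bias anyway. The zip codes, group splits, and hire probabilities are all invented for the example.

```python
# Hypothetical sketch (not from the essay): a hiring model that never sees
# a protected group attribute can still discriminate through a correlated
# proxy such as a zip code. All data below are synthetic and illustrative.
import random

random.seed(0)

# A segregated synthetic city: zip 10001 is 90% group A, zip 10002 is 90%
# group B, and the historical hiring record favoured group A.
rows = []
for _ in range(10_000):
    zip_code = random.choice(["10001", "10002"])
    group = "A" if random.random() < (0.9 if zip_code == "10001" else 0.1) else "B"
    hired = random.random() < (0.6 if group == "A" else 0.3)  # biased history
    rows.append((zip_code, group, hired))

# "Fairness through blindness": the model is built only from the zip code,
# predicting a hire wherever the historical hire rate for that zip is >= 50%.
def zip_hire_rate(z):
    outcomes = [hired for zc, _, hired in rows if zc == z]
    return sum(outcomes) / len(outcomes)

model = {z: zip_hire_rate(z) >= 0.5 for z in ("10001", "10002")}

# Outcomes split by the protected attribute the model never saw:
for g in ("A", "B"):
    preds = [model[zc] for zc, grp, _ in rows if grp == g]
    print(f"group {g}: predicted-hire rate = {sum(preds) / len(preds):.2f}")
```

Running the sketch prints a predicted-hire rate of roughly 0.9 for group A and 0.1 for group B, even though the group label never enters the model; the disparity travels entirely through the zip code, which is exactly the proxy effect the essay warns about.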

Read the full story at Aeon.