“Algorithms and Bias: Q. and A. With Cynthia Dwork” by Claire Cain Miller

Interview featured in the NY Times:

“Q: Some people have argued that algorithms eliminate discrimination because they make decisions based on data, free of human bias. Others say algorithms reflect and perpetuate human biases. What do you think?

A: Algorithms do not automatically eliminate bias. Suppose a university, with admission and rejection records dating back for decades and faced with growing numbers of applicants, decides to use a machine learning algorithm that, using the historical records, identifies candidates who are more likely to be admitted. Historical biases in the training data will be learned by the algorithm, and past discrimination will lead to future discrimination.
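
As an aside, a minimal synthetic sketch in Python may help make the mechanism Dwork describes concrete: a classifier trained on biased historical decisions learns the bias as if it were signal. Every name and number below is invented for illustration, not taken from the interview.

# Hypothetical illustration: historical admissions penalized one group;
# a model trained on those records learns to reproduce the penalty.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
test_score = rng.normal(0, 1, n)      # applicant qualification (synthetic)
group = rng.integers(0, 2, n)         # 0/1 demographic group (synthetic)

# Historical decisions: equally qualified group-1 applicants were admitted less often.
logit = 2.0 * test_score - 1.5 * group
admitted = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([test_score, group])
model = LogisticRegression().fit(X, admitted)

# The learned coefficient on `group` is negative: past discrimination,
# future discrimination.
print(dict(zip(["test_score", "group"], model.coef_[0])))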

 

(…) Q: You have studied both privacy and algorithm design, and co-wrote a paper, “Fairness Through Awareness,” that came to some surprising conclusions about discriminatory algorithms and people’s privacy. Could you summarize those?

A: “Fairness Through Awareness” makes the observation that sometimes, in order to be fair, it is important to make use of sensitive information while carrying out the classification task. This may be a little counterintuitive: The instinct might be to hide information that could be the basis of discrimination.” Continue reading
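
The paper’s central notion can be phrased as a Lipschitz condition: similar individuals should receive similar outcomes, |f(x) − f(y)| ≤ d(x, y), and choosing the task-specific metric d well is exactly where sensitive information may be needed. Below is a toy Python sketch of checking that condition; the metric d and all the data are hypothetical stand-ins, not the paper’s constructions.

# Toy check of the individual-fairness (Lipschitz) condition from
# "Fairness Through Awareness"; metric and data are made up here.
import itertools
import numpy as np

def d(x, y):
    # Hypothetical task-specific similarity metric between two individuals.
    return float(np.linalg.norm(x - y))

def lipschitz_violations(individuals, scores):
    """Return pairs whose score gap exceeds their similarity distance,
    i.e. violations of |f(x) - f(y)| <= d(x, y)."""
    bad = []
    for (i, x), (j, y) in itertools.combinations(enumerate(individuals), 2):
        if abs(scores[i] - scores[j]) > d(x, y):
            bad.append((i, j))
    return bad

individuals = np.array([[0.90, 0.10], [0.88, 0.12], [0.20, 0.70]])
scores = [0.95, 0.30, 0.25]   # classifier outputs in [0, 1]
print(lipschitz_violations(individuals, scores))  # [(0, 1)]: near-twins, very different scores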

Robots being bullied

This video shows kids being bullies in a mall – the victim this time was a robot.

 

It may relate to another recent case of a robot being vandalized on its hitchhiking trip.

For the time being, such experiments are very useful for shedding light on human behavior.  Suppose someone argues that, like many other human behaviors, this is one that society may opt to discourage.  As in many other cases, one way to set limits is to grant victims the right not to be molested.

As we are likely to see such events recurring more often, defenders of robot rights may have a growing number of examples on their side.

Artificial Intelligence tackles the Internet of Things

In “Connecting artificial intelligence with the internet of things”, Andy Meek discusses pros and cons of merging Artificial Intelligence and the Internet of Things: reasons for optimism, pitfalls, and the debate over the fears it raises.

And Stephen Brennan’s “The Next Big Thing Is The Continuum” is about the trends and challenges the tech industry faces in trying to merge A.I. and IoT into one new environment.

“How to Help Self-Driving Cars Make Ethical Decisions” by Will Knight

From MIT Tech Review:

“Fully self-driving vehicles are still at the research stage, but automated driving technology is rapidly creeping into vehicles. (…)

As the technology advances, however, and cars become capable of interpreting more complex scenes, automated driving systems may need to make split-second decisions that raise real ethical questions.

(…) a child suddenly dashing into the road, forcing the self-driving car to choose between hitting the child or swerving into an oncoming van.

(…) “If that would avoid the child, if it would save the child’s life, could we injure the occupant of the vehicle? (…)

Others believe the situation is a little more complicated. For example, Bryant Walker Smith (…) says plenty of ethical decisions are already made in automotive engineering. “Ethics, philosophy, law: all of these assumptions underpin so many decisions,” he says. “If you look at airbags, for example, inherent in that technology is the assumption that you’re going to save a lot of lives, and only kill a few.”

(…) “The biggest ethical question is how quickly we move. We have a technology that potentially could save a lot of people, but is going to be imperfect and is going to kill.”  read full article

 

Adaptive data analysis

In “The reusable holdout: Preserving validity in adaptive data analysis”, published in Science, researchers Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Aaron Roth address the issue of adaptivity in data analysis.

Applying rules of thumb such as the 5% significance test many of us learn when first introduced to the scientific method can end up corroborating misleading ‘discoveries’: when the same data is reused to test hypotheses that were chosen after looking at that data, the nominal significance level no longer holds.  Data analysis is, often enough, a re-interpretation of statistics, so conclusions carry much of our models and of how we interpreted the raw data in the first place.

Co-author Moritz Hardt posted an interesting introduction to the paper on the Google Research Blog.
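
For a flavor of the mechanism, here is a simplified Python sketch of the paper’s Thresholdout idea: answer analysts’ queries from the training set, but compare each answer against the holdout and release only a noise-protected holdout estimate when the two drift apart. The threshold and noise scales below are placeholders, not the paper’s exact settings.

# Simplified Thresholdout sketch; see the Science paper for the real
# parameterization and its reusability guarantees.
import numpy as np

rng = np.random.default_rng(0)

def thresholdout(train_value, holdout_value, threshold=0.04, sigma=0.01):
    """Answer one query (e.g. a feature/label correlation) adaptively
    while limiting how much information leaks about the holdout set."""
    if abs(train_value - holdout_value) > threshold + rng.laplace(0, sigma):
        return holdout_value + rng.laplace(0, sigma)   # noisy holdout answer
    return train_value                                 # training answer is fine

# A correlation that looks strong on the training split but not on the
# holdout gets reported as (noisy) near-zero instead of a 'discovery':
print(thresholdout(train_value=0.21, holdout_value=0.02))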

How many words for a picture?

In “Deep Visual-Semantic Alignments for Generating Image Descriptions”, researchers Andrej Karpathy and Li Fei-Fei present a model for generating natural language descriptions of images.

They share part of the code on GitHub so people can train their own neural networks to describe images.

The model pairs visual information with corresponding descriptions, such as:

“girl in pink dress is jumping in air.”

“woman is holding bunch of bananas.”

“a young boy is holding a baseball bat.”
These descriptions are generated through layers of semantic correspondence between image regions and sentence fragments.
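
To make the pipeline concrete, here is a toy numpy sketch – emphatically not the authors’ code – of the generative side: a recurrent network that, conditioned on a CNN image feature, emits one word at a time. With the random weights below it produces gibberish; training on aligned image–sentence data, which is what the GitHub release supports, is what yields captions like the ones above.

# Toy image-conditioned RNN decoder (random weights, illustrative only).
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<end>", "girl", "in", "pink", "dress", "is", "jumping"]
V, H, D = len(vocab), 16, 8        # vocab size, hidden size, image-feature size

Wxh = rng.normal(0, 0.1, (H, V))   # previous word -> hidden
Whh = rng.normal(0, 0.1, (H, H))   # hidden -> hidden
Wih = rng.normal(0, 0.1, (H, D))   # CNN image feature -> initial hidden state
Why = rng.normal(0, 0.1, (V, H))   # hidden -> vocabulary scores

def caption(image_feature, max_len=8):
    h = np.tanh(Wih @ image_feature)      # condition the RNN on the image
    word, words = np.zeros(V), []
    for _ in range(max_len):
        h = np.tanh(Wxh @ word + Whh @ h)
        idx = int(np.argmax(Why @ h))     # greedy choice of the next word
        if vocab[idx] == "<end>":
            break
        words.append(vocab[idx])
        word = np.eye(V)[idx]             # feed the chosen word back in
    return " ".join(words)

print(caption(rng.normal(size=D)))        # untrained: shows structure, not sense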

“Build-a-brain” by Michael Graziano

Article featured in aeon.co:

“The brain is a machine: a device that processes information. (…) [and] somehow the brain experiences its own data. It has consciousness. How can that be?

That question has been called the ‘hard problem’ of consciousness (…)
Here’s a more pointed way to pose the question: can we build it? (…)

I’ve made my own entry into that race, a framework for understanding consciousness called the Attention Schema theory. The theory suggests that consciousness is no bizarre byproduct – it’s a tool for regulating information in the brain. And it’s not as mysterious as most people think. (…)

In this article I’ll conduct a thought experiment. Let’s see if we can construct an artificial brain, piece by hypothetical piece, and make it conscious. The task could be slow and each step might seem incremental, but with a systematic approach we could find a path that engineers can follow (…)”. Much more to read – go to the full article.

Autonomous Weapons Open Letter from the Future of Life Institute

Future of Life Institute published this “Autonomous Weapons: an Open Letter from AI & Robotics Researchers”:

“Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

“Killer robots: The soldiers that never sleep” by Simon Parkin

From the BBC, an article covering a South Korean automated machine gun.

“(…) Daejeon, a city in central South Korea, a machine gun turret idly scans the horizon. (…)

A gaggle of engineers standing around the table flinch as, unannounced, a warning barks out from a massive, tripod-mounted speaker. A targeting square blinks onto the computer screen, zeroing in on a vehicle that’s moving in the camera’s viewfinder. (…) The speaker (…) has a range of three kilometres. (…) “Turn back,” it says, in rapid-fire Korean. “Turn back or we will shoot.”

The “we” is important. The Super aEgis II, South Korea’s best-selling automated turret, will not fire without first receiving an OK from a human. (…)

The past 15 years has seen a concerted development of such automated weapons and drones. (…) Robots reduce the need for humans in combat and therefore save the lives of soldiers, sailors and pilots.

(…) The call from Human Rights Watch for an outright ban on “the development, production, and use of fully autonomous weapons” seems preposterously unrealistic. Such machines already exist and are being sold on the market – albeit with, as DoDAAM’s Park put it, “self-imposed restrictions” on their capabilities.

(…) Things become more complicated when the machine is placed in a location where friend and foe could potentially mix (…)

If a human pilot can deliberately crash an airliner, should such planes have autopilots that can’t be over-ruled?

Likewise, a fully autonomous version of the Predator drone may have to decide whether or not to fire on a house whose occupants include both enemy soldiers and civilians. (…) In this context the provision of human overrides makes sense. (…)

Some believe the answer, then, is to mimic the way in which human beings build an ethical framework and learn to reflect on different moral rules, making sense of which ones fit together. (…)  At DoDAAM, Park has what appears to be a sound compromise. “When we reach the point at which we have a turret that can make fully autonomous decisions by itself, we will ensure that the AI adheres to the relevant army’s manual. We will follow that description and incorporate those rules of engagement into our system.”

(…)

The clock is ticking on these questions. (…)  Regardless of what’s possible in the future, automated machine guns capable of finding, tracking, warning and eliminating human targets, absent of any human interaction already exist in our world. Without clear international regulations, the only thing holding arms makers back from selling such machines appears to be the conscience, not of the engineer or the robot, but of the clients. “If someone came to us wanting a turret that did not have the current safeguards we would, of course, advise them otherwise, and highlight the potential issues,” says Park. “But they will ultimately decide what they want. And we develop to customer specification.”

 

“Human Curation Is Back” by Jean-Louis Gassée

Article featured in Monday Note:

“(…) The limitations of algorithmic curation of news and culture has prompted a return to the use of actual humans to select, edit, and explain. Who knows, this might spread to another less traditional media: apps.

(…)  Another type of curator, one whose role as trustee is even more crucial, is a newspaper’s editor-in-chief. (…)

With search engines, we see a different kind of curator: algorithms. Indefatigable, capable of sifting through literally unimaginable amounts of data, algorithms have been proffered as an inexpensive, comprehensive, and impartial way to curate news, music, video — essentially everything.

The inexpensive part has proved to be accurate; comprehensive and impartial less so. (…)

Certainly, algorithms can be built to perform specialized feats of intelligence such as beating a world-class chess player or winning at Jeopardy. (…) But ask a computer scientist for the meaning of meaning, for an algorithm that can extract the meaning of a sentence and you will either elicit a blank look, or an obfuscating discourse that, in fact, boils down to a set of rules, of heuristics, that yield an acceptable approximation. (…) ”

read full article

New accident with Google’s self-driving car. Again, stationary. Now with injuries

Chris Urmson, from Google’s self-driving car project, posted a new ‘chapter’ about their experience in learning about self-driving vehicles.

Far from celebrating the accidents, the post treats such events as critical to understanding how accidents really happen – even when you are stationary, or it’s not your fault.  As all drivers learn (or know intuitively, one may argue), if a car comes the wrong way straight at yours, you’ll be sorry about the outcome – no matter who is to blame.

This video, part of the post, shows the information the car’s system was processing.

 

Brains interconnected as an intranet

In “Building an organic computing device with multiple interconnected brains”, researchers Miguel Pais-Vieira, Gabriela Chiuffa, Mikhail Lebedev, Amol Yadav, and Miguel A. L. Nicolelis introduce an application of brain-to-brain interfaces.

Such interfaces are ways to receive signals from, and send stimuli directly to, animals’ brains – in this paper’s experiments, rats.

Applications such as studying animal social behavior and sensory phenomena, along with other insights into animal cognitive processes, are among the prospects of such – rather invasive – techniques.

Indirectly, it may be very interesting to turn the question around: how might our own A.I. neural systems benefit from interconnectivity?
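
Purely as speculation in that direction, here is a toy Python sketch: several independent “brains” – tiny random models standing in for trained networks – exchange a shared signal over a few rounds and settle on a collective output. This loosely echoes the paper’s Brainet idea, not its actual methods.

# Speculative toy: interconnected models converging on a collective answer.
import numpy as np

rng = np.random.default_rng(0)

class TinyBrain:
    def __init__(self, dim):
        self.w = rng.normal(size=dim)
    def respond(self, stimulus, peer_signal=0.0):
        # Each brain sees the stimulus plus a summary of its peers' activity.
        return float(np.tanh(self.w @ stimulus + peer_signal))

brains = [TinyBrain(4) for _ in range(3)]
stimulus = rng.normal(size=4)

signal = 0.0
for _ in range(5):                        # a few rounds of exchange
    outputs = [b.respond(stimulus, signal) for b in brains]
    signal = float(np.mean(outputs))      # the shared, interconnecting signal

print("collective output:", signal)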