“After the crash, can biologists fix economics?” by Kate Douglas

Article featured in New Scientist.

“THE GLOBAL financial crisis of 2008 took the world by surprise. (…) there is a growing feeling that orthodox economics can’t provide the answers to our most pressing problems, such as why inequality is spiralling. (…)

The stated aim of this Ernst Strüngmann Forum at the Frankfurt Institute for Advanced Studies was to create “a new synthesis for economics” (…) – an unlikely alliance of economists, anthropologists, ecologists and evolutionary biologists – (…) hope their ideas will mark the beginning of a new movement to rework economics using tools from more successful scientific disciplines.

(…)

The problems start with Homo economicus, a species of fantasy beings who stand at the centre of orthodox economics. (…) Over the years, there have been various attempts to inject more realism into the field by incorporating insights into how humans actually behave. (…)

But the complexities introduced by behavioural economics make it too unwieldy to be applied across the board. (…)

The forum’s aim was to address the macroeconomic problem by looking to psychology, anthropology, evolutionary biology and our growing understanding of the dynamics of collective behaviour. (…)

Using a mathematical model of price fluctuations, for example, Bell has shown that prestige bias – our tendency to copy successful or prestigious individuals – influences pricing and investor behaviour in a way that creates or exacerbates market bubbles.
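
Bell’s model itself isn’t given in the article. As a rough intuition pump, here is a minimal agent-based sketch of prestige-biased copying in a toy market, assuming a simple linear price impact and a wealth-ranked elite; every parameter is invented for illustration.

```python
import random

# Toy sketch of prestige-biased copying in a market; not Bell's actual model.
# All parameters below are illustrative assumptions.

N_AGENTS = 200
N_STEPS = 300
PRESTIGE_BIAS = 0.9  # probability an agent copies a wealthy "prestigious" agent

class Agent:
    def __init__(self):
        self.demand = random.uniform(-1, 1)  # <0 selling pressure, >0 buying
        self.wealth = 0.0

def step(agents, price):
    # Price responds to average excess demand (simple linear impact).
    excess = sum(a.demand for a in agents) / len(agents)
    new_price = max(0.01, price * (1 + 0.1 * excess))

    # Agents positioned with the move book a paper gain.
    for a in agents:
        a.wealth += a.demand * (new_price - price)

    # Prestige-biased copying: most agents imitate one of the wealthiest 10%;
    # the rest act on an independent private signal.
    elite = sorted(agents, key=lambda a: a.wealth, reverse=True)[:N_AGENTS // 10]
    for a in agents:
        if random.random() < PRESTIGE_BIAS:
            a.demand = random.choice(elite).demand
        else:
            a.demand = random.uniform(-1, 1)
    return new_price

agents = [Agent() for _ in range(N_AGENTS)]
price = 1.0
peak = price
for _ in range(N_STEPS):
    price = step(agents, price)
    peak = max(peak, price)

print(f"final price: {price:.2f}, peak: {peak:.2f}")
# With PRESTIGE_BIAS near 1, demand herds toward the elite's position and the
# price overshoots; with bias near 0, it stays close to its starting value.
```

The point of the sketch is only the feedback loop: copying success concentrates demand, which moves the price, which in turn makes the copied agents look even more successful.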

We also adapt our decisions according to the situation, which in turn changes the situations faced by others, and so on. The stability or otherwise of financial markets, for instance, depends to a great extent on traders (…)

Take the popular notion of the “wisdom of the crowd” (…) “This is often misplaced,” says Couzin, who studies collective behaviour in animals (…). Couzin and his colleagues showed last year that the wisdom of the crowd works only under certain conditions – and that, contrary to popular belief, small groups with access to many sources of information tend to make the best decisions.
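
The article doesn’t reproduce the underlying model, but the finding is easy to probe with a toy Monte Carlo loosely in its spirit: one shared (“correlated”) cue competing with each member’s independent private cue, with all probabilities invented for illustration.

```python
import random

# Toy Monte Carlo: majority vote by a group whose members either follow one
# shared cue (same realisation for everyone) or their own independent cue.
# Probabilities are illustrative assumptions, not the published model.

P_SHARED = 0.6   # chance the shared cue points the right way
P_INDEP = 0.8    # chance an agent's private cue points the right way
Q_FOLLOW = 0.5   # chance a given agent attends to the shared cue
TRIALS = 20000

def group_correct(n):
    shared_right = random.random() < P_SHARED  # one draw for the whole group
    correct_votes = 0
    for _ in range(n):
        if random.random() < Q_FOLLOW:
            correct_votes += shared_right               # copies the shared cue
        else:
            correct_votes += random.random() < P_INDEP  # independent estimate
    return 2 * correct_votes > n  # strict majority (odd n avoids ties)

for n in (1, 5, 15, 51, 201):
    acc = sum(group_correct(n) for _ in range(TRIALS)) / TRIALS
    print(f"group size {n:3d}: accuracy {acc:.3f}")
# With these illustrative numbers accuracy peaks at a small group size and
# then falls back toward P_SHARED: as the group grows, the majority locks
# onto the one shared cue and the independent information is drowned out.
```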

(…)

Taking into account such effects requires economists to abandon one-size-fits-all mathematical formulae in favour of “agent-based” modelling – computer programs that give virtual economic agents differing characteristics that in turn determine interactions. (…)
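
As a sketch of what that means in practice, here is a minimal agent-based skeleton: a population of traders with differing characteristics whose interactions move a shared price. The fundamentalist/chartist split is a standard textbook device, not something taken from the article.

```python
import random

# Minimal agent-based skeleton: heterogeneous traders, one shared price.
# Roles and parameters are standard textbook devices, chosen here only to
# illustrate the modelling style.

FUNDAMENTAL_VALUE = 100.0

class Trader:
    def __init__(self, style, aggression):
        self.style = style            # "fundamentalist" or "chartist"
        self.aggression = aggression  # how strongly the agent acts on its view

    def demand(self, price, trend):
        if self.style == "fundamentalist":
            # Buys what looks cheap relative to fundamental value, sells dear.
            return self.aggression * (FUNDAMENTAL_VALUE - price)
        # Chartists extrapolate the recent price trend instead.
        return self.aggression * trend

traders = [Trader(random.choice(["fundamentalist", "chartist"]),
                  random.uniform(0.1, 1.0))
           for _ in range(100)]

price, prev_price = 100.0, 100.0
for _ in range(200):
    trend = price - prev_price
    excess = sum(t.demand(price, trend) for t in traders) / len(traders)
    prev_price, price = price, price + 0.05 * excess + random.gauss(0, 0.5)

print(f"price after 200 steps: {price:.2f}")
# Raising the chartist share turns calm mean reversion into self-reinforcing
# swings: the volatility comes from the mix of agents, not external shocks.
```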

Orthodox economics likes to portray economies as stately ships proceeding forwards on an even keel, occasionally buffeted by unforeseen storms. Kirman prefers a different metaphor, one borrowed from biology: economies are like slime moulds, collections of single-celled organisms that move as a single body, constantly reorganising themselves to slide in directions that are neither understood nor necessarily desired by their component parts.” Read the full article.

“Build-a-brain” by Michael Graziano

Article featured on aeon.co.

“The brain is a machine: a device that processes information. (…) [and] somehow the brain experiences its own data. It has consciousness. How can that be?

That question has been called the ‘hard problem’ of consciousness (…)
Here’s a more pointed way to pose the question: can we build it? (…)

I’ve made my own entry into that race, a framework for understanding consciousness called the Attention Schema theory. The theory suggests that consciousness is no bizarre byproduct – it’s a tool for regulating information in the brain. And it’s not as mysterious as most people think. (…)

In this article I’ll conduct a thought experiment. Let’s see if we can construct an artificial brain, piece by hypothetical piece, and make it conscious. The task could be slow and each step might seem incremental, but with a systematic approach we could find a path that engineers can follow (…)”. Much more to read – go to the full article.
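
The article stays at the level of a thought experiment, but the core move of the theory, a system that consults a simplified model of its own attention when describing itself, can be caricatured in a few lines. This is an intuition aid under loose assumptions, not Graziano’s actual proposal.

```python
# Caricature of the attention-schema idea, for intuition only.

signals = {"red apple": 0.9, "hum of a fan": 0.2, "itch": 0.4}

# Attention: a competition among signals; the strongest wins priority.
focus = max(signals, key=signals.get)

# Attention schema: a compressed, simplified self-model. It records *that*
# the system is attending to something, not *how* the competition works.
schema = {"aware_of": focus}

def self_report():
    # Questions about itself are answered from the schema, not from the
    # underlying machinery, so the system's self-description contains no
    # mechanism: it "simply experiences" the apple.
    return f"I am aware of the {schema['aware_of']}."

print(self_report())
```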

Autonomous Weapons Open Letter from the Future of Life Institute

Future of Life Institute published this “Autonomous Weapons: an Open Letter from AI & Robotics Researchers”:

“Autonomous Weapons: an Open Letter from AI & Robotics Researchers
Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

“Killer robots: the soldiers that never sleep” by Simon Parkin

From the BBC, an article covering a South Korean automated machine gun.

“(…) Daejeon, a city in central South Korea, a machine gun turret idly scans the horizon. (…)

A gaggle of engineers standing around the table flinch as, unannounced, a warning barks out from a massive, tripod-mounted speaker. A targeting square blinks onto the computer screen, zeroing in on a vehicle that’s moving in the camera’s viewfinder. (…) The speaker (…) has a range of three kilometres. (…) “Turn back,” it says, in rapid-fire Korean. “Turn back or we will shoot.”

The “we” is important. The Super aEgis II, South Korea’s best-selling automated turret, will not fire without first receiving an OK from a human. (…)
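
As a sketch of that interlock, the control flow might look like the following: detection, tracking and warning run autonomously, but the fire action is gated on explicit operator approval. Every name here is hypothetical; nothing below comes from DoDAAM.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical human-in-the-loop interlock: the system detects and warns on
# its own, but engagement requires an explicit operator decision.

class Authorization(Enum):
    PENDING = auto()
    GRANTED = auto()
    DENIED = auto()

@dataclass
class Track:
    track_id: int
    range_m: float

class TurretController:
    def __init__(self):
        self.pending = {}  # track_id -> Authorization

    def detect(self, track):
        # Autonomous part: detection, tracking, audible warning.
        print(f"track {track.track_id} at {track.range_m:.0f} m: issuing warning")
        self.pending[track.track_id] = Authorization.PENDING

    def operator_decision(self, track_id, approve):
        self.pending[track_id] = (
            Authorization.GRANTED if approve else Authorization.DENIED
        )

    def engage(self, track_id):
        # The interlock: no human confirmation, no engagement.
        if self.pending.get(track_id) is not Authorization.GRANTED:
            print(f"track {track_id}: engagement blocked (no operator approval)")
            return False
        print(f"track {track_id}: engagement authorized by operator")
        return True

turret = TurretController()
turret.detect(Track(track_id=7, range_m=2400))
turret.engage(7)                       # blocked: operator has not approved
turret.operator_decision(7, approve=True)
turret.engage(7)                       # proceeds only after explicit approval
```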

The past 15 years have seen a concerted development of such automated weapons and drones. (…) Robots reduce the need for humans in combat and therefore save the lives of soldiers, sailors and pilots.

(…) The call from Human Rights Watch for an outright ban on “the development, production, and use of fully autonomous weapons” seems preposterously unrealistic. Such machines already exist and are being sold on the market – albeit with, as DoDAAM’s Park put it, “self-imposed restrictions” on their capabilities.

(…) Things become more complicated when the machine is placed in a location where friend and foe could potentially mix (…)

If a human pilot can deliberately crash an airliner, should such planes have autopilots that can’t be over-ruled?

Likewise, a fully autonomous version of the Predator drone may have to decide whether or not to fire on a house whose occupants include both enemy soldiers and civilians. (…) In this context the provision of human overrides makes sense. (…)

Some believe the answer, then, is to mimic the way in which human beings build an ethical framework and learn to reflect on different moral rules, making sense of which ones fit together. (…) At DoDAAM, Park has what appears to be a sound compromise. “When we reach the point at which we have a turret that can make fully autonomous decisions by itself, we will ensure that the AI adheres to the relevant army’s manual. We will follow that description and incorporate those rules of engagement into our system.”

(…)

The clock is ticking on these questions. (…) Regardless of what’s possible in the future, automated machine guns capable of finding, tracking, warning and eliminating human targets, absent of any human interaction, already exist in our world. Without clear international regulations, the only thing holding arms makers back from selling such machines appears to be the conscience, not of the engineer or the robot, but of the clients. “If someone came to us wanting a turret that did not have the current safeguards we would, of course, advise them otherwise, and highlight the potential issues,” says Park. “But they will ultimately decide what they want. And we develop to customer specification.”