“After the crash, can biologists fix economics?” by Kate Douglas

Article featured by New Scientist.

“THE GLOBAL financial crisis of 2008 took the world by surprise. (…)  there is a growing feeling that orthodox economics can’t provide the answers to our most pressing problems, such as why inequality is spiralling. (…)

 The stated aim of this Ernst Strüngmann Forum at the Frankfurt Institute for Advanced Studies was to create “a new synthesis for economics”(…)   – an unlikely alliance of economists, anthropologists, ecologists and evolutionary biologists – (…)   hope their ideas will mark the beginning of a new movement to rework economics using tools from more successful scientific disciplines.

(…)

The problems start with Homo economicus, a species of fantasy beings who stand at the centre of orthodox economics. (…)  Over the years, there have been various attempts to inject more realism into the field by incorporating insights into how humans actually behave. (…)  

But the complexities introduced by behavioural economics make it too unwieldy to be applied across the board. (…)

Its aim was to try to address the macroeconomic problem by looking to psychology, anthropology, evolutionary biology and our growing understanding of the dynamics of collective behaviour. (…)

Using a mathematical model of price fluctuations, for example, Bell has shown that prestige bias – our tendency to copy successful or prestigious individuals – influences pricing and investor behaviour in a way that creates or exacerbates market bubbles.
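
To make the mechanism concrete, here is a toy simulation in the spirit of that result (a sketch under parameter choices and an update rule of our own; it is not Bell's actual model): agents either act on a noisy private estimate of fundamental value or copy the currently most successful trader, and strong copying pushes the price away from fundamentals.

```python
import random

# Toy sketch of prestige-biased trading (illustrative assumptions only;
# not Bell's model). Agents either act on a noisy private estimate of
# fundamental value, or copy the currently most successful trader.
FUNDAMENTAL = 100.0
N_AGENTS = 200
PRESTIGE_BIAS = 0.7            # probability of copying the top performer
price = FUNDAMENTAL
wealth = [0.0] * N_AGENTS

for t in range(100):
    star = max(range(N_AGENTS), key=lambda i: wealth[i])   # most prestigious agent
    estimates = [FUNDAMENTAL + random.gauss(0, 5) for _ in range(N_AGENTS)]
    star_buys = estimates[star] > price
    excess_demand = 0
    for i in range(N_AGENTS):
        if random.random() < PRESTIGE_BIAS:
            buys = star_buys                    # prestige bias: imitate the star
        else:
            buys = estimates[i] > price         # act on own information
        excess_demand += 1 if buys else -1
        wealth[i] += (1 if buys else -1) * (FUNDAMENTAL - price)
    price += 0.05 * excess_demand               # correlated demand moves the price
    print(t, round(price, 2))                   # watch the price detach from 100
```

With PRESTIGE_BIAS near zero the price hovers around the fundamental; pushed toward one, demand becomes correlated and bubble-like excursions appear.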

We also adapt our decisions according to the situation, which in turn changes the situations faced by others, and so on. The stability or otherwise of financial markets, for instance, depends to a great extent on traders (…)

Take the popular notion of the “wisdom of the crowd” (…)   “This is often misplaced,” says Couzin, who studies collective behaviour in animals (…)  .  Couzin and his colleagues showed last year that the wisdom of the crowd works only under certain conditions – and that contrary to popular belief, small groups with access to many sources of information tend to make the best decisions.
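
A toy Monte Carlo illustrates why (the parameters and setup below are our own illustrative assumptions, not Couzin's published model): when voters share a correlated public cue, adding more of them stops helping, and group accuracy plateaus near the reliability of the shared cue instead of climbing toward certainty.

```python
import random

# Toy Monte Carlo of crowd accuracy under correlated information
# (illustrative assumptions, not Couzin's model). The true answer is 1.
P_PRIVATE = 0.6      # accuracy of an agent's independent private cue
P_PUBLIC = 0.55      # accuracy of the single shared public cue
CORRELATION = 0.5    # probability an agent follows the shared cue
TRIALS = 20000

def majority_accuracy(group_size):
    correct = 0
    for _ in range(TRIALS):
        public_cue = 1 if random.random() < P_PUBLIC else 0
        votes = sum(
            public_cue if random.random() < CORRELATION
            else (1 if random.random() < P_PRIVATE else 0)
            for _ in range(group_size)
        )
        correct += votes > group_size / 2
    return correct / TRIALS

for n in (3, 11, 51, 201):
    print(n, round(majority_accuracy(n), 3))   # accuracy saturates, not -> 1.0
```

Set CORRELATION to zero and the classic Condorcet picture reappears: accuracy climbs toward 1 as the group grows.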

(…)

Taking into account such effects requires economists to abandon one-size-fits-all mathematical formulae in favour of “agent-based” modelling – computer programs that give virtual economic agents differing characteristics that in turn determine interactions. (…)
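
In code, the contrast with a one-size-fits-all formula looks like the skeleton below (a hypothetical toy, with agent characteristics chosen purely for illustration): each virtual agent carries its own parameters, and macro quantities such as inequality emerge from repeated local interactions.

```python
import random
from dataclasses import dataclass

# Minimal agent-based modelling skeleton (hypothetical illustration).
@dataclass
class Agent:
    risk_appetite: float        # heterogeneous characteristic per agent
    wealth: float = 100.0

def interact(a, b):
    # Stake size depends on the characteristics of the pair that meets.
    stake = 0.1 * min(a.wealth, b.wealth) * min(a.risk_appetite, b.risk_appetite)
    winner, loser = (a, b) if random.random() < 0.5 else (b, a)
    winner.wealth += stake
    loser.wealth -= stake

agents = [Agent(risk_appetite=random.random()) for _ in range(1000)]
for _ in range(100000):
    interact(*random.sample(agents, 2))

# A macro outcome -- here, inequality -- emerges from micro interactions.
richest = sorted(a.wealth for a in agents)[-100:]
print("wealth share of top 10%:", round(sum(richest) / sum(a.wealth for a in agents), 2))
```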

Orthodox economics likes to portray economies as stately ships proceeding forwards on an even keel, occasionally buffeted by unforeseen storms. Kirman prefers a different metaphor, one borrowed from biology: economies are like slime moulds, collections of single-celled organisms that move as a single body, constantly reorganising themselves to slide in directions that are neither understood nor necessarily desired by their component parts.” read full article

“Build-a-brain” by Michael Graziano

Article featured in aeon.co.

“The brain is a machine: a device that processes information. (…) [and] somehow the brain experiences its own data. It has consciousness. How can that be?

That question has been called the ‘hard problem’ of consciousness (…)
Here’s a more pointed way to pose the question: can we build it? (…)

I’ve made my own entry into that race, a framework for understanding consciousness called the Attention Schema theory. The theory suggests that consciousness is no bizarre byproduct – it’s a tool for regulating information in the brain. And it’s not as mysterious as most people think. (…)

In this article I’ll conduct a thought experiment. Let’s see if we can construct an artificial brain, piece by hypothetical piece, and make it conscious. The task could be slow and each step might seem incremental, but with a systematic approach we could find a path that engineers can follow (…)” – much more to read; go to the full article.
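
In that spirit, here is a loose toy sketch of the Attention Schema idea (all names and simplifications are ours, and this is nowhere near a full implementation of Graziano's theory): an agent that both allocates attention over its inputs and maintains a compressed internal model of that attention, which is what it consults when it reports on itself.

```python
# Loose toy of the Attention Schema idea (our illustration, not
# Graziano's implementation).
class AttentionSchemaAgent:
    def __init__(self):
        self.schema = None      # simplified self-model of attention

    def attend(self, signals):
        focus = max(signals, key=signals.get)   # attention: strongest signal wins
        # The schema is a compressed, caricatured description of the
        # attention process -- not the process itself.
        self.schema = {"focus": focus, "report": "I am aware of " + focus}
        return focus

    def describe_self(self):
        # On this view, introspective reports are read off the schema,
        # which is why they describe "awareness" rather than mechanism.
        return self.schema["report"] if self.schema else "I am aware of nothing"

agent = AttentionSchemaAgent()
agent.attend({"red light": 0.9, "hum": 0.2, "itch": 0.4})
print(agent.describe_self())    # -> I am aware of red light
```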

Autonomous Weapons Open Letter from the Future of Life Institute

The Future of Life Institute published this open letter from AI and robotics researchers:

“Autonomous Weapons: an Open Letter from AI & Robotics Researchers
Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

“Killer robots: The soldiers that never sleep” by Simon Parkin

From the BBC, an article covering a South Korean automated machine-gun turret.

“(…) Daejeon, a city in central South Korea, a machine gun turret idly scans the horizon. (…)

A gaggle of engineers standing around the table flinch as, unannounced, a warning barks out from a massive, tripod-mounted speaker. A targeting square blinks onto the computer screen, zeroing in on a vehicle that’s moving in the camera’s viewfinder. (…) The speaker (…) has a range of three kilometres. (…) “Turn back,” it says, in rapid-fire Korean. “Turn back or we will shoot.”

The “we” is important. The Super aEgis II, South Korea’s best-selling automated turret, will not fire without first receiving an OK from a human. (…)
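
In software terms, that “OK from a human” is a human-in-the-loop interlock. A schematic sketch of such a control loop (entirely hypothetical; this is not DoDAAM's code, and the state names are ours):

```python
from enum import Enum, auto

# Schematic human-in-the-loop interlock (hypothetical sketch, not
# DoDAAM's software). Detection, tracking and warnings are automatic;
# the engage step is gated on explicit human authorization.
class State(Enum):
    SCANNING = auto()
    WARNING = auto()
    AWAITING_AUTHORIZATION = auto()

def engage():
    print("engage")             # stand-in for the actual firing command

def tick(state, target_present, human_ok):
    if state is State.SCANNING:
        return State.WARNING if target_present else State.SCANNING
    if state is State.WARNING:
        # "Turn back or we will shoot" -- still fully automatic.
        return State.AWAITING_AUTHORIZATION if target_present else State.SCANNING
    if state is State.AWAITING_AUTHORIZATION:
        if not target_present:
            return State.SCANNING
        if human_ok:
            engage()            # the only path to firing runs through a human
            return State.SCANNING
        return State.AWAITING_AUTHORIZATION   # holds indefinitely without consent
```

Removing the human_ok check is a one-line change, which is why the article's closing point about “self-imposed restrictions” and customer specification matters.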

The past 15 years has seen a concerted development of such automated weapons and drones. (…) . Robots reduce the need for humans in combat and therefore save the lives of soldiers, sailors and pilots.

(…) . The call from Human Rights Watch for an outright ban on “the development, production, and use of fully autonomous weapons” seems preposterously unrealistic. Such machines already exist and are being sold on the market – albeit with, as DoDAAM’s Park put it, “self-imposed restrictions” on their capabilities.

(…) Things become more complicated when the machine is placed in a location where friend and foe could potentially mix (…)

If a human pilot can deliberately crash an airliner, should such planes have autopilots that can’t be over-ruled?

Likewise, a fully autonomous version of the Predator drone may have to decide whether or not to fire on a house whose occupants include both enemy soldiers and civilians.(…)  In this context the provision of human overrides make sense.(…)

Some believe the answer, then, is to mimic the way in which human beings build an ethical framework and learn to reflect on different moral rules, making sense of which ones fit together. (…)  At DoDAAM, Park has what appears to be a sound compromise. “When we reach the point at which we have a turret that can make fully autonomous decisions by itself, we will ensure that the AI adheres to the relevant army’s manual. We will follow that description and incorporate those rules of engagement into our system.”

‘(…)

The clock is ticking on these questions. (…)  Regardless of what’s possible in the future, automated machine guns capable of finding, tracking, warning and eliminating human targets, absent of any human interaction already exist in our world. Without clear international regulations, the only thing holding arms makers back from selling such machines appears to be the conscience, not of the engineer or the robot, but of the clients. “If someone came to us wanting a turret that did not have the current safeguards we would, of course, advise them otherwise, and highlight the potential issues,” says Park. “But they will ultimately decide what they want. And we develop to customer specification.”

 

“Facebook Instant Articles Just Don’t Add Up for Publishers” By Michael Wolff

From MIT Tech Review:

“(…) digital content is being divided between a lucrative high-end entertainment world, where licensors receive a negotiated fee for allowing the distribution of their property, and a low-end publishing world where content is expected to be “free,” supporting itself on often elusive advertising sales and ad splits. In this particular deal, publishers can sell ads on their articles and keep all of the revenue, or have Facebook sell ads in exchange for 30 percent.

(…) The more immediate question is about whether Facebook’s “instant articles” and other republishing initiatives are digging a deeper hole for publishers or helping them get out of the one they are already in.

(…)

When the Facebook “instant articles” deal was first proposed last fall, there was no provision at all for a financial exchange. From Facebook’s point of view, it was just a further service to users and publishers. If it hosted the Times’ content it would load faster—hence a better experience for Facebook users clicking to a shared Times story. The Times and other publishers should do this, Facebook reckoned, because it would get them greater exposure to Facebook’s vast audience. It was promotional.

After some limited pushback from the publishers, the deal now resembles a conventional digital ad split—of the kind made ubiquitous by Google AdSense. That is, if Facebook sells against this content through its networks, it splits the revenues with the publisher. If the publisher sells the ad, as though in a free-standing insert model, it keeps what it kills. (Exactly the model that has consistently lowered digital ad prices—the inevitable discounting when you have many sellers of the same space.)

(…)

But what of the New York Times? (…) This is further puzzling because the Times has built a digital subscription business of almost a million users. Why subscribe to the Times if you can read it for free on Facebook?

Of course, the subscription business will not support the Times alone (indeed, its growth appears to be seriously slowing)—it needs advertising too. Most of the advertising that pays for most of the Times’ costs still comes from the actual newspaper. That revenue stream is declining quickly, however, and is far from being replaced by digital ads, which in the first quarter of 2015 yielded only $14 million a month in revenue (15 years ago, before digital balkanized the business, the Times was averaging more than $100 million a month in ad revenue).

These measly ad dollars are in part a function of the fact that Google and Facebook together take 52 percent of all digital advertising. (…)

And now, in the prevalent view, there is simply no turning back. The math has changed. The New York Times may once have made more than $100 million a month in advertising revenue on a 1.5 million circulation base; now it makes $14 million on 50 million monthly visitors on the digital side of the business. So it will need something like 350 million users to make equivalent money—which, bizarrely, Facebook might possibly provide. Except, of course, that the more numbers go up, in digital math, the more their value goes down. But pay no attention.” read full article
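
The arithmetic in that paragraph checks out (figures as quoted in the article):

```python
# Back-of-envelope check of the article's quoted figures.
print_era_ads = 100_000_000        # dollars per month, ~15 years earlier
digital_ads = 14_000_000           # dollars per month, Q1 2015
visitors = 50_000_000              # monthly digital visitors

per_visitor = digital_ads / visitors      # $0.28 per visitor per month
needed = print_era_ads / per_visitor      # visitors required to match print revenue
print(per_visitor, round(needed / 1e6))   # 0.28, ~357 (million) -- i.e. "something like 350 million"
```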

 

“Human Curation Is Back” by Jean-Louis Gassée

Article featured in Monday Note:

“(…) The limitations of algorithmic curation of news and culture has prompted a return to the use of actual humans to select, edit, and explain. Who knows, this might spread to another less traditional media: apps.

(…)  Another type of curator, one whose role as trustee is even more crucial, is a newspaper’s editor-in-chief. (…)

With search engines, we see a different kind of curator: algorithms. Indefatigable, capable of sifting through literally unimaginable amounts of data, algorithms have been proffered as an inexpensive, comprehensive, and impartial way to curate news, music, video — essentially everything.

The inexpensive part has proved to be accurate; comprehensive and impartial less so. (…)

Certainly, algorithms can be built to perform specialized feats of intelligence such as beating a world-class chess player or winning at Jeopardy. (…) But ask a computer scientist for the meaning of meaning, for an algorithm that can extract the meaning of a sentence and you will either elicit a blank look, or an obfuscating discourse that, in fact, boils down to a set of rules, of heuristics, that yield an acceptable approximation. (…) ”

read full article

“Conspiracists Concur: Climate Change Is a Colossal Cover-Up” by Richard Martin

Article from MIT Tech Review covering a patchwork of articles on the theme.

“(…) That climate deniers are also conspiracy buffs might seem like one of those dog-bites-man findings for which social scientists are often ridiculed (“People in love do foolish things, study concludes”). But the background to this study is actually more interesting than its conclusion.

Published in the Journal of Social and Political Psychology, the new paper, “Recurrent Fury: Conspiratorial Discourse in the Blogosphere,” is based on an examination of blog comments in response to the authors’ previous paper, “Recursive Fury: Conspiracist Ideation in the Blogosphere”—itself a follow-up to their original study, “NASA Faked the Moon Landing—Therefore, (Climate) Science Is a Hoax: An Anatomy of the Motivated Rejection of Science,” published in Psychological Science in 2012. In other words, commenters responding (mostly angrily) to two studies of conspiratorial thought have accused the authors of being part of a massive conspiracy.

(…)

The British newspaper The Telegraph has helpfully compiled a list of the most widely cited climate-change theories (…) a plot against the United States, a plot against Asia, and a plot against Africa. A vast right-wing conspiracy, or a dark plot from the left.(…) climate change was dreamed up by Margaret Thatcher as part of her campaign to break the U.K. coal unions.

(…) “Science literacy promoted polarization on climate, not consensus,” writes Achenbach from National Geographic. (…)  A well-designed experiment is no match for a Weltanschauung. This is most clearly understood by Thomas Pynchon, the greatest modern novelist of paranoia. “There is something comforting—religious, if you want—about paranoia,” Pynchon wrote in Gravity’s Rainbow. The alternative is “anti-paranoia, where nothing is connected to anything, a condition not many of us can bear for long.” read full article

“Ancestry Moves Further into Consumer Genetics” by Anna Nowogrodzki

Article featured in MIT Tech Review covering a new service by Ancestry.

“Ancestry entered the field of consumer DNA analysis in 2012 with the launch of AncestryDNA, a $99 spit test that will analyze your DNA – five years after 23andMe began to offer similar DNA-testing kits.

… Ancestry has an advantage over 23andMe in that it already has millions of users’ family trees. AncestryHealth capitalizes on this: the free service will import both family tree data from Ancestry and genetic data…

…family history is often the first thing doctors ask for to assess health risks, and AncestryHealth is betting that people would rather print out that history from a free website than dredge their memories for half-forgotten details in the five minutes before their doctor’s appointment.

And Ancestry is hoping to sell that data for medical research purposes. …

…“With the blessing of the FDA and regulators, we would like to communicate with that consumer, whether that is through a physician or a genetic counselor,” says Chahine.” read full article

Genomics as a Big Data science

In “Big Data: Astronomical or Genomical?”, researchers discuss the emergence of genomics as a big data science and its consequences for techniques and methodology.

New technologies are required to meet the computational challenges, and progress in genomics will demand a concerted effort from the scientific community.

Their research compares genomics with astronomy, YouTube, and Twitter – all major sources of the flood of new data now being generated.
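
A quick calculation shows why the comparison matters (the seven-month doubling time below is one of the sequencing growth rates discussed in this literature; treat the exact figure as an illustrative assumption):

```python
# If sequencing output doubles roughly every 7 months (an assumed,
# illustrative rate), a decade implies enormous multiplicative growth.
months = 10 * 12
doublings = months / 7
print(round(doublings, 1), f"{2 ** doublings:,.0f}x")   # ~17.1 doublings, ~145,000x
```

No fixed storage or transfer infrastructure absorbs five orders of magnitude of growth without new methods, which is the case for a concerted, community-wide effort.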

“The spiritual use of an orchard or garden of fruit trees” by Ralph Austen

First published in 1653 as a companion to the book A Treatise on Fruit-trees, showing the manner of grafting, setting, pruning, and ordering of them in all respects, this book is a testament to the fact that love of trees, and recognition of the benefits of human contact with them, is far from a XYZ-generation fad.

Brains interconnected as an intranet

In “Building an organic computing device with multiple interconnected brains”, researchers Miguel Pais-Vieira, Gabriela Chiuffa, Mikhail Lebedev, Amol Yadav, and Miguel A. L. Nicolelis introduce an application of brain-to-brain interfaces.

Such interfaces receive stimuli from, and send stimuli directly to, animals’ brains – in this paper’s experiments, rats.

Applications such as studying animal social behavior and sensory phenomena, along with other insights into animal cognitive processes, are within the prospect of such – rather invasive – techniques.

Indirectly, it may be very interesting to use such neurological logs in reverse: how might our own A.I. neural systems benefit from interconnectivity?
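
As a crude software analogue of that reversal (a toy of our own, not the authors' experimental setup): several simple units repeatedly exchange state and settle on a joint estimate that is, on average, closer to the truth than a typical individual starting point.

```python
import random

# Toy "interconnected brains" sketch (our illustration, not the paper's
# setup): units exchange state each step and settle on a joint estimate.
TRUE_VALUE = 0.7
units = [TRUE_VALUE + random.gauss(0, 0.2) for _ in range(4)]
print("initial:", [round(u, 3) for u in units])

for _ in range(10):
    mean = sum(units) / len(units)
    units = [u + 0.5 * (mean - u) for u in units]   # pull toward received signals

print("final:  ", [round(u, 3) for u in units])     # a shared, averaged estimate
```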

Are we fostering A.I. that will be compassionate toward us?

Excerpts from “Friendly Artificial Intelligence: Parenthood and the Fear of Supplantation” by Chase Uy, at Ethical Technology:

“…Much of the discourse regarding the hypothetical creation of artificial intelligence often views AI as a tool for the betterment of humankind—a servant to man. (…) These papers often discuss creating an ethical being yet fail to acknowledge the ethical treatment of artificial intelligence at the hands of its creators. (…)

Superintelligence is inherently unpredictable (…) that does not mean ethicists and programmers today cannot do anything to bias the odds towards a human-friendly AI; like a child, we can teach it to behave more ethically than we do. Ben Goertzel and Joel Pitt discuss the topic of ethical AI development in their 2012 paper, “Nine Ways to Bias the Open-Source AGI Toward Friendliness.” (…)

Goertzel and Pitt propose that the AGI must have the same faculties, or modes of communication and memory types, that humans have in order to acquire ethical knowledge. These include episodic memory (the assessment of an ethical situation based on prior experience); sensorimotor memory (the understanding of another’s feelings by mirroring them); declarative memory (rational ethical judgement); procedural memory (learning to do what is right by imitation and reinforcement); attentional memory (understanding patterns in order to pay attention to ethical considerations at appropriate times); intentional memory (ethical management of one’s own goals and motivations) (Goertzel & Pitt 7).
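
For concreteness, that list of faculties can be written down as an explicit structure (the field names follow Goertzel and Pitt's list; the types and this mapping to code are our illustrative assumption):

```python
from dataclasses import dataclass, field

# Illustrative sketch: Goertzel & Pitt's six faculties as explicit stores.
@dataclass
class EthicalFaculties:
    episodic: list = field(default_factory=list)      # prior ethical situations experienced
    sensorimotor: dict = field(default_factory=dict)  # mirrored states of others
    declarative: dict = field(default_factory=dict)   # explicit rules for rational judgement
    procedural: dict = field(default_factory=dict)    # habits from imitation and reinforcement
    attentional: list = field(default_factory=list)   # patterns flagging ethically salient moments
    intentional: list = field(default_factory=list)   # the agent's own goals and motivations
```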

The idea that an AI must have some form of sensory functions and an environment to interact with is also discussed by James Hughes in his 2011 book, Robot Ethics, in the chapter, “Compassionate AI and Selfless Robots: A Buddhist Approach”. (…) This method proposes that in order for a truly compassionate AI to exist, it must go through a state of suffering and, ultimately, self-transcendence.

(…)

Isaac Asimov’s Three Laws of Robotics are often brought up in discussions about ways to constrain AI.(…). The problematic nature of these laws (…) allow for the abuse of robots, they are morally unacceptable (Anderson & Anderson 233). (…)  It does not make sense to have the goal of creating an ethically superior being while giving it less functional freedom than humans.

From an evolutionary perspective, nothing like the current ethical conundrum between human beings and AI has ever occurred. Never before has a species intentionally sought to create a superior being, let alone one which may result in the progenitor’s own demise. Yet, when viewed from a parental perspective, parents generally seek to provide their offspring with the capability to become better than they themselves are. Although the fear of supplantation has been prevalent throughout human history, it is quite obvious that acting on this fear merely delays the inevitable evolution of humanity. This is not a change to be feared, but instead to simply be accepted as inevitable. We can and should bias the odds towards friendliness in AI in order to create an ethically superior being. Regardless of whether or not the first superintelligent AI is friendly, it will drastically transform humanity as we know it.(…).”

Probabilistic Inference Techniques for Scalable Multiagent Decision Making

In a collaboration spanning Singapore, the USA, and Germany, researchers Akshat Kumar, Shlomo Zilberstein, and Marc Toussaint published “Probabilistic Inference Techniques for Scalable Multiagent Decision Making”.

This paper introduces a new class of algorithms for machine learning applied to multiagent planning, specifically in scenarios of partial observation. While Bayesian inference has been applied to such problems before, this paper advances the state of the art by establishing conditions under which it scales.
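
The “planning as inference” idea behind this line of work can be shown in miniature (a drastically simplified sketch of our own, not the authors' Dec-POMDP algorithms): treat normalized reward as the likelihood of a fictitious “success” observation, and improve a stochastic policy by EM-style reweighting.

```python
# Miniature planning-as-inference sketch (illustrative only; far simpler
# than the paper's multiagent algorithms). Reward in [0, 1] plays the
# role of P(success | action); EM reweights the policy by that likelihood.
ACTIONS = ["left", "right"]
REWARD = {"left": 0.2, "right": 0.8}
policy = {a: 0.5 for a in ACTIONS}          # start uniform

for _ in range(25):
    # E-step: posterior over actions given "success" ~ prior * likelihood
    weights = {a: policy[a] * REWARD[a] for a in ACTIONS}
    z = sum(weights.values())
    # M-step: adopt the posterior as the new policy
    policy = {a: w / z for a, w in weights.items()}

print(policy)   # probability mass concentrates on the higher-reward action
```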

Translation assisted by machine learning

“Using Machine Translation to Provide Target-Language Edit Hints in Computer Aided Translation Based on Translation Memories” is by Miquel Esplà-Gomis, Felipe Sánchez-Martínez, and Mikel L. Forcada of the Universitat d’Alacant, Spain.

Their experiments show that computer-aided translation based on translation memories can benefit from machine translation.
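
The flavor of the approach can be conveyed with a word-overlap toy (a simplification of ours; the paper's actual method works with sub-segment machine translations and learned models): machine-translate the new source segment, then flag which words of the translation-memory proposal it does not support.

```python
# Word-overlap toy of MT-derived edit hints (our simplification; the
# paper's method uses sub-segment MT rather than whole-word overlap).
def edit_hints(tm_translation, mt_of_new_source):
    """Mark TM words the MT output does not support as needing edits."""
    mt_words = set(mt_of_new_source.lower().split())
    return [(w, "keep" if w.lower() in mt_words else "CHANGE")
            for w in tm_translation.split()]

# The TM proposes the stored translation of a *similar* source segment;
# MT of the *actual* new segment hints at what no longer fits.
print(edit_hints("The red button starts the machine",
                 "the green button stops the machine"))
# -> "red" and "starts" are flagged for editing
```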

See also their previous “Using machine translation in computer-aided translation to suggest the target-side words to change“.

Interactive machine learning model

Pannaga Shivaswamy from LinkedIn and Thorsten Joachims from Cornell University published “Coactive Learning”. This paper proposes a model of machine learning through interaction with human users.

The users’ behavior provides implicit feedback that improves the system’s predictions. Their empirical studies indicate that the method benefits movie recommendation and web search.
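
The heart of coactive learning is a preference-perceptron update, sketched below on a made-up movie example (the feature vectors and the simulated user are our illustrative assumptions; the update w += φ(preferred) − φ(shown) follows the paper's core idea).

```python
import numpy as np

# Preference-perceptron sketch of coactive learning. The toy features and
# simulated user are illustrative; the update rule follows the paper.
movies = {                                   # features: [action, romance, comedy]
    "Heat":      np.array([1.0, 0.0, 0.1]),
    "Notebook":  np.array([0.0, 1.0, 0.2]),
    "Rush Hour": np.array([0.8, 0.1, 0.9]),
}
true_taste = np.array([0.2, 0.1, 1.0])       # unknown to the system

w = np.zeros(3)
for _ in range(20):
    shown = max(movies, key=lambda m: w @ movies[m])            # system's suggestion
    # Implicit feedback: the user ends up with an item they like better
    # (a click, an edit) -- simplified here to their favorite.
    preferred = max(movies, key=lambda m: true_taste @ movies[m])
    if true_taste @ movies[preferred] > true_taste @ movies[shown]:
        w += movies[preferred] - movies[shown]                  # perceptron update

print(max(movies, key=lambda m: w @ movies[m]))   # now recommends "Rush Hour"
```

The appeal of the model is that the user never labels anything explicitly; any behavior that reveals a slightly better alternative is enough to drive the update.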