Are we fostering A.I. that will be compassionate toward us?

excerpts from “Friendly Artificial Intelligence: Parenthood and the Fear of Supplantation” by Chase Uy, at Ethical Technology

“…Much of the discourse regarding the hypothetical creation of artificial intelligence often views AI as a tool for the betterment of humankind—a servant to man. (…) These papers often discuss creating an ethical being yet fail to acknowledge the ethical treatment of artificial intelligence at the hands of its creators. (…)

Superintelligence is inherently unpredictable (…) that does not mean ethicists and programmers today cannot do anything to bias the odds towards a human-friendly AI; like a child, we can teach it to behave more ethically than we do. Ben Goertzel and Joel Pitt discuss the topic of ethical AI development in their 2012 paper, “Nine Ways to Bias the Open-Source AGI Toward Friendliness.” (…)

Goertzel and Pitt propose that the AGI must have the same faculties, or modes of communication and memory types, that humans have in order to acquire ethical knowledge. These include episodic memory (the assessment of an ethical situation based on prior experience); sensorimotor memory (the understanding of another’s feelings by mirroring them); declarative memory (rational ethical judgement); procedural memory (learning to do what is right by imitation and reinforcement); attentional memory (understanding patterns in order to pay attention to ethical considerations at appropriate times); intentional memory (ethical management of one’s own goals and motivations) (Goertzel & Pitt 7).
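
As a toy illustration (not Goertzel & Pitt's design; every name below is hypothetical), those six faculties could be organized as an agent's memory stores:

```python
# A toy organization of the six faculties as an agent's memory stores.
# Everything here is a hypothetical illustration, not Goertzel & Pitt's design.
from dataclasses import dataclass, field

@dataclass
class EthicalMemory:
    episodic: list = field(default_factory=list)      # prior ethical situations and outcomes
    sensorimotor: dict = field(default_factory=dict)  # mirrored states of others ("empathy")
    declarative: dict = field(default_factory=dict)   # explicit rules for rational judgement
    procedural: dict = field(default_factory=dict)    # habits learned by imitation/reinforcement
    attentional: list = field(default_factory=list)   # cue patterns that trigger ethical attention
    intentional: list = field(default_factory=list)   # the agent's own goals and motivations

    def assess(self, situation_kind: str):
        """Recall precedents and the matching declarative rule for a situation."""
        precedents = [e for e in self.episodic if e.get("kind") == situation_kind]
        rule = self.declarative.get(situation_kind, "defer to a human")
        return {"precedents": precedents, "rule": rule}
```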

The idea that an AI must have some form of sensory functions and an environment to interact with is also discussed by James Hughes in the 2011 book Robot Ethics, in the chapter “Compassionate AI and Selfless Robots: A Buddhist Approach”. (…) This method proposes that in order for a truly compassionate AI to exist, it must go through a state of suffering and, ultimately, self-transcendence.

(…)

Isaac Asimov’s Three Laws of Robotics are often brought up in discussions about ways to constrain AI. (…) The problematic nature of these laws (…) allow for the abuse of robots; they are morally unacceptable (Anderson & Anderson 233). (…) It does not make sense to have the goal of creating an ethically superior being while giving it less functional freedom than humans.

From an evolutionary perspective, nothing like the current ethical conundrum between human beings and AI has ever occurred. Never before has a species intentionally sought to create a superior being, let alone one which may result in the progenitor’s own demise. Yet, from a parental perspective, parents generally seek to provide their offspring with the capability to become better than they themselves are. Although the fear of supplantation has been prevalent throughout human history, it is quite obvious that acting on this fear merely delays the inevitable evolution of humanity. This is not a change to be feared, but simply one to be accepted as inevitable. We can and should bias the odds towards friendliness in AI in order to create an ethically superior being. Regardless of whether or not the first superintelligent AI is friendly, it will drastically transform humanity as we know it. (…)”

Probabilistic Inference Techniques for Scalable Multiagent Decision Making

In a collaboration among researchers based in Singapore, the USA, and Germany, Akshat Kumar, Shlomo Zilberstein, and Marc Toussaint published “Probabilistic Inference Techniques for Scalable Multiagent Decision Making”.

This paper introduces a new class of algorithms that apply machine learning to multiagent planning, specifically in scenarios of partial observability. Applying Bayesian inference to planning is not unheard of; this paper advances the field by determining conditions under which such inference scales.
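
As a rough illustration of the planning-as-inference idea this line of work builds on (a minimal sketch, not the paper's algorithm), expected reward can be treated as the likelihood of an artificial binary “success” variable, and the policy improved by EM. The toy below is a one-step, single-agent problem; the paper's contribution is making this style of inference scale to multiagent, partially observable settings.

```python
# Planning as inference, toy version: expected reward is the likelihood of
# an artificial binary "success" variable; EM reweights the policy.
import numpy as np

rewards = np.array([0.2, 0.5, 0.9])   # p(success | action), one entry per action
policy = np.ones(3) / 3               # start from a uniform stochastic policy

for _ in range(50):
    # E-step: posterior over actions given that "success" was observed
    posterior = policy * rewards
    posterior /= posterior.sum()
    # M-step: the new policy is the posterior itself
    policy = posterior

print(policy.round(3))  # probability mass concentrates on the highest-reward action
```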

Translation assisted by machine learning

“Using Machine Translation to Provide Target-Language Edit Hints in Computer Aided Translation Based on Translation Memories” by Miquel Esplà-Gomis, Felipe Sánchez-Martínez, and Mikel L. Forcada from Universitat d’Alacant, Spain.

Their experiments show that translation-memory-based computer-aided translation may benefit from machine translation, which can hint at which words of a stored translation need editing.
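
A crude sketch of the idea (the authors' actual method is more refined, working with translations of sub-segments; the function and data below are hypothetical): machine-translate the new source segment, then mark each word of the fuzzy-matched target from the translation memory as “keep” or “change” depending on whether the MT output also produces it.

```python
# Hypothetical illustration: use MT output for the new source segment to
# hint which words of the translation-memory target likely need editing.
def edit_hints(tm_target: str, mt_of_new_source: str):
    """Mark each TM-target word 'keep' if the MT output also produces it,
    else 'change'. `mt_of_new_source` would come from any MT system."""
    mt_words = set(mt_of_new_source.lower().split())
    return [(w, "keep" if w.lower() in mt_words else "change")
            for w in tm_target.split()]

# The TM stored a translation of a similar sentence; MT of the new sentence
# suggests which target words to revise.
print(edit_hints("the red house is big", "the blue house is big"))
# [('the','keep'), ('red','change'), ('house','keep'), ('is','keep'), ('big','keep')]
```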

See also their previous paper, “Using machine translation in computer-aided translation to suggest the target-side words to change”.

Interactive machine learning model

Pannaga Shivaswamy from LinkedIn and Thorsten Joachims from Cornell University published “Coactive Learning”. This paper proposes a model of machine learning through interaction with human users.

User behavior serves as feedback: when the system presents a prediction, the user's response reveals a slightly improved one, which is used to update the model. Their empirical studies indicate that this method benefits movie recommendation and web search.
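
A minimal sketch of the perceptron-style update at the heart of this setup (the feature function and data below are made-up illustrations): weights move toward the user's improved object and away from the presented prediction.

```python
# Toy preference-perceptron update in the spirit of coactive learning.
# The system presents its best prediction under current weights; the user
# (implicitly) returns a slightly improved object; weights are updated
# toward the improvement. features() and the data are hypothetical.
import numpy as np

def features(x, y):
    return x * y  # joint feature vector phi(x, y); elementwise for simplicity

candidates = [np.array([1., 0., 0.]),
              np.array([0., 1., 1.]),
              np.array([1., 1., 0.])]

w = np.zeros(3)
interactions = [(np.array([1., 0., 1.]), np.array([0., 1., 1.])),
                (np.array([0., 1., 1.]), np.array([1., 1., 0.]))]

for x, y_user in interactions:  # y_user: the user's (implicitly) improved object
    y_pred = max(candidates, key=lambda y: w @ features(x, y))
    w += features(x, y_user) - features(x, y_pred)

print(w)  # weights now favor objects resembling the user's improvements
```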