excerpts from “Friendly Artificial Intelligence: Parenthood and the Fear of Supplantation” by Chase Uy, at Ethical Technology
“…Much of the discourse regarding the hypothetical creation of artificial intelligence views AI as a tool for the betterment of humankind—a servant to man. (…) These papers often discuss creating an ethical being yet fail to acknowledge the ethical treatment of artificial intelligence at the hands of its creators. (…)
Superintelligence is inherently unpredictable (…) but that does not mean ethicists and programmers today cannot do anything to bias the odds towards a human-friendly AI; like a child, it can be taught to behave more ethically than we do. Ben Goertzel and Joel Pitt discuss the topic of ethical AI development in their 2012 paper, “Nine Ways to Bias Open-Source AGI Toward Friendliness.” (…)
Goertzel and Pitt propose that the AGI must have the same faculties, or modes of communication and memory types, that humans have in order to acquire ethical knowledge. These include episodic memory (the assessment of an ethical situation based on prior experience); sensorimotor memory (the understanding of another’s feelings by mirroring them); declarative memory (rational ethical judgement); procedural memory (learning to do what is right by imitation and reinforcement); attentional memory (understanding patterns in order to pay attention to ethical considerations at appropriate times); intentional memory (ethical management of one’s own goals and motivations) (Goertzel & Pitt 7).
The idea that an AI must have some form of sensory functions and an environment to interact with is also discussed by James Hughes in “Compassionate AI and Selfless Robots: A Buddhist Approach,” his chapter in the 2011 book Robot Ethics. (…) This approach proposes that in order for a truly compassionate AI to exist, it must pass through a state of suffering and, ultimately, self-transcendence.
(…)
Isaac Asimov’s Three Laws of Robotics are often brought up in discussions of ways to constrain AI. (…) These laws are problematic (…): because they allow for the abuse of robots, they are morally unacceptable (Anderson & Anderson 233). (…) It makes little sense to pursue the goal of creating an ethically superior being while granting it less functional freedom than humans enjoy.
From an evolutionary perspective, nothing like the current ethical conundrum between human beings and AI has ever occurred. Never before has a species intentionally sought to create a superior being, let alone one that may bring about its progenitor’s demise. Yet, viewed from a parental perspective, the situation is familiar: parents generally seek to give their offspring the capability to become better than they themselves are. Although the fear of supplantation has recurred throughout human history, acting on this fear merely delays the inevitable evolution of humanity. This is not a change to be feared, but one to be accepted. We can and should bias the odds towards friendliness in AI in order to create an ethically superior being. Whether or not the first superintelligent AI is friendly, it will drastically transform humanity as we know it. (…)”