“How Artificial Intelligence Will Give Birth To Itself” by George Dvorsky

Article posted on IEET:

“When it comes to understanding the potential for artificial intelligence, it’s critical to understand that an AI might eventually be able to modify itself, and that these modifications could allow it to increase its intelligence extremely fast. (…)

As an AI becomes smarter and more capable, it will also become better at the task of improving its own cognitive functions. In turn, these modifications will kick off a cascading series of improvements, each one making the AI better at the task of improving itself. It’s an advantage that we biological humans simply don’t have.
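To make the compounding dynamic concrete, here is a minimal toy model in Python (my own illustration, not from the article; the function name, growth constant, and step count are arbitrary). Each step’s gain is proportional to the square of the current capability, so every improvement makes the next improvement larger:

    def recursive_improvement(capability=1.0, gain=0.1, steps=15):
        """Toy feedback loop: the rate of self-improvement scales with
        current capability, so every gain makes the next gain larger."""
        history = [capability]
        for _ in range(steps):
            # The smarter the system, the bigger the improvement it can
            # make to itself on the next step (super-exponential growth).
            capability += gain * capability * capability
            history.append(capability)
        return history

    for step, c in enumerate(recursive_improvement()):
        print(f"step {step:2d}: capability = {c:10.1f}")

The specific numbers mean nothing; the point is the shape of the curve, which stays almost flat for many steps and then races upward, one crude way to picture the “intelligence explosion” discussed next.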

(…) What’s more, there’s no reason to believe that an AI won’t show a sudden, huge leap in intelligence, resulting in an “intelligence explosion” (a better term for the Singularity). He draws an analogy to the expansion of the human brain and prefrontal cortex — a key threshold in intelligence that allowed us to make a profound evolutionary leap in real-world effectiveness: “we went from caves to skyscrapers in the blink of an evolutionary eye.”

(…) Code that’s capable of altering its own instructions while it’s still executing has been around for a while. Typically, it’s done to shorten the instruction path and improve performance, or simply to cut down on repetitive code. But for all intents and purposes, there are no self-aware, self-improving AI systems today.
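True self-modifying code rewrites its own machine instructions in memory, which is mostly the province of assembly language; a high-level language can only approximate the idea. As a loose sketch (my own example, not from the article; the function name and workload are invented), here is a Python function that rebinds its own name after the first call, so the expensive setup path is never executed again:

    def lookup_table():
        """Builds an expensive table once, then rewrites its own binding.

        A high-level analogue of self-modifying code: after the first
        call, the name `lookup_table` points at a trivial function, so
        the costly setup instructions are skipped on every later call.
        """
        table = {n: n * n for n in range(100_000)}  # stand-in for expensive work
        # Replace this function with one that jumps straight to the result,
        # shortening the instruction path for all subsequent calls.
        globals()["lookup_table"] = lambda: table
        return table

    first = lookup_table()   # slow path: builds the table, rewrites itself
    second = lookup_table()  # fast path: returns the cached table at once
    assert first is second

This self-replacing-function idiom only imitates the effect; the running bytecode itself is untouched, which hints at how wide the gap is between today’s self-modifying tricks and a genuinely self-improving AI.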

(…) In conjunction with this kind of research, cognitive approaches to brain emulation could also lead to human-like AI. (…)

James D. Miller, author of Singularity Rising, provided me with a list of four reasons why an AI might not be able to improve itself:

1. It might have source code that causes it not to want to modify itself.

2. The first human-equivalent AI might require massive amounts of hardware, so for a short time it would not be possible to get the extra hardware needed to modify itself.

3. The first human-equivalent AI might be a brain emulation (as suggested by Robin Hanson), and this would be as hard to modify as it is for me to modify, say, the copy of Minecraft that my son constantly uses. This might happen if we’re able to copy the brain before we really understand it. Still, you would think we could at least speed everything up.

4. If it has terminal values, it wouldn’t want to modify those values, because doing so would make it less likely to achieve them.

(…)

Which brings up a point that’s not often discussed in AI circles — the potential for AGIs to compete with other AGIs. If even a modicum of self-preservation is coded into a strong artificial intelligence (and that sense of self-preservation could arise simply from detecting an obstruction to its terminal values), it could enter into a lightning-fast arms race to ensure its ongoing existence and future freedom of action. (…) Indeed, building a safe AI will be a monumental — if not intractable — task.”

read full article