Then there was the special headline: "12 Events That Will Change Everything" (and not in the ways you think). What insights, I wondered, will these learned men and women give us about the state of AI? And then, to my shame, the question was answered. By Will Wright. Apparently, in this context he is counted as being on equal footing with engineers, computer experts, neuroscientists and logicians. He's a futurist, after all, just like them.
The following is a transcript of the latter half of the article. If I should suffer, at least I will not do so alone.
Thanks, Will. You can stop peeing any time you like.

Scientific American, June 2010 wrote:
In other words, [Will] Wright notes, self-awareness leads to self-replication leads to better machines made without humans involved. "Personally, I've always been a lot more scared of this scenario than a lot of others" in regard to the fate of humanity, he says. "This could happen in our lifetime. And once we're sharing the planet with some form of superintelligence, all bets are off."
Not everyone is so pessimistic. After all, machines follow the logic of their programming, and if this programming is done properly, [Selmer] Bringsjord says, "the machine isn't going to get some supernatural power." One area of concern, he notes, would be the introduction of enhanced machine intelligence to a weapon or fighting machine behind the scenes, where no one can keep tabs on it. Other than that, "I would say we could control the future" by responsible uses of AI, Bringsjord says.
This emergence of more intelligent AI won't come on "like an alien invasion of machines to replace us," agrees futurist and prominent author Ray Kurzweil. Machines, he says, will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans' ability to control or even understand them, he adds.
The legal implications of machines that operate outside of humanity's control are unclear, so "it's probably a good idea to think about these things," [Hod] Lipson says. Ethical rules such as the late Isaac Asimov's "three laws of robotics" - which, essentially, hold that a robot may not injure a human or allow a human to be injured - become difficult to obey once robots begin programming one another, removing human input. Asimov's laws "assume that you program the robot," Lipson says.
Others, however, wonder if people should even govern this new breed of AI. "Who says that evolution isn't supposed to go this way?" Wright asks. "Should the dinosaurs have legislated that the mammals not grow bigger and take over more of the planet?" If control turns out to be impossible, let's hope we can peaceably share the planet with our silicon-based companions.