Wednesday, December 2, 2009

Is technology capable of evolution?

When Darwin developed his theory of evolution, with its principle that the fittest survive, the notion of "artificial intelligence" was not even a whisper in the dreams of the most radical scientists of the time. Darwin and his peers were working with the natural world, the only world they knew. However, the emergence of artificial intelligence (AI) in the late 20th century brought a fresh set of goals and dreams for scientists working with intelligent computers. For at least the last five decades, people have fantasized about computers being able to carry out human tasks: to feel emotions, to empathize, and to relate to people on an emotional level. It is a thrilling prospect, but it also creates dangers that go largely unforeseen until it is too late. I plan to talk about what could happen if technology were to evolve too far, to the point where it could evolve itself.
David Linden follows Darwin's theory of evolution explicitly when discussing the evolution of the human brain in his book The Accidental Mind. Linden uses an ice cream cone as a metaphor: as time passed and the human brain developed, new lobes and components were piled on top of one another, while the less developed portions of the brain remained at the bottom but retained their usefulness. What would happen if scientists and engineers were able to create an artificial mind so complex and self-sustaining that it could evolve on its own? In effect, it would repeat the process Linden describes in The Accidental Mind, constantly building on itself to become a more intelligent, more capable organism. An example could be a robot or computer physically able to alter its own processors and computing mechanisms, ultimately raising its level of consciousness and comprehending more of its immediate environment. What if, in the distant future, a robot could teach itself to feel emotions? This is the concept that the movie I, Robot grapples with. Inspired by Isaac Asimov's book of the same name, the movie tells the story of what would happen in the future if robots, who had always served humans, were able to develop a conscience and rebel against the injustices imposed on them. The following clip is a key scene from the movie, in which a robot is being questioned for the murder of a scientist.

The clip questions a robot's capability to willingly commit murder, even when it knows murder is wrong. Sonny makes the viewer think about the choice the robot would make: would a robot follow its duty to the end, as it has been instructed, or would it be able to refuse on ethical grounds to follow through with the instructions it was given?
Philip K. Dick's story "To Serve the Master" takes evolving AI to a different level. In the story, Applequist (the protagonist) meets a robot, a kind of machine long thought to have been systematically destroyed, and begins to make repairs at the robot's request. During that time, the robot tells Applequist how the war turned out centuries ago: there was a battle between the Leisurists (those who believed in the right to own a robot as a companion and servant) and the Moralists (those who believed that owning robots makes the whole world lazy and useless), and the Moralists won, after which the robots were destroyed. Later in the story, however, we learn from a human that the war was actually between robots and humans, and that the humans defeated the robots only after several years and thousands of lives. Dick implies that the robots have evolved to the point where they are capable of manipulation and deceit, characteristics never before seen in artificial intelligence.
William Calvin argues that humanity survived by getting smarter and more aware of its environment. As Jamais Cascio summarizes: "How did we cope? By getting smarter. The neurophysiologist William Calvin argues persuasively that modern human cognition—including sophisticated language and the capacity to plan ahead—evolved in response to the demands of this long age of turbulence. According to Calvin, the reason we survived is that our brains changed to meet the challenge: we transformed the ability to target a moving animal with a thrown rock into a capability for foresight and long-term planning. In the process, we may have developed syntax and formal structure from our simple language" (Cascio, theatlantic.com).
But what if robots could develop the same emotional capacity to survive? In this lifetime, perhaps we’ll never know. But that hasn’t stopped people like Asimov and Dick from trying to figure it out.

Works Cited:

Dick, Philip K. The Philip K. Dick Reader. New York: Citadel, 1987. Print.

Cascio, Jamais. "Get Smarter." The Atlantic, July/August 2009. Web.
http://www.theatlantic.com/doc/200907/intelligence

Linden, David. The Accidental Mind. Cambridge: Belknap Press of Harvard University Press, 2008. Print.
