Are humans merely collections of molecules? Are thoughts simply sets of chemical reactions? If human nature is ultimately reduced to these barest components, what does it mean to be human?
“I think, therefore I am” — in scientific terms, this statement demonstrates existence, not so much being. Existence pertains to the fact that certain chemicals are reacting in one’s head (“I think, therefore I am a thinking being”); being pertains to the value of that fact (“I think, therefore I am a unique being”). Science, the art of reducing all matter to objective analysis, contemplates nothing more than what can be physically observed. Anything that falls outside the scope of observation either doesn’t exist (it’s entirely a product of the imagination — some scientists suggest the soul is unreal) or else it exists only in theory (yes, it likely exists, but the data does not yet prove it). Being human, as something entirely unique in the observable world, is a philosophical judgment, not a scientifically attainable fact.
Human beings are made up of molecules. So are computers. Humans respond to stimuli. So do computers. Humans store memories. So do computers. At present, the significant difference between humans and computers is that computers are so utterly primitive. Nevertheless, we anticipate an age when computers will be faster and smarter than humans. We anticipate a time when computers will be like humans… and perhaps even more human than humans.
Such a notion may seem extraordinary to the average reader, even extravagant, but scientists have long pondered the significance of artificial intelligence (AI) to humans. From the classic Terminator movies, which envision robot-computers annihilating the world, to 2001: A Space Odyssey, which portrays computers as more human than human*, people have long acknowledged that advances in artificial intelligence raise significant questions about humanness.
If being human is simply a series of chemical reactions, what would being human signify if computers could entirely imitate that series of chemical reactions, perhaps exceed them? Will humans become less than human?
Such questions might appear merely philosophical, but there is more to them. When we heard that, genetically speaking, we are closely related to chimpanzees and that the differences between humans and chimpanzees are mathematically negligible, many questioned whether there was anything special about being human. This matter held special importance among religious conservatives. (Now, scientists are exploring the possibility that it is not merely DNA that makes a creature unique, but the stuff between the DNA — in essence, perhaps we are more radically different from chimpanzees than we previously allowed.) These questions are not merely philosophical. They carry very real consequences — for both humans and computers.
For humans, there is yet another reason to doubt our significance as beings. Not only can our intellectual contribution be matched; it can be exceeded. During the Industrial Revolution, machines challenged the relevance of human labor, and, in truth, machines have largely replaced human labor. Eventually, robots will eliminate even the smallest forms of manual labor. The laborer will truly die. (So long, John Henry.) In this modern age, computers challenge the relevance of intellectual labor. Already, schools are experimenting with computer programs that grade essays — not merely spelling and grammar, but content. Will there come a time when teachers are no longer necessary? More importantly, will there come a time when humans are no longer necessary?
(Frankly, I wonder: if computers can grade essays, can they not also write them? Perhaps we don’t need students, either.)
For computers, then, there arises the question of computer rights. If computers and robots can be made such that they constitute intelligent beings, would they not also have rights? The day when computers rise to such a level of sophistication is distant, but that day is not so distant as it once seemed.
(Theologically, the rise of intelligent computers poses another set of questions. Will computers believe in a supreme being? Will computers need salvation? Will the stain of original sin be upon them?)
At some point in these musings, it probably would be appropriate to explain my principal issue. Where am I going with all this? Actually, I really don’t know, and I believe large segments of society really don’t care. Aldous Huxley envisioned a society that contented itself to be entertained (cf. Brave New World), a society that did not question the larger issues of life, nor even contemplate the existence of these questions. Today, we are actually entertaining ourselves out of relevance (simply view YouTube’s “trending” videos — there is no greater collection of banality to be found anywhere!). Even if we do begin to question these advances in technology, they will probably be fast upon us before we realize there is even a question to be asked. As with many social changes, we will probably explore them as historical realities, not current events.
However history advances, humans will ultimately re-examine questions about existence and the soul. While the scientist might not be able to examine the non-material side of human life, people will. Perhaps this will spark a resurgence of religious fervor. Perhaps humans will reassert their humanness, their distinctness — in other words, human exceptionalism. If it ever arises that computers question their significance (a sure sign of intelligence), hopefully humans will have done so beforehand.
* Nicholas Carr presents this idea in The Shallows: What the Internet Is Doing to Our Brains.
© 2013, Mark Adams. All rights reserved.