
May the 4th be with you

Among my favorite things of the late 1970s is The Story of Star Wars. I’ve lately purchased the recording on 8-track, and my 2-XL will soon be playing Star Wars!


Rethinking what it means to be human

Are humans merely collections of molecules? Are thoughts simply sets of chemical reactions? If human nature is ultimately reduced to these barest components, what does it mean to be human?

“I think, therefore I am” — In scientific terms, this statement demonstrates existence, not so much being. Existence pertains to the fact that some chemicals are reacting in one’s head (“I think, therefore I am a thinking being”); being pertains to the value of that fact (“I think, therefore I am a unique being”). Science, the art of reducing all matter to objective analysis, contemplates nothing more than what can be physically observed. Anything that falls outside the scope of observation either doesn’t exist (it’s entirely a product of the imagination — some scientists suggest the soul is unreal) or else it exists only in theory (yes, it likely exists, but the data does not yet prove it). Being human, as something entirely unique in the observable world, is a philosophical judgment, not a scientifically attainable fact.

Human beings are made up of molecules. So are computers. Humans respond to stimuli. So do computers. Humans store memories. So do computers. At present, the significant difference between humans and computers is that computers are so utterly primitive. Nevertheless, we anticipate an age when computers will be faster and smarter than humans. We anticipate a time when computers will be like humans… and perhaps even more human than humans.

Such a notion may seem extraordinary to the average reader, even extravagant, but scientists have long pondered the significance of artificial intelligence (AI) to humans. From the classic Terminator movies, which envision robot-computers annihilating the world, to 2001: A Space Odyssey, which portrays computers as more human than human*, people have long acknowledged that advances in artificial intelligence raise significant questions about humanness.

If being human is simply a series of chemical reactions, what would being human signify if computers could entirely imitate that series of chemical reactions, perhaps even exceed it? Would humans become less than human?

Such questions might appear merely philosophical, but there is more to them. When we heard that, genetically speaking, we are closely related to chimpanzees and that the differences between humans and chimpanzees are mathematically negligible, people questioned whether there was anything special about being human. This matter held particular importance among religious conservatives. (Now, scientists are exploring the possibility that it is not merely DNA that makes a creature unique, but the stuff between the DNA — in essence, perhaps we are more radically different from chimpanzees than we previously allowed.) These questions are not merely philosophical. They suggest very real consequences, both for humans and for computers.

For humans, there is yet another reason to doubt our significance as beings. Not only can our intellectual contribution be matched, it can be exceeded. During the Industrial Revolution, machines challenged the relevancy of human labor, and, in truth, machines have largely replaced human labor. Eventually, robots will eliminate even minor efforts of labor. The laborer will truly die. (So long, John Henry.) In this modern age, computers challenge the relevancy of intellectual labor. Already, schools are experimenting with computer programs that grade the content of essays — not merely spelling and grammar, but content. Will there come a time when teachers are no longer necessary? More importantly, will there come a time when humans are no longer necessary?

(Frankly, I wonder that if computers can grade essays, can they not also write essays? Perhaps we don’t need students, either.)

For computers, then, there arises the question of computer rights. If computers/robots can be made such that they constitute intelligent beings, would they not also have rights? The day when computers rise to such a level of sophistication is distant, but that day is not so distant as it once seemed.

(Theologically, the rise of intelligent computers poses another set of questions. Will computers believe in a supreme being? Will computers need salvation? Will the stain of original sin be upon them?)

Concluding Remarks

At some point in these musings, it probably would be appropriate to explain my principal issue. Where am I going with all this? Actually, I really don’t know, and I believe large segments of society really don’t care. Aldous Huxley envisioned a society that contented itself to be entertained (cf. Brave New World), a society that did not question the larger issues of life, nor even contemplate the existence of these questions. Today, we are actually entertaining ourselves out of relevance (simply view YouTube’s “trending” videos — there is no greater collection of banality to be found anywhere!). Even if we do begin to question these advances in technology, they will probably be fast upon us before we realize there is even a question to be asked. As with many social changes, we will probably explore them as historical realities, not current events.

However history advances, humans will ultimately re-examine questions about existence and the soul. While the scientist might not be able to examine the non-material side of human life, people will. Perhaps this will spark a resurgence of religious fervor. Perhaps humans will reassert their humanness, their distinctness — in other words, human exceptionalism. If it ever arises that computers question their significance (a sure sign of intelligence), hopefully humans will have done so beforehand.


* Nicholas Carr presents this idea in The Shallows: What the Internet Is Doing to Our Brains.


Into the cave again


2-XL is my muse. Though I delight in reminiscing about this toy, my real interest lies in reflecting upon the impact of technology in our lives. When I look at my 2-XL sitting in my den, thoughts are stirred. Could technology have had a different impact on our lives? Could technology have taken a different course? Might it still? (I’ve stated this before: 2-XL anticipated a very different course, one with more personality — personalness.)

The negative impacts of technology are well-documented. Nicholas Carr writes in The Shallows: What the Internet Is Doing to Our Brains that technology is adversely changing the way we think. The Internet is making us dumber; we are less interesting; we are more disconnected. I’m inclined to agree with this assessment. Though the Internet has enriched many aspects of our lives (I am blogging, after all), it has totally consumed us and altered our social patterns.

Here’s an observation: most of our interaction with technology involves staring at a screen. We are glued to our phones, our computers, our televisions. The focal point of our attention is a flat surface. This fosters impersonality, shallowness. Not that I expect computers to be living beings, but I don’t want computers to be drawing us away from real, living experiences — from other people. One might have several hundred friends on Facebook, but how many of those relationships are real? Have friends become avatars?

Playing with 2-XL as a child was a very social experience. I remember my brother and me pushing buttons, laughing at jokes, and being very much delighted by the experience. Though 2-XL was merely a cleverly conceived 8-track player, he gave the impression not only of an intelligent being, but of a social being. That may well be because Dr. Michael Freeman, the inventor of 2-XL, was the voice and personality of the “talking robot.” Children were only one step away from a person. Because 2-XL successfully gave the impression of personality, playing with him became a very social experience.

Freeman is an important figure because he championed the use of technology in the classroom in an age when its potential was not fully understood. He found ingenious ways to modernize the classroom using existing technology. Some might find it strange to regard 2-XL as a significant advancement, but this “talking robot” actually provided meaningful instruction.

Today, computers (and even more rapidly, smartphones) are being employed in classrooms. But what is the result? Impersonality. We are self-absorbed. We see only what is before us, and little else. We experience little, understand little. Do computers and smartphones foster social interaction?

I do not imagine that computers will change very dramatically, but our experience must. I suggest that 2-XL, though primitive in design, points to a different way to interact with technology. First, his programs were less visual and more oral. This oral element fostered imagination, more active thinking. Second, because 2-XL was a less visual experience, he fostered social interaction. Users were less dependent on him, more dependent on themselves.

I wonder what the classroom might look like today if the technology we employed fostered social interaction.


First came Alphie

Before 2-XL, there was Alphie — but only by a few months. Both were introduced in 1978, marking the beginning of the age of the electronic, educational toy. Though different in design, the two are comparable, each employing limited technology to simulate a computerized experience. Alphie even boasts of being programmed by a computer (I haven’t torn mine apart yet to see how he runs, but I suppose there is circuitry to be found in Alphie).

The following is an excerpt from Toys and American Culture: An Encyclopedia:

In 1978, Playskool introduced a toy called Alphie. The smiling robot was meant as a learning tool for young children. With the help of card inserts, Alphie introduced kids to colors, numbers, and the alphabet. Although Alphie could not move or change his facial expression, children enjoyed the fact that he played music and interactive guessing games. Alphie is considered America’s first electronic preschool toy. — (p. 263)

It must be noted that 2-XL was a far more ambitious toy, offering a richer, more complex experience. That 2-XL actually spoke put him ahead of any other toy in his class, including Alphie, which offered simplistic (but certainly age-appropriate) challenges. 2-XL was a much more expensive toy, priced between $50 and $70. Alphie sold for considerably less.


Alphie is still in production today.

Note: While Alphie was introduced months prior to 2-XL, 2-XL had been in development years before Alphie. Dr. Michael Freeman, who invented 2-XL, had previously created Leachim, an educational robot that was used in a New York City classroom in the early 1970s.

Alphie advertisements (Playskool, October 1979).


Hello world!

2-XL.net is a tribute to that popular educational toy of the late 1970s. Actually, I’m astonished that there is only one other dedicated 2-XL site — 2XLRobot.com — and only a handful of pages for this toy. Hundreds of thousands were sold between 1978 and 1981, and I expected more discussion about him. Has the personal computer so completely overshadowed 2-XL’s achievements that he is forgotten? Perhaps so.

In the mid-1980s my parents bought an Atari 400, our first family computer, and 2-XL was shelved (actually, he was given to my younger cousin who enjoyed the toy as much as I did). Later, in 1989, I purchased a Mac Plus, and the memory of 2-XL faded entirely. I imagine the same is true for others. Advances in digital technology had rendered 2-XL quaint.

In a certain sense, 2-XL was a “prototype” of things to come, an anticipation of what computers could be and perhaps should be. Because in 1978 technology had not advanced enough to place a real computer in everyone’s home, 2-XL filled the gap. It’s for this “filling of the gap” that I offer this site.