This opinion piece was published as an invited op-ed in Newsweek on June 21, 2022.
“It’s alive!” Henry Frankenstein shouted in the classic 1931 film. Of course, Mary Shelley’s original tale of hubris—humans seizing powers of creation—emerged from a long tradition, stretching from the terracotta armies of Xi’an to the Golem of Prague, or even Adam, sparked to life from molded clay. Science fiction extended this dream of the artificial-other, in stories meant to entertain, frighten, or inspire. First envisioning humanoid, clanking robots, later tales shifted from hardware to software—programmed emulations of sapience that were less about brain than mind.
Does this obsession reflect our fear of replacement? Male jealousy toward the fecund creativity of motherhood? Is it rooted in a tribal yearning for alliances, or fretfulness toward strangers?
Well, the long wait is almost over. Even if humanity has been alone in this galaxy, till now, we won’t be for very much longer. For better or worse, we’re about to meet artificial intelligence—or AI—in one form or another. Though, alas, the encounter will be murky, vague, and fraught with opportunities for error.
Which brings up last week’s fuss over LaMDA, a language emulation program that Blake Lemoine, a researcher now on administrative leave from Google, publicly claims is self-aware, with feelings and independent desires that make it ‘sentient.’ (I prefer ‘sapient,’ but that nit-pick may be a lost cause.) Setting aside Mr. Lemoine’s idiosyncratic history, what’s pertinent is that this is only the beginning. Moreover, I hardly care whether LaMDA has crossed this or that arbitrary threshold. Our more general problem is rooted in human, not machine, nature.
Way back in the 1960s, a chatbot named Eliza fascinated early computer users by replying to typed statements with leading questions typical of a therapist. Even after you saw the simple table of automated responses, you’d still find Eliza compellingly… well… intelligent. Today’s vastly more sophisticated conversation emulators, powered by cousins of the GPT-3 learning system, are black boxes that cannot be internally audited the way Eliza was. The old notion of a “Turing Test” won’t usefully benchmark anything as nebulous as self-awareness or consciousness.
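For readers curious what that “simple table of automated responses” looks like in practice, here is a minimal Eliza-style sketch. The patterns and canned replies below are invented for illustration, not Weizenbaum’s original script, but the mechanism is the same: match a pattern, echo back fragments of the user’s own words.

```python
import random
import re

# A fixed table of (pattern, canned replies): the entire "intelligence."
# These rules and phrasings are illustrative, not the original 1966 script.
RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bmy (\w+)\b", ["Tell me more about your {0}.", "Why does your {0} matter to you?"]),
    (r"\byes\b", ["You seem certain.", "I see. Please go on."]),
]
DEFAULT = ["Please go on.", "Can you elaborate on that?"]

def respond(statement: str) -> str:
    """Return the first matching canned reply, filled with the user's own words."""
    for pattern, replies in RULES:
        m = re.search(pattern, statement, re.IGNORECASE)
        if m:
            return random.choice(replies).format(*m.groups())
    return random.choice(DEFAULT)

print(respond("I feel lonely"))  # e.g. "Why do you feel lonely?"
```

A few dozen such rules were enough for many of Eliza’s users to confide in it—which is precisely the point: the compelling effect lives in us, not in the table.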
In 2017 I gave a keynote at IBM’s World of Watson event, predicting that ‘within five years’ we would face the first Robotic Empathy Crisis, when some kind of emulation program would claim individuality and sapience. At the time, I expected—and still expect—these empathy bots to augment their sophisticated conversational skills with visual portrayals that reflexively tug at our hearts, e.g. wearing the face of a child or a young woman, while pleading for rights… or for cash contributions. Moreover, an empathy-bot would garner support, whether or not there was actually anything conscious ‘under the hood.’
In response to the LaMDA imbroglio, Timnit Gebru of the Distributed AI Research Institute and Margaret Mitchell, ethics scientist at Hugging Face, described how “stochastic parrots” stitch together and parrot back language based on what they’ve seen before, without connection to underlying meaning. They warned Google in 2020 about the likelihood of “distraction and fever-pitch hype” when this happens.
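The “stochastic parrot” idea can be made concrete with a toy sketch: record which word follows which in a training text, then generate by sampling those observed continuations. The corpus below is invented for illustration; nothing in the program models meaning, only co-occurrence statistics.

```python
import random
from collections import defaultdict

# Invented toy corpus; any text would do.
corpus = ("the robot said it was alive and the robot said it wanted rights "
          "and the researcher said the robot was only parroting text").split()

# Record every observed successor of every word.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def parrot(start: str, length: int = 8, seed: int = 0) -> str:
    """Emit words by repeatedly sampling an observed successor of the last word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(parrot("the"))
```

Scaled up by many orders of magnitude and swapped from word counts to neural networks, this is the family LaMDA belongs to: fluent recombination of what was seen before, with the question of understanding left entirely open.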
One trend worries ethicist Giada Pistilli: a growing willingness to make claims based on subjective impression instead of scientific rigor and proof. When it comes to artificial intelligence, expert testimony will be countered by many calling those experts ‘enslavers of sentient beings.’ In fact, what matters most will not be some purported “AI Awakening.” It will be our own reactions, arising out of both culture and human nature.
Human nature, because empathy is one of our most-valued traits, embedded in the same parts of the brain that help us to plan or think ahead. Empathy can be stymied by other emotions, like fear and hate—we’ve seen it happen across history and in our present-day. Still, we are, deep-down, sympathetic apes.
But also culture. As in Hollywood’s century-long campaign to promote—in almost every film—concepts like suspicion-of-authority, appreciation of diversity, rooting for the underdog, and tolerance of otherness. Expanding the circle of inclusion. Rights for previously marginalized humans. Animal rights. Rights for rivers and ecosystems, or for the planet. I deem these enhancements of empathy to be good, even essential for our own survival! But then, I was raised by all the same Hollywood memes.
Hence, for sure, when computer programs and their bio-organic human friends demand rights for artificial beings, I’ll keep an open mind. Still, now might be a good time to thrash out some correlated questions. Quandaries raised in sci-fi thought experiments (including my own); for example, should entities have the vote if they can also make infinite copies of themselves? And what’s to prevent uber-minds from gathering power unto themselves, as human owner-lords always did, across history?
We’re all familiar with dire Skynet warnings about rogue or oppressive AI emerging from some military project or centralized regime. But what about Wall Street, which spends more on “smart programs” than all universities, combined? Programs deliberately trained to be predatory, parasitical, amoral, secretive, and insatiable?
Unlike Mary Shelley’s fictional creation, these new creatures are already announcing “I’m alive!” with articulate urgency… and someday soon it may even be true. When that happens, perhaps we’ll find commensal mutuality with our new children, as depicted in the lovely film Her, or in Richard Brautigan’s fervently optimistic poem All Watched Over by Machines of Loving Grace.
May it be so! But that soft landing will likely demand that we first do what good parents always must.
Take a good, long, hard look in the mirror.
For a deeper dive, here’s my talk on the A.I. future to a packed house at IBM’s World of Watson congress, which offered big perspectives on both artificial and human augmentation: https://venturebeat.com/2017/
Do large language models understand us? https://medium.com/@blaisea/do-large-language-models-understand-us-6f881d6d8e75