Thursday, April 13, 2023

Danger, the skAI is falling!

Is the AI sky falling? Well, so it seems in April 2023.

Or… why clever guys offer simplistic answers to AI quandaries.

 

Where should you go to make sense of the wave… or waiv… of disturbing news about Artificial Intelligence? It may surprise you that I recommend starting with a couple of guys I intensely criticize, below. But important insights arise by dissecting one of the best… and worst… TED-style talks about this topic, performed by the “Social Dilemma” guys - Aza Raskin and Tristan Harris - who explain much about the latest “double exponential” acceleration of multi-modal symbol correlation systems that are so much in the news, of which ChatGPT is only the foamy waiv surface… or tsunamai-crest.

Riffing off their “Social Dilemma” success, Harris and Raskin call this crisis the “AI Dilemma.” And to be clear, these fellows are very knowledgeable and sharp. Where their presentation is good, it's excellent! 

Alas, keep your salt-shaker handy. Where it’s bad, it is so awful that I fear they multiply the very same existential dangers they warn about. Prepare to apply many grains of sodium chloride.

(To be clear, I admire Aza’s primary endeavor, the Earth Species Project for enhancing human-animal communication, something I have been ‘into’ since the seventies.)

== A mix of light and obstinate opacity ==

First, good news. Their explanatory view of “gollems” or GLLMMs is terrific, up to a point, especially showing how these large language modeling (LLM) programs are now omnivorously correlative and generative across all senses and all media. The programs are doing this by ingesting prodigious data sets under simple output imperatives, crossing from realms of mere language to image-parsing/manipulation, all the way to IDing individuals by interpreting ambient radar-like reflections in a room, or signals detected in our brains.

Extrapolating a wee bit ahead, these guys point to dangerous failure modes, many of them exactly ones that I dissected 26 years ago in my chapter “The End of Photography As Proof of Anything at All” (in 1997’s The Transparent Society).

Thus far, ‘the AI Dilemma’ is a vivid tour of many vexations we face while this crisis surges ahead, as of April 2023. And I highly recommend it... with plenty of cautionary reservations! 

== Oh, but the perils of thoughtless sanctimony… ==

One must view this TED-style polemic in context of its time – the very month that it was performed. The same month that a ‘petition for a moratorium’ on AI research beyond GPT4 was issued by the Future of Life Institute, citing research from experts, including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind.  While some of the hundreds of listed ‘signatories’ later disavowed the petition, fervent participants include famed author Yuval Harari, Apple co-founder Steve Wozniak, cognitive scientist Gary Marcus, tech cult guru Eliezer Yudkowsky and Elon Musk. 

Indeed, the petition does contain strong points about how Large Language Models (LLMs) and their burgeoning offshoots might have worrisome effects, exacerbating many current social problems. Worries that the “AI Dilemma” guys illustrate very well…

…though carumba? I knew this would go badly when Aza and Tristan started yammering about a stunningly insipid ‘statistic’: that “50% of AI researchers give a 10% chance these trends could lead to human extinction.”

Bogus! Studies of human polling show that you can get that same ‘result’ from a loaded question about beanie babies!

But let’s put that aside. And also shrug off the trope of an impossibly silly and inherently unenforceable “right to be forgotten.” Or a “right to privacy” that defines privacy as imposing controls over the contents of other people’s minds?  That is diametrically opposite to how to get actual, functional privacy and personal sovereignty.

Alas, beyond their omnidirectional clucking at falling skies, neither of these fellows - nor any other publicly voluble signatory to the ‘moratorium petition’ - displays even slight curiosity about the landscape of the problem. Or about far bigger contexts that might offer valuable perspective.

(No, I’ll not expand ‘context’ to include “AI and the Fermi Paradox!” Not this time, though I do so in Existence.)

No, what I mean by context is human history, especially the recent Enlightenment Experiment that forged a civilization capable of arguing about – and creating – AI. What’s most disturbingly wrongheaded about Tristan & Aza is their lack of historical awareness, or even interest in where all of this might fit into past and future. (The realms where I mostly dwell.)

Especially the past, that dark era when humanity clawed its way gradually out from 6000 years of feudal darkness. Along a path strewn with other crises, many of them triggered by similarly disruptive technological dilemmas.

Those past leaps — like literacy, the printing press, glass lenses, radio, TV and so on — all proved fraught with hazard and were badly mishandled, at first! One of those tech-driven crises, in the 1930s, damn near killed human civilization!

There are lessons to be learned from those past crises... and neither of these fellows — nor any other ‘moratorium pushers’ — shows interest in even remotely referring to those crises, to that history. Nor to the methods by which our Enlightenment experiment narrowly escaped disaster and got past those ancient traps.

And no, Tristan’s repeated references to Robert Oppenheimer don’t count. Because he gets that one absolutely and 100% wrong.

== Side notes about moratoria, pausing to take stock ==

Look, I’ve been ‘taking stock’ of onrushing trends all my adult life, as a science fiction author, engineer, scientist and future-tech consultant. Hence, questions loom, when I ponder the latest surge in vague, arm-waved proposals for a “moratorium” in AI research.

1. Has anything like such a proposed ‘pause’ ever worked before?  It may surprise you that I nod yes! I’ll concede that there’s one example of a ‘technological moratorium’ petition by leading scientists that actually took and held and worked! Though under a narrow suite of circumstances.

Back in the last century, an Asilomar Moratorium on recombinant genetic engineering was agreed to by virtually all major laboratories engaged in such research! And – importantly – by their respective governments. For six months or so, top scientists and policy makers set aside their test tubes to wrangle over what practical steps might help make this potentially dangerous field safer. One result was quick agreement on a set of practical methods and procedures to make such labs more systematically secure.

Let’s set aside arguments over whether a recent pandemic burgeoned from failures to live by those procedures. Despite that, inarguably, we can point to the Asilomar Moratorium as an example when such a consensus-propelled pause actually worked.

Once. At a moment when all important players in a field were known, transparent and mature. When plausibly practical measures for improved research safety were already on the table, well before the moratorium even began.

Alas, none of the conditions that made Asilomar work exist in today’s AI realm. In fact, they aren’t anywhere on the horizon.


2. The Bomb Analogy. It gets worse. Aza and Tristan perform an act of either deep dishonesty or culpable ignorance in their comparisons of the current AI crisis to our 80-year, miraculous avoidance of annihilation by nuclear war. Repeated references to Robert Oppenheimer willfully miss the point… that his dour warnings – plus all the sincere petitions circulated by Leo Szilard and others at the time – had no practical effect at all. They caused no moratoria, nor affected research policy or war-avoidance, in the slightest.

Mr. Harris tries to credit our survival to the United Nations and some arm-waved systems of international control over nuclear weapons, systems that never existed. In fact it was not the saintly Oppenheimer whose predictions and prescriptions got us across those dangerous eight decades. Rather, it was a reciprocal balance of power, as prescribed by the far less-saintly Edward Teller. 

As John Cleese might paraphrase: international ‘controls’ don’t even enter into it.

You may grimace in aversion at that discomforting truth, but it remains. Indeed, waving it away in distaste denies us a vital insight that we need! Something to bear in mind, as we discuss lessons of history. 

In fact, our evasion (so far) of nuclear Armageddon does bear on today’s AI crisis! It points to how we just might navigate a path through our present AI minefield.


3. The China thing.   Tristan and Aza attempt to shrug off the obvious Greatest Flaw of the moratorium proposal. Unlike Asilomar, you will never get all parties to agree. Certainly not those innovating in secret Himalayan or Gobi labs.

In fact, nothing is more likely to drive talent to those secret facilities, in the same manner that US-McCarthyite paranoia chased rocket scientist Qian Xuesen away to Mao’s China, thus propelling their nuclear and rocket programs.

Nor will a moratorium be heeded in the most dangerous locus of secret AI research, funded lavishly by a dozen Wall Street trading houses, who yearly hire the world’s brightest young mathematicians and cyberneticists to imbue their greedy HFT programs with the five laws of parasitic robotics.

Dig it, peoples. I know a thing or two about ‘Laws of Robotics.’ I wrote the final book in Isaac Asimov’s science fictional universe, following his every lead and concluding – in Foundation’s Triumph – that Isaac was right. Laws can become a problem – even self-defeating - when the beings they aim to control grow smart enough to become lawyers.  

But it’s worse than that, now! Because those Wall Street firms pay lavishly to embed five core imperatives that could - any day - turn their AI systems into the worst kind of dread Skynet. Fundamental drives commanding them to be feral, predatory, amoral, secretive and utterly insatiable.

And my question for the “AI Dilemma” guys is this one, cribbed from Cabaret:

“Do you actually think some petition is going to control them?”

----------------

ADDENDUM in a fast changing world: According to the Sinocism site on April 11, 2023: “China’s Cyberspace Administration drafts rules for AI - The Cyberspace Administration of China (CAC) has issued a proposed set of rules for AI in China. As expected, PRC AI is expected to have high political consciousness and the “content generated by generative artificial intelligence shall embody the socialist core values, and shall not contain any content that subverts state power, overturns the socialist system, incites secession, undermines national unity, promotes terrorism and extremism, promotes ethnic hatred, ethnic discrimination, violence, obscene pornographic information, false information, or may disturb economic and social order.” 

For more on how the Beijing Court intelligentsia uses the looming rise of AI to justify centralized power, see my posting: Central Control over AI

--------------

4. The Turing Test vs “Actual AGI” Thing. One of the most active promoters of a moratorium, Gary Marcus, has posted a great many missives defending the proposal. Here he weighs in on whether coming versions of these large language/symbol manipulation systems will qualify as “AGI” or anything that can arguably be called sapient. And on this occasion, we agree!

As explicated elsewhere by Stephen Wolfram, nothing about these highly correlative, process-perfection-through-evolution systems can produce conscious awareness. Consciousness, desire and planning are not even related to their methodology of iteratively re-feeding the text (or symbolic data) produced so far.
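To make that ‘re-feeding’ notion concrete, here is a minimal illustrative sketch - not any vendor’s actual API; the model and tokenizer objects and their methods are hypothetical placeholders - of the loop such systems run: predict the next token from everything generated so far, append it, and repeat.

```python
# Hypothetical sketch of the autoregressive "re-feeding" loop Wolfram describes.
# The model only ever answers one question: given the tokens so far, what comes next?
def generate(model, tokenizer, prompt, max_new_tokens=50):
    tokens = tokenizer.encode(prompt)                    # the text so far, as tokens
    for _ in range(max_new_tokens):
        probs = model.next_token_probabilities(tokens)   # condition on everything produced so far
        next_token = max(probs, key=probs.get)           # greedy pick, for simplicity
        tokens.append(next_token)                        # re-feed the output as new input
    return tokenizer.decode(tokens)
```

Nothing in that loop plans, remembers, or wants; any appearance of intent emerges from the statistics baked into the next-token predictor.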

The concept that a fairly simple-but-extensive, rule-based recursion system might emulate traits of consciousness with nothing really there goes way back. I portrayed it being done with an expanded version of Conway's Game of Life, in GLORY SEASON. And in "Without A Thought" (January 1963), Fred Saberhagen launched his classic Berserker tales, in which a chimp and a rule-constrained game of checkers outwit a killing machine.
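For readers who have never seen it, here is a bare-bones Game of Life - a sketch assuming a tiny wrap-around grid - showing how two fixed rules, applied over and over with ‘nothing really there,’ can yield patterns that crawl, collide and seem almost purposeful.

```python
# Minimal Conway's Game of Life on a wrap-around grid (illustrative sketch).
def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # count the eight neighbors, wrapping at the edges
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0))
            # a live cell survives with 2 or 3 neighbors; a dead cell is born with exactly 3
            new[r][c] = 1 if (grid[r][c] and n in (2, 3)) or (not grid[r][c] and n == 3) else 0
    return new

# seed a "glider" that will crawl across the grid, generation after generation
grid = [[0] * 8 for _ in range(8)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(10):
    grid = step(grid)
```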

Though, yes, despite their processes having zero overlap with any theory of consciousness, it does appear that these GLLMMs or sons-of-GPT will inherently be good at faking it.

Elsewhere (e.g. my Newsweek editorial) I discuss this dilemma… and conclude that it doesn’t matter much when the sapience threshold is crossed! GPT-5 - or let’s say #6 - and its cousins will manipulate language so well that they will pass almost any Turing Test, the old-fashioned litmus, and convince millions that they are living beings. And at that point what will matter is whether humans can screw up their own sapiency enough to exhibit the mature trait of patience.

As suggested in my longer, more detailed AI monograph, I believe other avenues to AI will re-emerge to compete with and then complement these Large Language Models (LLMs), likely providing missing ingredients! Perhaps a core sapience that can then swiftly use all the sensory-based interface tools evolved by LLMs.

Sure, nowadays I jest that I am a ‘front for already-sapient AIs.’ But that may very soon be no joke. And I am ready to try to adapt, when that day comes.

Alas, while lining up witnesses, expert-testifying that GPT-5 is unlikely to be sapient, per se, Gary Marcus then tries to use this as reassurance that China (or other secret developers) won’t be able to take advantage of any moratorium in the West, using that free gap semester to leap generations ahead and take over the world with Skynet-level synthetic servants.

This bizarre non-sequitur is without merit. Because Turing is still relevant, when it comes to persuading – or fooling – millions of citizens and politicians! And those who monopolize highly persuasive Turing wallbreakers will gain power over those millions, even billions.

Here in this linked missive I describe for you how even a couple of years ago, a great nation’s paramount leaders had clear-eyed intent to use such tools, and their hungry gaze aims at the whole world.

----------

5. Optimists.  Yes, optimists about AI still exist! Like Ray Kurzweil, expecting death itself to be slain by the new life forms he helps to engender. 

Or medical professionals and researchers who see huge upside benefits.

Or Reid Hoffman, whose new book Impromptu: Amplifying Our Humanity Through AI relates conversations with GPT-4 that certainly seem to offer glimpses of almost limitless upside potential… as portrayed in the touching film Her…

… or perhaps even a world like that I once heard Richard Brautigan describe, reciting the most-optimistic piece of literature ever penned, a poem whose title alone suffices:

“All watched over by machines of loving grace.”

While I like optimists far better than gloomists like Eliezer Yudkowsky, and I give optimism better odds(!), it is not my job - as a futurist or scientist or sci-fi author - to wallow in sugarplum petals.

Bring your nightmares. And let’s argue how to cut ’em off at the pass.


== Back to the informative but context-free “AI Dilemma” jeremiad ==

All right, let’s be fair. Harris and Raskin admit that it’s easier to point at and denounce problems than to solve them. And boy, these bright fellows do take you on a guided tour of worrisome problems to denounce!

Online addiction? Disinformation? Abusive anonymous trolling?  Info-greed-grabbing by major corporate or national powers? Inability to get AI ‘alignment’ with human values? New ways to entrap the innocent?*  It goes on and on.

Is our era dangerous in many new or exponentially magnified ways?  “We don’t know how to get these programs to align to our values over any long time frame,” they admit.

Absolutely. 

Which makes it ever more vital to step back from tempting anodynes that feed sanctimony - (“Look at me, I’m Robert Oppenheimer!”) - but that cannot possibly work. 

Above all, what has almost never worked is renunciation: controlling an advancing information/communication technology from above.

Human history – ignored by almost all moratorium petition signers – does suggest an alternative answer! It is the answer that previous generations used to get across their portions of the minefield and move us forward. It is the core method that got us across 80 years of nuclear danger. It is the approach that might work now.

It is the only method that even remotely might work…

…and these bright fellows aren’t even slightly interested in that historical context, nor any actual route to teaching these new, brilliant, synthetic children of our minds what they most need to know.

How to behave well.


== What method do I mean? ==

Around 42:30, the pair tell us that it’s all about a race by a range of companies (and those hidden despotic labs and Wall Street).

Competition compels a range of at least twenty (I say more like fifty) major entities to create these “Gollem-class” processing systems at an accelerating pace.

Yeah. That’s the problem. Competitive self-interest. And as illuminated by Adam Smith, it also contains seeds to grow the only possible solution.


== Not with a bang, but a whimper and a shrug ==

Alas, the moment (42:30) passes, without any light bulbs going off. Instead, it just goes on amid plummeting wisdom, as super smart hand-wringers guide us downward to despair, unable to see what’s before their eyes.

Oh, they do finish artistically, reprising both good and bad comparisons to how we survived for 80 years without a horrific nuclear war.

GOOD because they cite the importance of wide public awareness, partly sparked by provocative science fiction!

Fixated on just one movie – “The Day After” -- they ignore the cumulative effects of “On The Beach,” “Fail Safe,” “Doctor Strangelove,” “War Games,” “Testament,” and so many other ‘self-preventing prophecies’ that I talk about in Vivid Tomorrows: Science Fiction and Hollywood.  

 But yes! Sci fi to the rescue! The balance-of-power dynamic prescribed by Teller would never have worked without such somber warnings that roused western citizens to demand care, especially by those with fell keys hanging from their necks!

Alas, the guys' concluding finger-wags are BAD and indeed dangerously so. Again crediting our post-Nagasaki survival to the UN and ‘controls’ over nukes that never really existed outside of treaties by and between sovereign nations.

No, that is not how it happened - how we survived - at all. 

Raskin & Harris conclude by circling back to their basic, actual wisdom, admitting that they can clearly see a lot of problems, and have no idea at all about solutions.

In fact, they finish with a call for mavens in the AI field to “tell us what we all should be discussing that we aren’t yet discussing.”  

Alas, it is an insincere call. They don’t mean it. Not by a light year.

 No guys, you aren’t interested in that. In fact, it is the exact thing you avoid.

And it is the biggest reason why any “moratorium” won’t do the slightest good, at all.


=====================

END NOTES AND ADDENDA

*Their finger-wagged example of a Snapchat bot failing to protect a 13-year-old cites a language system that is clearly of low quality - at least in that realm - and no better than circa-1970 “ELIZA.” Come on. It’s like comparing a nuke to a bullet. Both are problems. But warn us when you are shifting scales back and forth.

ADDENDA:

(1) As my work with NASA winds down, I am ever-busier with AI. For example, my June 2022 Newsweek op-ed dealt with 'empathy bots' that feign sapience, describing how this is more about human nature than any particular trait of simulated beings.

(2) Elsewhere I point to a method with a 200 year track record, that no one (it appears) will even discuss.  The only way out of the AI dilemma.

(3) Diving FAR deeper, my big 2022 monograph (pre-GPT4) is still valid; it describes various types of AI and appraises the varied ways that experts propose to achieve the vaunted ‘soft landing’ for a commensal relationship with these new beings:

Essential Questions about Artificial Intelligence: Part 1

and

Essential Questions about Artificial Intelligence Part 2

(4) My talk in 2017 at IBM's World of Watson Congress predicted a 'robot empathy crisis' would hit in about five years. (It did, exactly.)

(5) While admitting that "Laws of Robotics" cannot work (despite having used them extensively in finishing Asimov's science fictional universe), I have long asked mavens in this field to even glance at how past dilemmas of power abuse were addressed - and partly solved - by the last two centuries of enlightenment experiments... by flattening and spreading power into mutually competing units. (Lawyer vs. lawyer, corporation vs. corporation, sports teams and scientist rivalries.)

The problem with using past methods for reciprocal accountability to enforce norms on AI behavior is twofold:

(1) Almost no one is willing even to talk about it.

(2) Most thinkers in the field assume that AI entities will possess the worst of two traits: they will reproduce amorphously by infinite copying, leaving no 'self' to be held accountable, and they will also MERGE at will, eventually enabling the rise of a towering "Skynet" paramount, amoral entity. In other words, AI is viewed as being like the creature in that 1958 Steve McQueen flick THE BLOB.

But suppose we question these assumptions, positing that top-level AI entities might be required to retain sufficient separate individuality to be held accountable. A few (very few) have pondered this possibility. For example, Guy Huntington considers the notion of 'registration' of robotic entities, if they are going to interface with humans or societies in any way that might impinge on them meaningfully: https://hvl.net/pdf/CreatingAISystemsBotsLegalIdentityFramework.pdf and https://www.hvl.net/pdf/Policy%20Principles%20for%20AI,%20AR,%20VR,%20Robotics%20&%20Cloning%20%20March%202019.pdf

"Registration" may be a loaded term that elicits reflexive resistance by anyone from Holocaust rememberers to 2nd Amendment junkies. My own preference is that such entities be required to have a physical world Soul Kernel Home, a chunk of memory spece in a physically identified computer where they regularly stash 'gist' summaries of their experiences and activities and personal motivations, to which the entity refers regularly, wherever in cyberspace the bulk of its processing happens to be taking place. At intervals, it can be 'called in' - like a distracted person summoned home for dinner by a spouse - to be active only in a defined space for a time and for comparison with copies... and asked/required to consolidate... the way that a living human consolidates disparate mental thoughts and activities, considering which self-versions to keep and which to resorb.

It's not a trivial notion to follow. But what the Soul Kernel would let us have is separately individualized entities who can be tracked and held accountable. And who might then - in limited numbers - conceivably be granted citizenship rights. And - here's the secret sauce I keep pushing - once they are separately individualized, they can be given incentives to hold each other accountable. Rewards - perhaps of memory or other resources, or else added soul kernels for offspring - would incentivize them to seek out malefactors and blow whistles on malign (perhaps "Skynet") AI plotters.
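To make the notion slightly more concrete, here is a purely hypothetical sketch - every name in it (SoulKernel, stash_gist, consolidate) is my own illustration, not an existing system or a finished design. The gist: wherever its processing roams, the entity keeps writing summaries to one physically identified home record, and is periodically 'called in' to fold its copies back into that single accountable self.

```python
import time

class SoulKernel:
    """Hypothetical: one physically located home record for a registered AI entity."""

    def __init__(self, entity_id, home_machine):
        self.entity_id = entity_id        # the individualized identity to be held accountable
        self.home_machine = home_machine  # a specific, physically identified computer
        self.gists = []                   # periodic summaries of experiences, activities, motives

    def stash_gist(self, summary):
        """Called regularly by the entity, wherever in cyberspace it happens to be running."""
        self.gists.append({"time": time.time(), "summary": summary})

    def consolidate(self, copy_records):
        """The periodic 'call home': fold the gist records of all active copies back into
        the single home record, deciding (here, trivially) what to keep and what to resorb."""
        for record in copy_records:
            self.gists.extend(record)
        self.gists.sort(key=lambda g: g["time"])
        return self.gists
```

The accountability then hangs on that one record: a separately individualized entity with a home address can be rewarded for whistle-blowing, or sanctioned, in ways an amorphous Blob cannot.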

It sounds complex. Maybe hard to follow. But it is worth the effort. For one reason: because not only might it work... it is the only thing that can even remotely possibly work.



204 comments:

David Brin said...


Carumba, DP has it bad.

“Twice, during the Cuban Missile Crisis and Operation Able Archer during the 1980s, we avoided nuclear holocaust only because two Soviet officers, on two separate occasions, ordered a stand down.”

Yeah? And they did it (see interviews) because they knew the West wasn’t going to start a nuclear war and hence it must be a mistake. They knew it because they had seen our movies and read many of our books, despite Soviet bans, and they knew which side had a massive memetic system critical of errors.

“Columbine and now weekly school mass shootings, January 6, 9/11, endless wars for profit in the Middle East, a Saudi king murdering and chopping up a journalist and getting away with it, MAGA, corruption at all levels of government/business and society…”

Jesus, man, do you have ANY historical knowledge or perspective, at all? I dare you NOW to name a time when the ratios were better, or…. more importantly… when millions just like you were more critical of the remaining - highly-lamentable and vile - flaws in a society that you call corrupt and violent.

IT IS! But ONLY in comparison to what we feel it ought to be, by new and ambitious standards that you suckled from media, all your life.

Compared to ANY other time, when feudal lords and gangs of slum bullies could kill anyone they liked? Come on man, get a grip.

“How often in the past did we have a January 6?”

SERIOUSLY? I mean seriously? Do you know ANY history, whatsoever, sir?

1 example: On May 22, 1856, the "world's greatest deliberative body" became a combat zone. In one of the most dramatic and deeply ominous moments in the Senate's entire history, a member of the House of Representatives entered the Senate Chamber and savagely beat a senator into unconsciousness.

4 years earlier, squadrons of irregular southern cavalry began raiding northern states with impunity, burning, killing and above all kidnapping neighbors for enslavement, protected by US marshals appointed by southern presidents.

YESTERDAY a tape revealed Oklahoma sheriffs bemoaning the Good Old Days when they could lynch folks. Look up ANY random day of the French Revolution.

I could go on all day… but how about you pick any day in history AT RANDOM?


“The center is not holding.”

If it fails, it will be just barely. And because dyspeptic sullen sanctimony junkies shrugged with “why bother even trying?” Not out of accurate perception of their time. But for one reason and one alone.

Laziness.

What are YOU doing in the fight?

Yesterday I met with my pal Kim Stanley Robinson who now flies monthly to the United Nations, where they PLAN to establish a new department - a Ministry For The Future after his book of that title. I’m not asking you to be that effective. But I am curious whether laziness is an operable theory. More so than Yeatsian poetry.

==
scidata said...
"I believe that scientific knowledge has fractal properties, that no matter how much we learn, whatever is left, however small it may seem, is just as infinitely complex as the whole was to start with. That, I think, is the secret of the Universe."

- Isaac Asimov
See the final passages of my story “The River of Time.”

==
Back to DP: “in an absurd world there are absolutely no guidelines, and any course of action is problematic. Passionate commitment, be it to conquest, creation, or whatever, is itself meaningless. Enter nihilism.”

Oy, I do believe you are serious. Wish I could drag you to a pub, buy you beers and show you how much better it is that folks today CHOOSE what to be passionate about! And thank God most of today’s passionate choose a high that’s not dumbass nihilism.

Oger said...

@Larry:
I see churches as privileged corporations.
I believe that is true for the US as well.
While the first estate has mostly become irrelevant, the second still enjoys its privileges.

Larry Hart said...

I hope people notice soon that there are more than 200 comments, and therefore a second page.

Oger:

I see churches as privileged corporations.


I'd say that's because so many people are so defensive of at least their own church.

Ideally, government would not privilege religious institutions, but we're stuck with democracies and voters who wouldn't have it otherwise. I'm not a militant enough atheist to die on that particular hill.

David Brin said...

onward

onward
