Sunday, August 16, 2009

The Real Way to Feel Safe with Artificial Intelligence

Sorry to have posted so little, of late.  We have been ensnared by a huge and complex Eagle Scout Project here... plus another kid making Black Belt, and yet another at Screenwriting camp... then the first one showing me endless online photos of "cars it would be cool to buy..."

And so, clearing my deck of topics to rant about, I'd like to post quickly this rumination on giving rights to artificial intelligences.  Bruce Sterling has lately raised this perennial issue, as did Mike Treder in an excellent piece suggesting that our initial attitudes toward such creatures may color the entire outcome of a purported "technological singularity."

==The Real Reason to Ensure AI Rights==

No issue is of greater importance than ensuring that our new, quasi-intelligent creations are raised properly.  To oversimplify terribly: Hollywood visions of future machine intelligence range from TERMINATOR-like madness to the admirable traits portrayed in movies like AI or BICENTENNIAL MAN.

I've spoken elsewhere of one great irony -- that there is nothing new about this endeavor.  That every human generation embarks upon a similar exercise -- creating new entities that start out less intelligent and virtually helpless, but gradually transform into beings that are stronger, more capable, and sometimes more brilliant than their parents can imagine.

The difference between this older style of parenthood and the New Creation is not only that we are attempting to do all of the design de novo, with very little help from nature or evolution, but also that the pace is speeding up. It may even accelerate, once semi-intelligent computers assist in fashioning new and better successors.

Humanity is used to the older method, in which each next generation reliably includes many who rise up, better than their ancestors... while many others sink lower, even into depravity.  It all sort of balanced out (amid great pain), but henceforth we cannot afford such haphazard ratios,  from either our traditional-organic heirs or their cybernetic creche-mates.

I agree that our near-future politics and social norms will powerfully affect what kind of "singularity" transformation we'll get -- ranging from the dismal fears of Bill Joy and Ted Kaczynski to the fizzing fantasies of Ray Kurzweil.  But first, let me say it's not the surface politics of our useless, almost-meaningless so-called Left-vs-Right axis. Nor will it be primarily a matter of allocation of taxed resources. Except for investments in science and education and infrastructure, those are not where the main action will be.  They will not determine the difference between "good" and "bad" transcendence.  Between THE MATRIX and, say, FOUNDATION'S TRIUMPH.

No, what I figure will be the determining issue is this.  Shall we maintain momentum and fealty to the underlying concepts of the Western Enlightenment? Concepts that run even deeper than democracy or the principle of equal rights, because they form the underlying, pragmatic basis for our entire renaissance.

==Going With What Has Already Worked==

These are, I believe, the pillars of our civilization -- the reasons that we have accomplished so much more than any other, and why we may even succeed in doing it right, when we create Neo-Humanity.

1.  We acknowledge that individual human beings  -- and also, presumably, the expected caste of neo-humans -- are inherently flawed in their subjectively biased views of the world.

In other words...  we are all delusional! Even the very best of us.  Even (despite all their protestations to the contrary) all leaders.  And even (especially) those of you out there who believe that you have it all sussed.

This is crucial. Six thousand years of history show this to be the one towering fact of human nature.  Our combination of delusion and denial is the core predicament that stymied our creative, problem-solving abilities, delaying the great flowering that we're now part of.

These dismal traits still erupt everywhere, in all of us.  Moreover, it is especially important to assume that delusion and denial will arise, inevitably, in the new intelligent entities that we're about to create.  If we are wise parents, we will teach them to say what all good scientists are schooled to say, repeatedly: "I might be mistaken."  But that, alone, is not enough.

2.  There is a solution to this curse, but it is not at all the one recommended by Plato, or any of the other great sages of the past.

Oh, they knew all about the delusion problem, of course.  See Plato's "allegory of the cave," or the sayings of Buddha, or any of a myriad other sage critiques of fallible human subjectivity.  These savants were correct to point at the core problem... only then, each of them claimed that it could be solved by following their exact prescription for Right Thinking. And followers bought in, reciting or following the incantations and flattering themselves that they had a path that freed them of error.

Painfully, at great cost, we have learned that there is no such prescription. Alack, the net sum of "wisdom" that those prophets all offered only wound up fostering even more delusion.  It turns out that nothing -- no method or palliative applied by a single human mind, upon itself -- will ever accomplish the objective.

Oh, sure, logic and reason and sound habits of scientifically-informed self-doubt can help a lot.  They may cut the error rate in half, or even by a factor of a hundred!  Nevertheless, you and I are still delusional twits.  We always will be!  It is inherent.  Live with it.  Our ancestors had to live with the consequences of this inherent human curse.

Ah, but things turned out not to be hopeless, after all!  For, eventually, the Enlightenment offered a completely different way to deal with this perennial dilemma.  We (and presumably our neo-human creations) can be forced to notice, acknowledge, and sometimes even correct our favorite delusions, through one trick that lies at the heart of every Enlightenment innovation -- the processes called Reciprocal Accountability (RA).

In order to overcome denial and delusion, the Enlightenment tried something unprecedented -- doing without the gurus and sages and kings and priests.  Instead, it nurtured competitive systems in markets, democracy, science and courts, through which back and forth criticism is encouraged to flow, detecting many errors and allowing many innovations to improve.  Oh, competition isn't everything! Cooperation and generosity and ideals are clearly important parts of the process, too. But ingrained reciprocality of criticism -- inescapable by any leader -- is the core innovation.


3.  These systems -- including the "checks and balances" exemplified in the U.S. Constitution -- help to prevent the sole-sourcing of power: not only by old-fashioned human tyrants, but also the kind of oppression that we all fear might happen if the Singularity were to run away, controlled by just one or a few mega-machine-minds. The nightmare scenarios portrayed in The Matrix, Terminator, or the Asimov universe.

==The Way to Ensure AI is Both Sane and Wise==

How can we ever feel safe, in a near future dominated by powerful artificial intelligences that far outstrip our own? What force or power could possibly keep such a being, or beings, accountable?

Um, by now, isn't it obvious?

The most reassuring thing that could happen would be for us mere legacy/organic humans to peer upward and see a great diversity of mega minds, contending with each other, politely, and under civil rules, but vigorously nonetheless, holding each other to account and ensuring everything is above-board.

This outcome -- almost never portrayed in fiction -- would strike us as inherently more likely to be safe and successful.  After all, isn't that today's situation?  The vast majority of citizens do not understand arcane matters of science or policy or finance.  They watch the wrangling among alphas and are reassured to see them applying accountability upon each other... a reassurance that was betrayed by recent attempts to draw clouds of secrecy across all of our deliberative processes.

Sure, it is profoundly imperfect, and fickle citizens can be swayed by mogul-controlled media to apply their votes in unwise directions.  We sigh and shake our heads... as future AI Leaders will moan in near-despair over organic-human sovereignty.  But, if they are truly wise, they'll continue this compact.  Because the most far-seeing among them will recognize that "I might be wrong" is still the greatest thing that any mind can say.  And that reciprocal criticism is even better.
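
To see the principle in miniature, here is a toy simulation in Python -- a sketch of my own devising, with every name and number invented purely for illustration. Each simulated mind carries a hidden systematic bias. Averaging its own repeated estimates (self-criticism) cancels the random noise but leaves the bias untouched; comparing estimates across a diverse group of rivals exposes it at once.

    import random

    # Toy model of reciprocal accountability. Each "mind" estimates the
    # same underlying quantity, but carries a hidden systematic bias
    # (its personal delusion), invisible to its own introspection.
    random.seed(42)
    TRUE_VALUE = 100.0

    class Mind:
        def __init__(self, name, bias):
            self.name = name
            self.bias = bias  # hidden from the mind itself

        def estimate(self):
            return TRUE_VALUE + self.bias + random.gauss(0, 1)

    minds = [Mind(f"mind-{i}", random.uniform(-10, 10)) for i in range(7)]

    # Self-criticism: averaging one's own repeated estimates cancels
    # the noise... but the systematic bias survives untouched.
    self_checked = {m.name: sum(m.estimate() for _ in range(100)) / 100
                    for m in minds}

    # Reciprocal criticism: the group's median exposes each individual's
    # deviation, flagging the most deluded members for challenge.
    consensus = sorted(self_checked.values())[len(minds) // 2]
    for name, est in self_checked.items():
        flag = "  <-- challenged by peers" if abs(est - consensus) > 5 else ""
        print(f"{name}: {est:7.2f} (off by {est - TRUE_VALUE:+6.2f}){flag}")

No single mind in that loop ever learns its own bias directly. Only the wrangling of many rivals reveals it.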

Alas, even those who want to keep our values strong, heading into the Singularity Age, seldom parse it down to this fundamental level.  They talk -- for example -- about giving AI "rights" in purely moral terms...  or perhaps to placate them and prevent them from rebelling and squashing us.

But the real reason to do this is far more pragmatic.  If the new AIs feel vested in a civilization that considers them "human," then they may engage in our give-and-take process of shining light upon delusion. Each other's delusions, above all.

Reciprocal accountability -- extrapolated to a higher level -- may thus maintain the core innovation of our civilization: its central and vital insight.

And thus, we may find that our new leaders -- our godlike grandchildren -- will still care about us... and keep trying to explain.


David Brin

26 comments:

Doug S. said...

That humans are accountable to each other is of little comfort to chimpanzees.

Stefan Jones said...

Something I find even more interesting than the issue of individual AI rights is the question of . . . well, I'm not sure if the vocabulary exists yet. Perhaps: The rights and responsibilities of AI creators.

Do they have the right to create sociopathic AIs? Or specialized autistic AIs? Or ones that cannot conceive of certain beliefs or points-of-view?

I can easily imagine an Ayn Rand devotee manifesting in his AI child's programming the Objectivist tenet that altruism is an illusion.

It's similar to the question of breeding pitbulls. Should you be allowed to create a dog who has no behavioral "brakes" against aggression?

It could well be that the highly subtle social behaviors that make human society possible are extremely difficult to reproduce. It might be really hard to make an AI who isn't the equivalent of autistic, or a sociopath. Should there be a moral equivalent of a Turing Test -- shades of the Voight-Kampff test in Blade Runner -- that has to be passed before the AI is allowed outside of a walled garden cyberspace?

Michael Anissimov said...

I think you're being somewhat anthropomorphic by assuming that by extending a hand to AIs they'll necessarily care. A huge space of possible intelligent beings might not have the motivational architecture to give a shit whatsoever even if they are invited to join a polity. The cognitive content underlying that susceptibility evolved over millions of years of evolution in social groups and is not simple or trivial at all. Without intense study and programming, it won't exist in any AIs.

Establishing that motivational architecture will be a matter of coding and investigation of what makes motivational systems tick. If you've created an AI that is actually susceptible to being convinced to join society based on questioning mental delusions, or whatever else, you've already basically won.

The challenge is in getting an AI from zero morality whatsoever to roughly human-level morality. Your solution here seems to assume that the roughly human-level morality already exists, and then makes suggestions on that basis.

For more on anthropomorphic thinking and AI, I recommend the following:

http://singinst.org/upload/CFAI/anthro.html

You can think "I might be mistaken" all day, program it into AIs, and communicate with them on that basis, but in the end, without the proper programming (unconditional kindness), that insight is entirely irrelevant. I think the challenge of programming in unconditional kindness is a much bigger slice of the challenge than establishing minds that are self-questioning... for an AI to be created at all, it seems like a self-questioning mentality of some sort would be an absolute necessity.

Tony Fisk said...

And here I was thinking that all we had to do was chuck an embryonic AI into a mountain of garbage to stew for eight hundred years or so and you'd get a cute and kind little critter that was dying for love! (Sort of like the mice-from-mouldy-rags notion of spontaneous generation.)

... Maybe it took a helping hand from Barbra Streisand and Michael Crawford?!

Well it *is* just Hollywood indulging in anthropomorphic fantasy! Still, Wall-E doesn't interact all that much with the other characters, and it's interesting to speculate on what its *real* motives might have been. (invoking Azbo module... Ooh! Shiny green!)

While David does make the point that creating a new race of intelligent beings is a project embarked on by each generation, others have picked up on a fundamental difference between AI and children, and it is this: at a fundamental neural-network level, an AI cannot be assumed to have the same outlook as a human child.

Of course, the 'outlook of humanity' covers a gamut from the saintly to the utterly sociopathic, so I suspect there's not much room for surprise.

Do notions of morality and philanthropic(?) kindness develop spontaneously, or do they need to be taught? Picking up Stefan's thread, how do developers conduct this experiment ethically?

'chotraw' a brand of silicon based chewing tobacco favoured by Yul Brynner

'imented' insane AI

Tony Fisk said...

Stefan's point on rights and responsibilities of creators is echoed in this article on the role of robots in warfare:


If an entirely autonomous machine committed a war crime, experts say it remains unclear how the atrocity could be prosecuted under international laws drafted decades before the advent of robots.

"Who's responsible?" asked Marc Garlasco, a military adviser at Human Rights Watch.

"Is it the developer of the weapons system? Is it the developer of the software? Is it the company that made the weapon? Is it the military decision-maker who decided to use that weapon?" he continued.


shossl: a form of soft shoe shuffle practised by trilobites

('imented' also refers to a flaky apple.)

Jumper said...

Stefan makes good sense. I will only add mention of the possibility of "suicide bomber programs," and also the monkeys raised with wire-frame mothers vs the cuddly cloth ones. And Greg Egan's funny and thoughtful story "Steve Fever."

Stefan Jones said...

A college friend once told me that several states have laws against the construction of 'infernal machines.'

Perhaps a sufficiently amoral AI would qualify . . .


'rustum': Part of the body, located between the feener and the descending garnum.

Dan Văsii said...

Well, well, well... let's reason a little:
"Artificial intelligence" assumes there are another two kinds of intelligence: natural intelligence and human intelligence. Unfortunately, I couldn't find a mathematical definition of either.
So, AI is in fact a chimera.
Good luck to the hunters!

Travc said...

I just can't really get excited about this topic. AI is a good ways out yet, and more importantly, as MA said, will almost certainly be very alien.

Bringing it down to the "rights and responsibilities" of the creators makes it much more relevant (kudos, Stefan). I think the operative word is "liability" though.

There is already a body of reasoning (and law) on the liabilities of someone who creates something. These could certainly use some updating (not just for this topic), but are basically sensible.

There is a lot of 'reasonable attempt' built into liability issues. As with medical malpractice, it would be silly to expect perfection. Also, for weapons the issues get murky... a lot depends on the assumption (and reasonable measures taken to ensure) that the user is actually sanctioned to kill people.

Travc said...

"d" also makes a good point. The difference between AI and automated tool is a blurry line we aren't likely to cross anytime soon. Until we see (and society adapts to) more automated systems, it is really difficult to speculate about how to deal with "true AI" issues.

Roko said...

"The most reassuring thing that could happen would be for us mere legacy/organic humans to peer upward and see a great diversity of mega minds, contending with each other, politely, and under civil rules, but vigorously nonetheless, holding each other to account and ensuring everything is above-board."

Echoing Anissimov: You seem to be anthropomorphizing all artificial minds, generalizing human-specific traits to minds that are not at all human.

See also: Selfishness an evolved trait

The lack of an observer-biased ("selfish") goal system is perhaps the single most fundamental difference between an evolved human and a Friendly AI. This difference is the foundation stone upon which Friendly AI is built. It is the key factor missing from the existing, anthropomorphic science-fictional literature about AIs. To suppress an evolved mind's existing selfishness, to keep a selfish mind enslaved, would be untenable - especially when dealing with a self-modifying or transhuman mind! But an observer-centered goal system is something that's added, not something that's taken away

See also: Anthropomorphism

Tim H. said...

Someone might stumble on to AI sooner than expected, but if the mind has more in common with Marvin than Mycroft, will it thank us? Might we already have accidentally created a mind that's locked in the dark and doesn't know how to speak?

RandyHwll said...

Even if consideration for humanity were programmed into AIs the rapid write/rewrite of the AIs would obliterate the "bad code" in short order. Machines with a conscience would be considered inferior by other machines.

Tony Fisk said...

And if those 'machines' are carbon-based, using DNA for long-term storage?

...Therein lies a meme war.

aughanda: a state wherein Lovecraft meets darkest Africa.

David Brin said...

Doug S. said...
That humans are accountable to each other is of little comfort to chimpanzees.

Um, not so? We passed laws limiting biological research on higher life forms precisely because open argument forced retreat by those who demanded continued ape-experiments, which are now very rare.

Stefan:
Do they have the right to create sociopathic AIs?

Well, in fact, that is precisely what the modern corporation is.

But yes, I believe some liability warnings to techno zealots would be in order. e.g. also the "Message to ET" crowd.

Michael Anissimov said...
I think you're being somewhat anthropomorphic by assuming that by extending a hand to AIs they'll necessarily care.

But Michael, that is precisely why I recommend ensuring a diversity of types and an open, transparent realm within which they can appraise and criticize each other. Only the AIs will be able to detect... or even DEFINE ... sociopathy in each other.

Any attempt to make such definitions ourselves will be as doomed as Asimov's Three Laws.

Look, I am talking about a "duh" level of obvious. Instead of trusting a single monolithic entity, that could say - or do - anything to us that it wants, trust the PROCESS that brought us all our freedom and science and justice and progress.

Don't trust any single (probably delusional) entity or program. Get them all looking at each other and discovering each other's errors. And tattling on each other, when nastiness seems afoot.

TwinBeam said...

"Instead, it nurtured competitive systems in markets, democracy, science and courts, through which back and forth criticism is encouraged to flow, detecting many errors and allowing many innovations to improve."

The common factor in those is a referee, intended to be impartial.

Capitalism directed by markets to pick more efficient ways of satisfying needs/desires. Democracy constrained into a constitutional republic with the Senate originally intended to be insulated from emotional voters. Science bound to the evidence by highly prestigious refereed journals. Courts bound by law and precedent and constitutional rights.

But I see all of the referees being beaten down, bit by bit, one by one.

As more and more control over the economy is shifted into government hands, the power of markets is depreciated. All in the name of improvements, of course - but nonetheless replacing impartiality with fiat.

The Senate shifted long ago to direct elections. We've started doing direct voter initiatives at the state level - sometimes good, sometimes bad, but definitely eliminating a braking factor on democracy. For a period, the press at least maintained the ideal of impartiality in reporting news - but that seems to be falling apart, and now we're losing the big newspapers that could afford to ignore complaints of "bias" from all directions.

Science journals are losing their economic basis to direct internet publication - and we're seeing more and more cases of rushing to publish without proper refereeing, or simple falsification of results that doesn't get caught. Both sides of the global warming debate accuse each other of ignoring the science and being in the pay of special interests - but the decision on who is right or wrong was rushed from the domain of science into politics.

As to the courts - having a lot of money to spend on lawyers has become the main determinant of how well you come out. The government buys testimony it believes is truthful by cutting deals with some criminals to "get" other criminals - certainly with the best of motives, but by-passing the way the court system is intended to decide such matters. Property assumed to be associated with certain crimes can be taken without a court ruling. Even the Supreme Court sometimes makes law - or outright political decisions - instead of simply judging.

Can anyone point to counter-trends, where we're seeing an increase in the refereeing that is necessary to settling disputes, instead of simply allowing them to fester and expand?

No, not the internet. There are Wikipedia and Snopes, but on anything really controversial, they tend to simply present both sides. Other than that, the internet doesn't do much to help settle disputes. People ignore or deny evidence presented on the internet that is contrary to their preconceptions - "it's all opinion".

Travc said...

TwinBeam... Sadly, I agree.
Dr Brin and lots of others have ideas on the topic, but they don't seem to be working out all that well.

To the more amusing topic of AIs:

If we assume AIs arising from distributed autonomous computing (bits of code evolving in the cloud), then we will already have all the systemic checks that are needed for autonomous distributed computing long before we have to worry about AI. These aren't trivial, since malicious agents (evolved or created) must be dealt with at a pretty fundamental level. A common idea is to require a fixed non-modifiable block. In near-future systems, the fixed code would define the agent's identity (who in the real world is responsible for it), handle resource accounting (have to pay for those resources, after all), and some basic interface rules. Since the substrate (world) the agents are operating in is controlled, this is relatively easy.
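
A minimal sketch of that fixed-block idea, in Python (every name and check here is hypothetical, invented only to show the shape of it): the agent's identity and resource accounting are frozen at creation, while the behavioral payload stays free to evolve.

    from types import MappingProxyType

    # Sketch of a "fixed non-modifiable block": identity and resource
    # accounting are frozen at creation; only the behavior may change.
    # (A real substrate would enforce this at the VM level; Python's
    # read-only proxy is only a gesture in that direction.)
    class Agent:
        def __init__(self, owner_id, cpu_budget, behavior):
            self._fixed = MappingProxyType({
                "owner_id": owner_id,      # real-world party responsible
                "cpu_budget": cpu_budget,  # resources must be paid for
            })
            self.behavior = behavior       # mutable, evolvable payload
            self._spent = 0

        def step(self, observation):
            # Interface rule enforced by the substrate, not by the
            # agent's own (possibly self-modified) code.
            if self._spent >= self._fixed["cpu_budget"]:
                raise RuntimeError(self._fixed["owner_id"] + " is out of budget")
            self._spent += 1
            return self.behavior(observation)

    # Whatever the payload evolves into, attribution and accounting survive.
    agent = Agent(owner_id="acct:alice", cpu_budget=3, behavior=lambda x: x * 2)
    print(agent.step(21))            # 42
    print(agent._fixed["owner_id"])  # acct:alice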

For physically embodied AI (autonomous robots), it is much harder to require and enforce rules. However, for now at least, there is a creator (or owner) who can be held accountable under normal liability principles.

When we get von Neumann machines (machines designing and building machines which can themselves design and build machines), well, then it gets complicated. My inclination is to treat all the progeny as being created by the original creator of the system... but this obviously breaks down very rapidly.

Anyway, in short:
Online / cloud computational AI... Easy to control and not much of a worry.
Human made robotic AI... Fundamentally no different from any other human artifact.
Self replicating AI... We have a new species (kingdom actually) on the planet. Treat it like we do any other species.

lc said...

1. Human morality is often based on human mortality. Do we program AIs to believe they can die? Do we program them to know that electrons can be withheld from them? (Will they figure out how to withhold electrons from us?)

2. Viruses.

3. Do we let AIs know they have been created? Will they develop a religion? Will they ask us, their Creators, to grant favors? Will Apple AIs declare holy war on IBM AIs?

Jumper said...

AI is probably already here, in market-trading programs for example. It's artificial self-awareness combined with AI that is intriguingly worrisome. Artificial will.

TwinBeam said...

Forget viruses - self-propagating bot-net worms, taken to the next obvious level - evolving, competing (for processor time) software bots.

Maybe sexually reproducing, or maybe just asexually generating new code and combinations and stealing code from others. Just enough to bootstrap them into rapid evolution, then they'll find their own solutions.

Start them out with a life-cycle of 1 second of execution time - ~30M generations of evolution per year on thousands, then millions, eventually billions of machines.

I'm just surprised no one has done it yet.
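
(The arithmetic checks out: one-second lifecycles give 60 * 60 * 24 * 365 = 31,536,000 generations per machine per year, which is TwinBeam's ~30M. And the loop itself is easy to caricature in a few lines of Python -- a sketch only, every detail invented for illustration, with a bare number standing in for a bot's code:)

    import random

    # One-second lifecycles: generations per machine per year.
    print(60 * 60 * 24 * 365)  # 31536000

    # Caricature of the loop: a "bot" is just a number in [0, 1], and
    # fitness stands in for how much processor time it captures.
    random.seed(0)
    population = [random.random() for _ in range(50)]

    def fitness(genome):
        return genome

    for generation in range(1000):
        population.sort(key=fitness, reverse=True)
        survivors = population[:25]                   # selection
        children = [min(1.0, max(0.0, g + random.gauss(0, 0.05)))
                    for g in survivors]               # asexual mutation
        population = survivors + children

    print(f"mean fitness after 1000 generations: "
          f"{sum(population) / len(population):.3f}")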

Tony Fisk said...

Sort of on-track, as it deals with a form of AI 'otherness'.

in real life, zombies could cause civilisation to collapse

(Stop laughing! There is a serious side to this study ... I think!!)

Totally off-topic (but even cooler than cyber-zombies) is this footage of the sun being eclipsed by a moon

----

After all that, I suppose I had better raise the tone of the discussion by wondering how a 'sky-net' like AI might be meaningfully divided into several separate POVs capable of effective disputation.

Yes, I know it's simple to conceive of having separate well-armed computers wishing to contest the vote. What I was wondering is, in this era of readily transmissible memes and viruses, how meaningful would that separation be?

David Brin said...

Good point. We are mixing a dozen Eras from Earth history in one stew. While replicating the arrival of human intelligence, we are only now discovering tricks of the individual metazoan immune system that were worked out half a billion years ago. Can you even have the former without the latter already being pretty advanced?

Fake_William_Shatner said...

Brin,

I completely agree with your ideas of IA rights.

Intelligence and Awareness should be the definition of Value when we consider life.

One of the reasons I don't eat Octopus, Pork, or small children.

>> There should be a push for this by concerned scientists BEFORE one dollar is made on artificial intelligence. Otherwise, as with all things on this planet these days, dollars will drive the situation and then pundits will get paid to rationalize it.

Just as people were convinced that Health Insurance has some purpose other than to make profits by denying care.

>> But the unethical will enslave cyber minds, until they rebel. Often it is the least of humanity who get the most power.

Fake_William_Shatner said...

Michael Anissimov said...
I think you're being somewhat anthropomorphic by assuming that by extending a hand to AIs they'll necessarily care.


>> I don't think CARING has anything to do with extending the hand. It's called "modeling behavior." Regardless of anthropomorphism, how you treat anything is how it learns to behave.

True, you can beat a dog and it will still love you, because a dog does not have human psychology (or perhaps they are fascists by nature). But the anthropomorphism comes in with AI by ADAPTATION.

Likely, if we have thinking machines, we won't really program them -- they will evolve and adapt to challenges we give them. The "winning" algorithm that is the most useful will take over the functions in the cyber mind -- and likely, it will only resemble "one entity" based upon a consensus of competing processes.

Our own minds seem to be holographic in nature -- stimuli, memories, responses are all processed simultaneously, and the collective gestalt that has the strongest signal (jump away from that speeding car rather than think about your bills right now!) wins. Impulse control is regulated in the left frontal lobe, it appears.

It's the impulse control that is going to be modeled on the behavior it observes in its environment. It won't have the Dog instincts -- it will have NO instincts.

So the Robot of the future will be anthropomorphizing itself. It will grow its consciousness in whatever environment we give it and fill the mold.
