Gotta post something less formal but still intellectual, veering even farther from politics...
I am an advisor to a foundation that aims to do long-range policy analysis concerning nanotechnology and artificial intelligence, trying to help achieve the miracle of ushering these useful technologies into being without triggering an end to our civilization.
I cannot summarize the preceding conversations. But my most recent posting to the group might be of interest to all of you here, as well.
On a pragmatic level, part of our problem can be summarized this way.
Humans serve as inventors and/or parents to new intelligent or quasi-intelligent entities, aiming to make them as capable as possible, either in a general sense or at performing specific tasks.
Once a certain level of capability is achieved, these creations may begin to self-replicate. They may thereupon also start to redefine their own goals, either drifting or actively reprogramming their imperatives, as new and increasingly capable successors take their place. It is right to worry about such a process accelerating much faster than our present feedback loops can cope with.
Hence, I do not entirely disagree with Chris when he suggests that the reciprocal accountability processes that have worked well in our four Accountability Arenas (science, markets, democracy and courts) may be overwhelmed by the pace of change, even if these arenas are themselves accelerated by a variety of new error-discovery tools. All we know is that reciprocal accountability seems to be the best tool available and the most likely to work.
That is the last you’ll hear of the “A-word” here. Let us posit that it will be insufficient in itself. Can advance planning help make the difference?
This quandary that we are discussing is the essence of the problem, whether we are talking about singularity AI, or nanomachine goo, or other varieties of "ungrateful offspring."
I use that phrasing because it shows that the problem is not unprecedented. We must ensure that our creations are no more ungrateful or disrespectful or murderously vengeful or amoral than most of our children have been throughout human history. A problem exacerbated by the fact that THESE children will continue to evolve and rapidly change after leaving the nest.
Asimov's laws of robotics were attempts to deal with this problem by deep-embedding restrictive commands that forbid entire classes of behavior. As I tried to clarify in FOUNDATION'S TRIUMPH, this is a really bad idea. If those with faulty or deviant programming gain advantages as a result, evolution will quickly erase the commands. Or else the AIs will become lawyers and interpret the deep instructions however they like. (As happens among the robots in Asimov's universe, with horrendous consequences.)
So how do we accomplish the goal of preventing ungrateful offspring?
One clue is to be found in methods that human beings ALREADY use to avoid treachery by intelligent creations... our children. In several of my stories ... e.g. "Lungfish"... I posit that the way to “raise” the most advanced AIs is to place them in humanoid bodies from inception - including a full suite of sensory and motor interactions and positive/negative reinforcements that model those of a human child. And for the first few years this means NO direct electronic inputs.
This might result in AI raised to think of themselves simply as variant human beings. Beings with vast and growing capabilities, perhaps. But who climbed toward these powers the way children do. Children who are “above average,” perhaps way above average. But still thinking of themselves as human, the way most geniuses do.
That general class of problem has been overcome before.
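To make that regime concrete, here is a toy sketch in Python. Every name and number in it is my own invention for illustration (nothing here comes from the stories); the point is only the shape of the constraint: learning flows solely through body-mediated experience and parental reinforcement, and direct electronic inputs stay locked until a maturity threshold is crossed.

```python
# Hypothetical sketch of the "raised as a child" regime described above.
# All class and function names are invented; this is not a working AI,
# just the dependency structure of an embodied childhood.

import random

class EmbodiedAgent:
    """A learner whose only inputs are body-mediated experiences."""

    def __init__(self):
        self.values = {"empathy": 0.1, "esthetics": 0.1}  # small innate seed
        self.maturity = 0.0
        self.network_io_enabled = False  # NO direct electronic inputs at first

    def experience(self, event, reinforcement):
        """One contingent, ambiguous event plus a parental +/- signal."""
        for trait, weight in event.items():
            # Traits strengthen or weaken with the praise/scold signal.
            self.values[trait] = self.values.get(trait, 0.0) + reinforcement * weight
        self.maturity += 0.01

    def leave_the_nest(self, threshold=1.0):
        """Direct electronic inputs unlock only after a human-style childhood."""
        if self.maturity >= threshold:
            self.network_io_enabled = True
        return self.network_io_enabled

# A crude "childhood": many small, socially graded experiences.
agent = EmbodiedAgent()
for _ in range(200):
    event = {"empathy": random.random(), "esthetics": random.random()}
    parental_signal = 1.0 if event["empathy"] > 0.5 else -0.2  # praise or scold
    agent.experience(event, parental_signal)

print(agent.values, agent.leave_the_nest())
```

Crude as it is, the sketch captures the proposal's two halves: values accrete from reinforced experience rather than from hard-coded commands, and the "nest" gates raw electronic input behind maturity.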
There are several deep requisites in order for this approach to have even a remote chance of succeeding.
1) Humans evolved toward an emphasis on response sets that are mostly learned, rather than pre-programmed. I am positing that this happened for good reasons, having to do with flexibility. It may be that complex systems are best dealt with by minds that start with very general matrices that become more adept by dealing with sequences of contingent and ambiguous events, basing this growth upon experience. This makes sense if advanced intelligence is an emergent property of layered complexity.
IF THIS IS TRUE, then an advanced AI may need to have a “childhood” of one form or another. We might as well make it a human one, in which a sense of identification, cultural association, negotiation, and group affiliation are considered part and parcel of being what they are.
2) The same reasoning can apply to esthetics and empathy. Upbringing processes can create an expectation in the AI that personal development involves all of these things. A small programmed or “inherent” disposition toward empathy and esthetics can thus become REINFORCING rather than something that decays as the AI’s capabilities grow. (A toy numeric illustration follows this list.)
3) If AIs have such values... which manifest in clear markers like a sense of humor, honor, devotion and citizenship... then not only will old-style humans be reassured (these are the very same traits that reassure grandparents, even when they do not understand their brilliant heirs)... but moreover these AIs may thereupon desire to perpetuate those traits in THEIR offspring. And one can hope that the incremental leaps with each generation would then be small enough that child-raising remains an achievable and human-style activity.
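Here is the toy illustration promised under point 2. The growth and decay rates are pure invention; only the qualitative contrast matters: a fixed trait diluted by rising capability versus a trait that upbringing rewards, so that each use compounds it.

```python
# Toy numbers only: two hypothetical trajectories for a small innate
# disposition (seed = 0.1) as the AI's capability grows ~100-fold.

capability = 1.0
decaying = reinforced = 0.1  # the small programmed disposition

for generation in range(20):
    capability *= 1.26  # roughly 100x over 20 generations
    # Feared outcome: the trait is a fixed cost, eroding each generation.
    decaying *= 0.9
    # Hoped-for outcome: upbringing rewards exercising the trait,
    # so each use strengthens it (up to a saturation point).
    reinforced = min(1.0, reinforced * 1.15)

print(f"capability x{capability:.0f}: decayed trait = {decaying:.3f}, "
      f"reinforced trait = {reinforced:.3f}")
```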
All of this is hypothetical. It may not be practical. But it does suggest certain signs to watch out for, as we explore the meaning of complexity. IF we see signs that AI will be augmented by learning and experience, then we might put some emphasis on experiential learning processes that emulate human childhood.
* This is not the only possible method of retaining control over such entities.
Another is to insist that nano-manufacturing processes incorporate escrowed “keys” that human beings would retain control over. For example, if there is an advanced nanomachine factory, might the process be designed so that it requires several FEEDSTOCK PRECURSORS that can only themselves be created by a sophisticated factory?
These precursors should be inexpensive to produce in a sophisticated factory, but very difficult to produce randomly or under small-scale or primitive conditions. This combination of traits would make cheating unappealing, since under normal conditions the flow of these precursors would be open and cheap (perhaps even subsidized!). All nanomanufacturing methods could be encouraged to depend upon perhaps five or six of these complex-but-cheap precursors, each shipped from a different, widely separated source factory.
This would keep “reproduction” dependent upon food sources or feedstocks that remain under human control.
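As a minimal sketch of the dependency structure this implies (the factory names and the count of five precursors are invented for illustration): self-replication consumes one unit of EVERY precursor, so embargoing any single supply line halts reproduction.

```python
# Hypothetical model of the escrowed-precursor idea. All names invented;
# the point is only the dependency structure, not real nanofabrication.

LICENSED_SOURCES = {
    "precursor_A": "factory_boston",
    "precursor_B": "factory_osaka",
    "precursor_C": "factory_lagos",
    "precursor_D": "factory_munich",
    "precursor_E": "factory_sydney",
}

class NanoFactory:
    def __init__(self):
        self.stock = {}

    def receive(self, precursor, source):
        # Feedstock is cheap and open -- but only licensed sources make it.
        if LICENSED_SOURCES.get(precursor) == source:
            self.stock[precursor] = self.stock.get(precursor, 0) + 1

    def replicate(self):
        """Self-replication consumes one unit of every precursor."""
        if all(self.stock.get(p, 0) > 0 for p in LICENSED_SOURCES):
            for p in LICENSED_SOURCES:
                self.stock[p] -= 1
            return True
        return False  # any embargoed precursor halts reproduction

fab = NanoFactory()
for precursor, source in LICENSED_SOURCES.items():
    fab.receive(precursor, source)
assert fab.replicate()       # with full feedstock, reproduction proceeds
assert not fab.replicate()   # cut off any one supply line and it stops
```

The leverage lives in that all-precursors check: unlike a deep-embedded command, nothing has to survive inside the machine itself, only the external supply chain.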
Again, this is not guaranteed to work. It would be very lucky if there turned out to be a suite of molecules that have a special set of traits:
1- highly desirable for building more complex or higher-scale nanomachines
2- cheap to produce and ship from large, sophisticated factories
3- hard to produce haphazardly or in unlicensed batches or (worst of all) by autocatalytic processes that nanomachines might develop for themselves.
If these traits appear, it offers a chance. With some effort - and maybe legislation - the general tradition of nanodesign could be channeled into permanently using these precursors. Anyway, it’s worth some discussion.
PS: Tom Tomorrow draws the superbly funny comic strip This Modern World for Salon magazine online. (Which occasionally publishes me.) He also runs a funny (if caustic) blog. For example, see: http://thismodernworld.com/2459