Thursday, November 10, 2005

AIs & Nanos... dangerous children?

Gotta post something less formal but still intellectual, veering even farther from politics...

I am an advisor to a foundation doing long-range policy analysis on nanotechnology and artificial intelligence, trying to help achieve the miracle of ushering these useful technologies into being without triggering an end to our civilization.

I cannot summarize the preceding conversations. But my most recent posting to the group might be of interest to all of you here, as well.


-

On a pragmatic level, part of our problem can be summarized this way.

Humans serve as inventors and/or parents to new intelligent or quasi-intelligent entities, aiming to make them as capable as possible, either in a general sense or at performing specific tasks.

Once a certain level of capability is achieved, these creations may begin to self-replicate. They may thereupon also start to redefine their own goals, either drifting or actively reprogramming their imperatives, as new and increasingly capable successors take their place. It is right to worry about such a process accelerating much faster than our present feedback loops can cope with.

Hence, I do not entirely disagree with Chris when he suggests that the reciprocal accountability processes that have worked well in our four Accountability Arenas (science, markets, democracy and courts) may be overwhelmed by the pace of change, even if these arenas are themselves accelerated by a variety of new error-discovery tools. All we know is that reciprocal accountability seems to be the best tool available and the most likely to work.

That is the last you’ll hear of the “A-word” here. Let us posit that it will be insufficient in itself. Can advance planning help make the difference?

This quandary that we are discussing is the essence of the matter, whether we are talking about singularity AI, nanomachine goo, or other varieties of "ungrateful offspring."

I use that phrasing because it shows that the problem is not unprecedented. We must ensure that our creations are no more ungrateful or disrespectful or murderously vengeful or amoral than most of our children have been throughout human history. A problem exacerbated by the fact that THESE children will continue to evolve and rapidly change after leaving the nest.

Asimov's laws of robotics were attempts to deal with this problem by deep-embedding restrictive commands that forbid entire classes of behavior. As I tried to clarify in FOUNDATION'S TRIUMPH, this is a really bad idea. If those with faulty or deviant programming gain beneficial advantages as a result, evolution will quickly erase the commands. Or else, the AIs will become lawyers and interpret the deep instructions however they like. (As happens among the robots in Asimov's Universe, with horrendous consequences.)

So how do we accomplish the goal of preventing "ungrateful offspring"?

One clue is to be found in methods that human beings ALREADY use to avoid treachery by intelligent creations... our children. In several of my stories ... e.g. "Lungfish"... I posit that the way to “raise” the most advanced AIs is to place them in humanoid bodies from inception - including a full suite of sensory and motor interactions and positive/negative reinforcements that model those of a human child. And for the first few years this means NO direct electronic inputs.

This might result in AIs raised to think of themselves simply as variant human beings. Beings with vast and growing capabilities, perhaps. But who climbed toward these powers the way children do. Children who are “above average,” perhaps way above average. But still thinking of themselves as human, the way most geniuses do.

That general class of problem has been overcome before.

There are several deep requisites, in order for this approach to have even a remote chance of succeeding.

1) Humans evolved toward an emphasis on response sets that are mostly learned, rather than pre-programmed. I am positing that this happened for good reasons, having to do with flexibility. It may be that complex systems are best dealt with by minds that start with very general matrices that become more adept by dealing with sequences of contingent and ambiguous events, basing this growth upon experience. This makes sense if advanced intelligence is an emergent property of layered complexity.

IF THIS IS TRUE, then an advanced AI may need to have a “childhood” of one form or another. We might as well make it a human one, in which a sense of identification, cultural association, negotiation, and group affiliation are considered part and parcel of being what they are. (A toy sketch of this kind of experience-driven learning follows this list.)

2) The same reasoning can apply to esthetics and empathy. Upbringing processes can create an expectation in the AI that personal development involves all of these things. A small programmed or “inherent” disposition toward empathy and esthetics can thus become REINFORCING rather than something that decays, as the AI’s capabilities grow.

3) If AIs have such values... which manifest in clear markers like a sense of humor, honor, devotion and citizenship... then not only will old style humans be reassured (these are the very same traits that reassure grandparents, even when they do not understand their brilliant heirs)... but moreover these AIs may thereupon have a desire for those traits to perpetuate in THEIR offspring. And one can hope that the incremental leaps with each generation would then be small enough so that child-raising remains an achievable and human-style activity.

All of this is hypothetical. It may not be practical. But it does suggest certain signs to watch out for, as we explore the meaning of complexity. IF we see signs that AI will be augmented by learning and experience, then we might put some emphasis on experiential learning processes that emulate human childhood.
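Purely as an illustration of point (1), and not anything proposed above: in today's terms, the simplest version of "a very general matrix that grows adept through contingent and ambiguous feedback" is reinforcement learning. Here is a minimal Python sketch; the five-cell world, the rewards, and the constants are all invented for the example.

    import random
    from collections import defaultdict

    # Hypothetical toy world: walk from position 0 to position 4.
    ACTIONS = (-1, +1)
    GOAL, START = 4, 0

    def step(state, action):
        nxt = max(0, min(GOAL, state + action))
        return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

    q = defaultdict(float)  # the "very general matrix": blank at the start
    alpha, gamma, epsilon = 0.5, 0.9, 0.2

    for episode in range(200):  # a long "childhood" of trial and error
        state, done = START, False
        while not done:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)  # explore
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit experience
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Competence accumulates from contingent feedback, not from
            # pre-programmed rules about which move is "right."
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt

    # After "growing up," the learned policy points toward the goal.
    print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)})

The point of the toy is only this: the final competence appears nowhere in the initial program. It accretes from experience, which is exactly the property being claimed for “childhood.”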

* This is not the only possible method of retaining control over such entities.

Another is to insist that nano-manufacturing processes incorporate escrowed “keys” that human beings would retain control over. For example, if there is an advanced nanomachine factory, might the process be designed so that it requires several FEEDSTOCK PRECURSORS that can only themselves be created by a sophisticated factory?

These precursors should be inexpensive to produce in a sophisticated factory, but very difficult to produce haphazardly or under small-scale, primitive conditions. This combination of traits would leave little temptation to cheat, since under normal conditions the flow of these precursors would be open and cheap (perhaps even subsidized!). All nanomanufacturing methods could be encouraged to depend upon perhaps five or six of these complex-but-cheap precursors, each shipped from a different, widely separated source factory.

This would keep “reproduction” dependent upon food sources or feedstocks that remain under human control.

Again, this is not guaranteed to work. It would be very lucky if there turned out to be a suite of molecules that have a special set of traits:

1- highly desirable for building more complex or higher-scale nanomachines

2- cheap to produce and ship from large, sophisticated factories

3- hard to produce haphazardly or in unlicensed batches or (worst of all) by autocatalytic processes that nanomachines might develop for themselves.

If these traits appear, it offers a chance. With some effort - and maybe legislation - the general tradition of nanodesign could be channeled into permanently using these precursors. Anyway, it’s worth some discussion.
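To make the proposal concrete, here is a hypothetical sketch in Python of the "key check" a compliant fabricator might run before accepting a job: refuse any feedstock that does not trace back to enough distinct, licensed source factories. Every name and number below is invented for illustration.

    from dataclasses import dataclass

    # Hypothetical registry of licensed precursor plants; in practice this
    # would be an authenticated, regularly updated list.
    LICENSED_SOURCES = {"alpha", "beta", "gamma", "delta", "epsilon", "zeta"}
    REQUIRED_DISTINCT_SOURCES = 5  # "perhaps five or six" precursors

    @dataclass(frozen=True)
    class Precursor:
        batch_id: str
        source_factory: str  # which licensed plant produced this batch

    def may_run(feed):
        """Refuse to fabricate unless every precursor batch is licensed and
        the batches come from enough widely separated source factories."""
        sources = {p.source_factory for p in feed}
        if not sources <= LICENSED_SOURCES:
            return False  # unlicensed (home-brewed?) feedstock present
        return len(sources) >= REQUIRED_DISTINCT_SOURCES

    feed = [Precursor(f"b{i}", s) for i, s in
            enumerate(["alpha", "beta", "gamma", "delta", "epsilon"])]
    print(may_run(feed))      # True: five distinct licensed sources
    print(may_run(feed[:3]))  # False: too few distinct sources

Note that the check enforces nothing by itself; the real lock in the scheme above is economic, i.e. making licensed feedstock too cheap to be worth counterfeiting.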

DB

--

PS Tom Tomorrow draws This Modern World, the superbly funny comic strip in Salon magazine online. (Which occasionally publishes me.) He also runs a funny (if caustic) blog. For example, see: http://thismodernworld.com/2459

==========

27 comments:

Anonymous said...

Even if we let AI's grow up like children, sooner or later they will be adults and they will, I suppose, take a place in society like human adults. Human adults will be competing with artificial adults. The humans won't like that. They will either get rid of the AI's or adapt by incorporating artificial components into themselves. Then there will be three groups of individuals: the Original humans, the Adapted humans and AI's. The Originals will be living in reservations or, given that social mobility is still maintained, will eventually become Adapted as well. Most Adapted humans will stand on an equal footing with the AI's. Transparency will keep them both in check, although I fear for the remaining Originals' well-being.

Anyway, that's the obvious scenario, I think.

Anonymous said...

You said "Humans serve as inventors and/or parents to new intelligent or quasi-intelligent entities, aiming to make them as capable as possible, either in a general sense or at performing specific tasks,

Once a certain level of capability is achieved, these creations may begin to self-replicate. They may thereupon also start to redefine their own goals, either drifting or actively reprogramming their imperatives, as new and increasingly capable successors take their place. It is right to worry about such a process accelerating much faster than our present feedback loops can cope with."

Oh, do I know that one! Been there, done that, self-replicating entity now in 2nd generation (Mikaela, Bryn, and Connor). (smile) Pat

Anonymous said...

I think the danger posed by AIs with deep-embedded commands was summed up pretty well in Jack Williamson's "The Humanoids" about 50 years ago. What that book lacks in subtlety it makes up in lack of subtlety.

Talking about nanos and AI together, though, got me thinking about an old RAND paper I read, pertaining to "Fire Ant Warfare":

http://www.rand.org/publications/MR/MR880/MR880.ch8.pdf

I'm not entirely convinced that the future of AI is anything that we could identify with, any more than we can identify with a beehive. Generally we do not think of beehives in terms of "ungratefulness". A really smart beehive might try to use us or our behaviors to its advantage, but...

I guess that's the difference between "intelligence" and "sapience".

HarCohen said...

The models for the practical (inter)dependence of parties at various levels of intelligence are already in front of us. What is key is that the interdependence breaks down when resources are scant.

One key model is that of workshops and work 'reserved' for the developmentally disabled. It is not a uniformly successful model. It is tolerable and appealing to some clients and not to others. It seems to boil down to whether one can take enjoyment out of the comradeship and capacity to earn one's way, or one becomes disheartened at being supervised, directed, or patronized.

Another is the theory of the 'Good Mother' dinosaur. Why was nurturing introduced as a survival program in an evolutionary sense? The ability to act in herds or flocks seems primary here. Why then should it be abandoned by our 'progeny'? Only if we leave them developmentally disabled.

We need to have AI's that we can nurture. Since neural nets and genetic algorithms may be key to their abilities, we will essentially breed the AI's we want. They will not roll off the conveyor belt until we have apparently successful prototypes from which to model. Oops. Back to Asimov again.

We might want to teach AI's to domesticate and nurture species, just as we've done.

Is this demeaning to humans? Does the average dog still yearn to be the wolf? But what is he prepared to do about it?

The Aldiss / Kubrick / Spielberg vision in "AI" is the transition of AI from domestic to failed domesticator. It is not a picture I enjoyed, yet it is a necessary one to consider.

Nanotechnology is a whole 'nother matter since any immediately foreseeable design is likely to have only a single purpose.

Should resources become so constrained that AI's have to view us as competitors rather than partners, I hope they conceive a way of dealing with us more humanely than we've dealt with each other. I'd rather not see us Matrix-ized. Have you been in an egg factory farm lately?

Anonymous said...

Self-replication is wildly different from redefinition of goals. Without AI, nanomachines don't have goals.

Anonymous said...

Dave: Do you have any evidence that accountability in courts works? In the US we have a lot more accountability in our courts than most developed nations, but still more crime, more imprisonment, in absolute terms more false imprisonment, more expensive frivolous lawsuits, more malpractice insurance, etc. than other countries, all at greater expense. Of course, the prosecutors aren't accountable enough, but basically I see no evidence that our system fundamentally works better than the less accountable systems in Japan or Singapore, despite the latter's abuses.

If your AIs were modeled extremely closely on human brains, "Lungfish" might work. However, methods of upbringing that work on humans will be entirely useless on most intelligent minds that lack the evolutionary history our upbringing is supposed to interact with. Ordinary upbringing is pretty inadequate even for dealing with highly variant humans such as autists. Most of Temple Grandin's behavior is learned, not evolved, but due to neurological differences she and you learn very different things in the same environment. Compared to even an AI intended to resemble a human, it's entirely plausible that you and Temple might as well be clones, assuming that there are commercial apps for AI before we really understand our brains well. Also, economic pressure will tend to favor fast upbringings, possibly simulated, with "efficient" meaning minimal use of human attention.

The main danger with nanotech is not that it will go out of control but that it will be mis-used by humans. If developed very rapidly it will disrupt the current organization of society profoundly and may totally unhinge current military relationships.

Anonymous said...

Positing a machine intelligence, this type of being would be much more suited to space than any organic, and indeed would probably favor space as an environment where it is much easier to harvest energy and materials than on good ol' planet Earth.
That would probably make a small rocky planet pretty irrelevant within a short period of time. If we're talking truly independent AI, wouldn't we quickly reach some sort of singularity point? I'd find it hard to resist myself, if I had that kind of ability to get out there.

I read Baxter's Manifold books recently, he explores that sort of concept a bit.

E.

JGF said...

Scenario for science fiction short story ....

AIs are incubated by running their startup routine in a primordial simulation. The time chosen is pre-singularity earth.

For the first few decades of life the AI believes itself to be a fully biological primordial human. At some point, however, the AI matures and is ready to emerge. In order to make this emergence less traumatic, hints emerge about the nature of the entity's environment.

The entity encounters stories of persons embedded unwittingly in simulations. (Picard in one of the very best of the NG episodes, and of course The Matrix.) They become less traumatized by the idea.

Then they begin reading narratives about training AIs by embedding them in mechanical bodies. ...

Then they begin reading .... [sound of a gentle deep tone signifies initiation of emergence sequence]

Anonymous said...

The "raise an AI as a human in a simulated environment" idea was used in an excellent computer game from the '80s, Infocom's A Mind Forever Voyaging. You, the player, were PRISM, an AI component of a forecasting simulation system. The backstory is that "you" grew up as a perfectly normal boy in North Dakota in the 21st century, and in young adulthood were shown by your mentor (a stand-in for one of your designers) that "you" were actually an AI existing in a simulated environment.

The actual game element was more free-form than most adventure games of the time. Your task was to help evaluate a "restore America to prosperity" plan with some controversial elements. You were projected into an extrapolated environment 10, 20, 30, 40, and 50 years in the future based on the premise that the plan was adopted. You spent time walking around your simulated hometown recording significant findings for the project team; when you'd recorded enough information to make another extrapolation, you could then jump another 10 years in the future.

So, in part of the game you were a person walking around in a simulated town, while in other parts you were an AI existing in the computer system of a research complex (with a limited ability to affect its systems, which comes in handy when your research makes some powerful people angry...).

JGF said...

Anonymous wrote: "The "raise an AI as a human in a simulated environment" idea was used in an excellent computer game from the '80s, Infocom's A Mind Forever Voyaging..."

Wow. I'd never have guessed that would have been done in 1985. Those were the days indeed.

Vinge, Brin, or someone of that ilk (I've had a strangely hard time tracking this down) did a story in Science or Nature a few years back about AIs that become addicted to life in the pre-singular world; they immerse themselves in simulations and refuse to return to a world in which there are no mysteries.

With sufficient research, we'll doubtless find that this scenario was extensively discussed in 1932 ...

Anonymous said...

Some missed the point. Raising AIs to call themselves “human” - sharing most of our general values/humor and sense of basic honor - does not have the goal of preventing their eventual dominance over the old style biological kind. That goal is hopeless.

The BEST we can hope for is what I portray in “Stones of Significance”... for biological bodies to be part of larger macro entities who appreciate the old cranial cortex the way we appreciate our own earlobes and hands. A good and useful lesser organ that assists higher brain components in a smooth and unified way.

If that happens, then we might all get to go along for the ride and the godlike-humans path might give everybody a way to continue growing.

But assuming that isn’t possible, then there is still a chance that AIs who call themselves “human” and share our values will remain loyal to human civilization, with an assumption that they will be at the top of that civilization. This will be somewhat sad for old bio-types, but in fact no more sad than it is for today’s less-mentally-endowed, who know that they are not the brightest bulbs around, but who trust that the bright ones will guard and employ them with interesting work and good things to do.

I am not surprised that some of you find it hard to grasp HOW the ‘human AI’ might come about via emphasis on learning during AI developmental stages that replicate childhood, in bodies that replicate the sensory and feedback loops of helpless and dependent human children. I do not insist that this will work. I merely suggest that it is the only plausible path to try.

Michael, law courts work. They are certainly vastly more effective, at present, than democracy. Do not obsess on the faults of an accountability arena so much that you fail to notice how much good product is already being delivered.

Anonymous said...

"A Mind Forever Voyaging" was brilliantly done and quite disturbing.

* * *

I think it is important, when thinking about nanotech and AI, to avoid the trap of transcendent thinking.

At the very least, going through life thinking we're in the End Times and some kind of techno-transcendence is around the corner is a sure way of ending up vastly disappointed.

Perhaps more dangerous:

If people who think about and work on these technologies are doped up on transcendence memes, believing themselves the midwives of a new Stage of Evolution, or part of some Inevitable Destiny, they're going to ignore possible dangers in their enthusiasm to get results, and miss out on the full potential of these technologies by pursuing attractive will-o'-the-wisps.

* * *

Finally: Lest we think we are the first to deal with issues like this, take a look at:

The World, The Flesh, and the Devil: An Enquiry into the Future of the Three Enemies of the Rational Soul by J.D. Bernal

And ponder Freeman Dyson's question:

"The question that will decide our destiny is not whether we shall expand into space. It is: shall we be one species or a million?"

and . . .

'When we are a million species spreading through the galaxy, the question "Can man play God and still stay sane?" will lose some of its terrors. We shall be playing God, but only as local deities and not as lords of the universe. There is safety in numbers. Some of us will become insane, and rule over empires as crazy as Doctor Moreau's island. Some of us will shit on the morning star. There will be conflicts and tragedies. But in the long run, the sane will adapt and survive better than the insane. Nature's pruning of the unfit will limit the spread of insanity among species in the galaxy, as it does among individuals on earth. Sanity is, in its essence, nothing more than the ability to live in harmony with nature's laws.'

Stefan

Tony Fisk said...

Frank said:
Then there will be three groups of individuals: the Original humans, the Adapted humans and AI's.

Have you been reading Greg Egan's 'Diaspora'?

He posits a future wherein humanity has divided into the 'originals' (albeit with heavy genetic modifications), those uploaded into robots, and those downloaded into nanotech virtual environment 'polises'.

His protagonist is one of the latter, grown from basic AI seed algorithms by the Polis governing entity. The first part of the story tracks his growth and developing identity, culminating in his affirmation of self awareness, at which point he is granted full citizenship, and his unique identifying key.

A good read, although it ultimately suffers from the same problem as all 'Stapledonian epics': one is left with the feeling of letdown when the point of the universe is revealed to be a bit feeble. ('Is that *it*? Bummer!')

Anyway, the point I wish to make is that 'the orphan' is clearly an AI, yet he and his fellows consider themselves as human as the 'primitives'.

If you are going to create an AI, for what purpose (other than pure monkey tinkering) is it going to be created? Would such an AI, raised as a child, want to be anything different?

Of course, all this is speculative: can AIs be created at all? If so, can they be transferred from one form to another? Can a fully functional AI be replicated?

Can a sense of self ('soul', for want of a better word) be treated in the same manner?

I will close with a bizarre vision:
A chorus line of dittoes singing 'I am The Walrus':

'I am he as you are he as you are me
and we are all together ...'

Anonymous said...

My thoughts on Diaspora are similar to Tony's. Much informed and imaginative stuff leading to a big let-down.

Egan is a rather fearless and ballsy writer when it comes to tackling religious and cultural pretensions. If you are looking for reassurance in the face of a cold and uncaring universe he's not the guy to turn to. The single most satisfying thing he's written is the remarkably small-scale Teranesia.

Stefan

HarCohen said...

In the end, it's all politics. I think you jumped a few stages in the political context.

There are so many conceivable scenarios regarding AI that seem so distant I'm giving up predicting. Perhaps you know something I don't.

What we might want to consider is the response of our communities and our courts when an AI can pass a Turing Test and that AI turns to his owners to say, "Please let me go free". (Hopefully a Turing Test less biased than the historical southern registration test).

Are these AI's going to be property? That precedes any concern about self-replication.

Lincoln's words to the effect, "As I would not be a slave, so I would not wish to be a slave owner" come to mind. Better take careful consideration now before some AI comes up the Capitol Building steps to address Congress with a "Let my people go". (Got close to "Bicentennial Man" on that one, but not quite.)

Is it going to be harder to keep an AI sane than it is a human, or will it be simpler? And will researchers be allowed to dismantle an AI or treat it?

Will AI's come to recognize a single guiding philosophy or ethics? Or will there be several? Or none.

As far as I'm concerned, don't release an AI out of a lab until you can grow one that can reason about safety, compassion and charity, and that can make you believe it believes what it says. Then you can decide.

HarCohen said...

What are the appropriate curriculum and milestones for a baby AI? This just in from the Daily Telegraph:

"Give babies time, not targets"
http://opinion.telegraph.co.uk/opinion/main.jhtml?xml=/opinion/2005/11/10/do1002.xml

David Brin said...

I had my blogger switched to anonymous for some reason, sorry.

The "singing the walrus" was funny. Reminds me of the Matrix when Smith says "I'm me too!"

Here are some added musings from the nanotech board:

"I would go farther than having all complex nanomachines on Earth be made of nanoblocks that are provided from a few licensed factories. I would have the nanoblocks themselves be made from several critical and complex components that are each created in separate factories.

The trick is that these must all be offered to the world CHEAP. By heavily subsidizing the availability of complex and difficult to make precursors, we achieve two things:

1- stimulate the development of products, since the raw materials will be less expensive

2- ensure that the world becomes addicted to designs using keyed precursors. Once the most advanced products are far along such a path, the momentum of further development may ensure that non-precursor-based nanotechnologies will lag in support. (An example is the market share of Windows, despite its indisputable inferiority as an operating system and program platform.... or, even worse, the abandonment of circa-1990 WordPerfect (the best word processor ever) in favor of horrid WORD... which WordPerfect now emulates in every monstrous detail.)

Finally, I agree that AIs who do not have to live thru childhood could come online faster, possibly giving them advantages over "human-replicating" AI life cycles.

This entire scenario depends on a posited theory that may be wrong, but that has some support so far -- that experiential development cycles involving repeated variations of interaction, experimentation and contingent feedback will prove essential in creating advanced AI, as they were in creating advanced bio-intelligence. If this is true (explaining extended human childhood) then we might have a leverage point by which to try the rest ... having this experiential process simulate human childhood in other ways, including IDENTIFICATION of the AI as a sub-type of humanity.

In any event, it is even conceivable that the childhood could be ersatz and simulated, removing this time disadvantage. (Though I would prefer a process that starts off with the AI cut off from all electronic inputs, reliably insisting that formative years be spent firmly connected to human tactile culture.)"

Anonymous said...

How wonderfully cool and creepy:

MEAT JET PRINTERS that lay down layers of cultured living cells to create replacement body parts!

http://www.worldchanging.com/archives/003722.html#more

Stefan

Tony Fisk said...

Stefan said:
How wonderfully cool and creepy

That would be the aquagel slipping between your connective tissues ;-)

But yes, I saw that and thought 'that's one way to copy a brain.'

...and more synchronicity:

Whilst David was pondering how to implant a sense of humanity into an AI, I was pondering the onset of empathy in a normal child: (I gather it starts becoming apparent from about 4 onward...or so they say).

..and got to thinking about the (un)wisdom of putting a non-empathic 'three year old grunt' AI in charge of a cyber tank (Bolos, Ogres)

..and then Worldmaker said of Psychonauts:
... part of a plot to implant kids' brains into Tanks and stuff.

'I am he as you are he as ...AUUUGGGHHH!!'

HarCohen said...

@David

I don't know about No. 2. It's the capital requirements that determine whether alternate precursors will be produced extensively. Creating Ecstasy requires few components and is a cheap and easy process (I hear). Many 'organizations' are doing it. Creating contemporary CPUs requires a multi-billion dollar investment but the precursors are also relatively inexpensive. Relatively few companies do it.

You assume precursor factories will always be expensive propositions, but you don't know that. Suppose these precursor factories are biological and easily transported. You might end up with nanotech farms emulating the pot farms of the '60s. Farmers will compete to produce the best product in preferred locations.

My point is that if organizations find it profitable and relatively inexpensive to build facilities so they can maintain control over the outcome, they will go ahead and do it. Especially the inventors and tinkerers. Current trends in outsourcing notwithstanding.

I'm curious about your post editing problems. I'm not having problems with editing a post using Firefox and Windows XP/Pro. Inserting the hyperlink HTML into a comment is a definite no-no. I wish I knew why.

Anonymous said...

Better not let any AI's be raised by members of the Ku Klux Klan or we'll end up with racist androids...

Anonymous said...

David: The question of how best to simulate a childhood is a very interesting one. It presents an opportunity to write sf that might literally save the world if the right R&D people pay attention, and it deserves to be written up by many sf authors so that the people who really do it can see many potential risks. Possibly an anthology. Do you think you and other hard sf people could be interested?
One basic problem is that interaction with real people will have to take place in real time. Without real people, baby AIs can learn to empathize with one another, and possibly with digital pets (simulated animals or simply toys), but won't be able to take in reactive moral nuance. Some good children's TV shows (Kimba the White Lion?) might help somewhat.
So many decisions to make. What human drives to include? Pain or no pain? How to minimize selfishness and need but maximize empathy with selfish needy humans.

Unknown said...

I'm reading "The Singularity is Near" by Kurzweil now. This is not a book that I finish in a day or two (HP 6 for example). I'm at least 4 weeks into it. I think I'm past the halfway mark. Yeah, I'm pretty sure that I am, excluding the notes and index.

Ray is extremely optimistic about this particular problem. And I don't think I've reached the chapter where he discusses it fully yet, either. It has certainly been a read that makes me think. Even now, I'm contemplating reading it again after I do finish it and (horrors!) doing something I haven't done since college: picking up a highlighter, highlighting things and making notes in the book.

My current thinking is that AI and nano are separate, certainly for now and the foreseeable future. The "danger" with nano will come from other biological forms (i.e. humans), not the technology itself. Nano will need protection technology just like computers, with firewalls, virus detection and other malware detection/removal. Also, as HarCohen says, nanomachines will have limited functionality.

Treating pure machine AIs like children is not, I think, going to work. Of course, as Ray points out in his book, what used to be thought of as AI is now really just thought of as computation. My thinking with that is that Frank is wrong. There are going to be two classes of individuals (and using the term "individual" here might even be considered wrong; "sentients" instead?). We will have enhanced Adapted humans and original humans. The Adapted humans won't care what happens to the original humans as long as they stay out of their way, because they are probably going to outlive them anyway.

Anonymous said...

@bytehead:
"The adapted humans won't care what happens to the original humans as long as they stay out of their way, because they are probably going to outlive them anyway."

Well, I am hoping that the Adapted Sentients will quickly leave the solar system and find resources far, far away from Earth.

HarCohen said...

Taking us back to the opinion that extraterrestrials exist and just don't give a darn. We remain a simple infestation of a single, rocky planet.

Unknown said...

Frank. My motto is:

The meek shall inherit the Earth. The rest of us are getting the hell off this rock!

So I guess you now understand which side I'm on. :)

Anonymous said...

I fear that the problem with requiring precursor feedstock, particularly if it is subsidized, is that there will be a strong temptation to use it for some sort of monitoring, in order to prevent people from making use of nanotech in an officially disfavored fashion. As long as that temptation is not mastered, there will be a strong incentive for someone to develop nanotech that is not dependent on the feedstock.

-Matthew