Comments on CONTRARY BRIN: "AIs & Nanos... dangerous children?" (David Brin)

Anonymous (2005-11-18 16:35):
I fear that the problem with requiring precursor feedstock, particularly if it is subsidized, is that there will be a strong temptation to use it for some sort of monitoring, in order to prevent people from making use of nanotech in an officially disfavored fashion. As long as that temptation is not mastered, there will be a strong incentive for someone to develop nanotech that is not dependent on the feedstock.

-Matthew

Anonymous (2005-11-16 12:26):
Frank, my motto is:

The meek shall inherit the Earth. The rest of us are getting the hell off this rock!

So I guess you now understand which side I'm on. :)

HarCohen (2005-11-12 09:56):
Taking us back to the opinion that extraterrestrials exist and just don't give a darn.
We remain a simple infestation of a single, rocky planet.

Anonymous (2005-11-12 09:01):
@bytehead:
"The adapted humans won't care what happens to the original humans as long as they stay out of their way, because they are probably going to outlive them anyway."

Well, I am hoping that the Adapted Sentients will quickly leave the solar system and find resources far, far away from Earth.

Anonymous (2005-11-11 18:08):
I'm reading "The Singularity Is Near" by Kurzweil now. This is not a book that I finish in a day or two (HP 6, for example). I'm at least four weeks into it. I think I'm past the halfway mark. Yeah, I'm pretty sure that I am, excluding the notes and index.

Ray is extremely optimistic about this particular problem, and I don't think I've reached the chapter where he discusses it fully yet, either. It has certainly been a read that makes me think. Even now, I'm contemplating reading it again after I do finish it and, horrors, doing something I haven't done since college: pick up a highlighter, highlight things, and make notes in the book.

My current thinking is that AI and nano are separate, certainly for now and the foreseeable future. The "danger" with nano will come from other biological forms (i.e. humans), not the technology itself. Nano will need protection technology just like computers, with firewalls, virus detection, and other malware detection/removal. Also, as HarCohen says, nanomachines will have limited functionality.

As for treating pure machine AIs like children, I don't think it is going to work.
Of course, as Ray points out in his book, what used to be thought of as AI is now really just thought of as computation. My thinking on that is that Frank is wrong. There are going to be two classes of individuals (and using the term "individual" here might even be considered wrong; "sentients" instead?). We will have enhanced Adapted humans and original humans. The adapted humans won't care what happens to the original humans as long as they stay out of their way, because they are probably going to outlive them anyway.

Anonymous (2005-11-11 08:33):
David: The question of how best to simulate a childhood is a very interesting one. It presents an opportunity to write SF that might literally save the world if the right R&D people pay attention, and it deserves to be written up by many SF authors so that the people who really do it can see many potential risks. Possibly an anthology. Do you think you and other hard-SF people could be interested?

One basic problem is that interaction with real people will have to take place in real time. Without real people, baby AIs can learn to empathize with one another, and possibly with digital pets (simulated animals or simply toys), but they won't be able to take in reactive moral nuance. Some good children's TV shows (Kimba the White Lion?) might help somewhat.

So many decisions to make. What human drives to include? Pain or no pain?
How to minimize selfishness and neediness, but maximize empathy with selfish, needy humans?

Anonymous (2005-11-11 06:52):
Better not let any AIs be raised by members of the Ku Klux Klan, or we'll end up with racist androids...

HarCohen (2005-11-10 21:31):
@David

I don't know about No. 2. It's the capital requirements that determine whether alternate precursors will be produced extensively. Creating Ecstasy requires few components and is a cheap and easy process (I hear); many 'organizations' are doing it. Creating contemporary CPUs requires a multi-billion-dollar investment, but the precursors are also relatively inexpensive; relatively few companies do it.

You assume precursor factories will always be expensive propositions, but you don't know that. Suppose these precursor factories are biological and easily transported. You might end up with nanotech farms emulating the pot farms of the '60s. Farmers will compete to produce the best product in preferred locations.

My point is that if organizations find it profitable and relatively inexpensive to build facilities so they can maintain control over the outcome, they will go ahead and do it. Especially the inventors and tinkerers. Current trends in outsourcing notwithstanding.

I'm curious about your post-editing problems. I'm not having problems editing a post using Firefox and Windows XP/Pro. Inserting hyperlink HTML into a comment is a definite no-no.
I wish I knew why.

Tony Fisk (2005-11-10 21:01):
Stefan said:
"How wonderfully cool and creepy"

That would be the aquagel slipping between your connective tissues ;-)

But yes, I saw that and thought, 'That's one way to copy a brain.'

...and more synchronicity:

Whilst David was pondering how to implant a sense of humanity into an AI, I was pondering the onset of empathy in a normal child (I gather it starts becoming apparent from about four onward... or so they say)...

...and got to thinking about the (un)wisdom of putting a non-empathic 'three-year-old grunt' AI in charge of a cyber tank (Bolos, Ogres)...

...and then Worldmaker said of Psychonauts:
"... part of a plot to implant kids' brains into Tanks and stuff."

'I am he as you are he as... AUUUGGGHHH!!'

Stefan (2005-11-10 19:34):
How wonderfully cool and creepy:

MEAT JET PRINTERS that lay down layers of cultured living cells to create replacement body parts!

http://www.worldchanging.com/archives/003722.html#more

David Brin (2005-11-10 17:50):
I had my Blogger switched to anonymous for some reason; sorry.

The "singing the Walrus" bit was funny.
Reminds me of The Matrix, when Smith says "I'm me too!"

Here are some added musings from the nanotech board:

"I would go farther than having all complex nanomachines on Earth be made of nanoblocks that are provided from a few licensed factories. I would have the nanoblocks themselves be made from several critical and complex components that are each created in separate factories.

The trick is that these must all be offered to the world CHEAP. By heavily subsidizing the availability of complex and difficult-to-make precursors, we achieve two things:

1- stimulate the development of products, since the raw materials will be less expensive

2- ensure that the world becomes addicted to designs using keyed precursors. Once the most advanced products are far along such a path, the momentum of further development may ensure that non-precursor-based nanotechnologies will lag in support. (An example is the market share of Windows, despite its indisputable inferiority as an operating system and program platform... or, even worse, the abandonment of circa-1990 WordPerfect (the best word processor ever) in favor of horrid WORD... which WordPerfect now emulates in every monstrous detail.)

Finally, I agree that AIs who do not have to live thru childhood could come online faster, possibly giving them advantages over "human-replicating" AI life cycles.

This entire scenario depends on a posited theory that may be wrong, but that has some support so far: that some experiential development cycles that involve repeated variations of interaction, experimentation, and contingent feedback will prove essential in creating advanced AI, as they were in creating advanced bio-intelligence. If this is true (explaining extended human childhood), then we might have a leverage point by which to try the rest...
having this experiential process simulate human childhood in other ways, including IDENTIFICATION of the AI as a sub-type of humanity.

In any event, it is even conceivable that the childhood could be ersatz and simulated, removing this time disadvantage. (Though I would prefer a process that starts off with the AI cut off from all electronic inputs, reliably insisting that the formative years be spent firmly connected to human tactile culture.)

HarCohen (2005-11-10 17:40):
What is the appropriate curriculum, and what are the appropriate milestones, for a baby AI? This just in from the Daily Telegraph:

"Give babies time, not targets"
http://opinion.telegraph.co.uk/opinion/main.jhtml?xml=/opinion/2005/11/10/do1002.xml

HarCohen (2005-11-10 17:28):
In the end, it's all politics. I think you jumped a few stages in the political context.

There are so many conceivable scenarios regarding AI that seem so distant that I'm giving up predicting. Perhaps you know something I don't.

What we might want to consider is the response of our communities and our courts when an AI can pass a Turing Test, and that AI turns to its owners to say, "Please let me go free." (Hopefully a Turing Test less biased than the historical Southern registration test.)

Are these AIs going to be property? That precedes any concern about self-replication.

Lincoln's words, to the effect of "As I would not be a slave, so I would not wish to be a slave owner," come to mind.
Better to take careful consideration now, before some AI comes up the Capitol Building steps to address Congress with a "Let my people go." (Got close to "Bicentennial Man" on that one, but not quite.)

Is it going to be harder to keep an AI sane than a human, or will it be simpler? And will researchers be allowed to dismantle an AI, or treat it?

Will AIs come to recognize a single guiding philosophy or ethics? Or will there be several? Or none?

As far as I'm concerned, don't release an AI from a lab until you can grow one that can reason about safety, compassion, and charity, and make you believe it BELIEVES what it says. Then you can decide.

Stefan (2005-11-10 17:00):
My thoughts on "Diaspora" are similar to Tony's. Much informed and imaginative stuff leading to a big let-down.

Egan is a rather fearless and ballsy writer when it comes to tackling religious and cultural pretensions. If you are looking for reassurance in the face of a cold and uncaring universe, he's not the guy to turn to.
The single most satisfying thing he's written is the remarkably small-scale "Teranesia."

Tony Fisk (2005-11-10 16:44):
Frank said:
"Then there will be three groups of individuals: the Original humans, the Adapted humans and AIs."

Have you been reading Greg Egan's "Diaspora"?

He posits a future wherein humanity has divided into the 'originals' (albeit with heavy genetic modifications), those uploaded into robots, and those downloaded into nanotech virtual-environment 'polises'.

His protagonist is one of the latter, grown from basic AI seed algorithms by the polis governing entity. The first part of the story tracks his growth and developing identity, culminating in his affirmation of self-awareness, at which point he is granted full citizenship and his unique identifying key.

A good read, although it ultimately suffers from the same problem as all 'Stapledonian epics': one is left with a feeling of letdown when the point of the universe is revealed to be a bit feeble. ('Is that *it*? Bummer!')

Anyway, the point I wish to make is that 'the orphan' is clearly an AI, yet he and his fellows consider themselves as human as the 'primitives'.

If you are going to create an AI, for what purpose (other than pure monkey tinkering) is it going to be created? Would such an AI, raised as a child, want to be anything different?

Of course, all this is speculative: can AIs be created at all? If so, can they be transferred from one form to another? Can a fully functional AI be replicated?
Can a sense of self (a 'soul', for want of a better word) be treated in the same manner?

I will close with a bizarre vision:
A chorus line of dittoes singing 'I Am the Walrus':

'I am he as you are he as you are me
and we are all together...'

Stefan (2005-11-10 13:41):
"A Mind Forever Voyaging" was brilliantly done, and quite disturbing.

* * *

I think it is important, when thinking about nanotech and AI, to avoid the trap of transcendent thinking.

At the very least, going through life thinking we're in the End Times and some kind of techno-transcendence is around the corner is a sure way of ending up vastly disappointed.

Perhaps more dangerous:

If the people who think about and work on these technologies are doped up on transcendence memes, believing themselves the midwives of a new Stage of Evolution, or part of some Inevitable Destiny, they're going to ignore possible dangers in their enthusiasm to get results, and miss out on the full potential of these technologies by pursuing attractive will-o'-the-wisps.

* * *

Finally, lest we think we are the first to deal with issues like this, take a look at "The World, the Flesh, and the Devil: An Enquiry into the Future of the Three Enemies of the Rational Soul" by J.D. Bernal:
http://cscs.umich.edu/~crshalizi/Bernal/

And ponder Freeman Dyson's question:

"The question that will decide our destiny is not whether we shall expand into space. It is: shall we be one species or a million?"

and...

'When we are a million species spreading through the galaxy, the question "Can man play God and still stay sane?"
will lose some of its terrors. We shall be playing God, but only as local deities and not as lords of the universe. There is safety in numbers. Some of us will become insane, and rule over empires as crazy as Doctor Moreau's island. Some of us will shit on the morning star. There will be conflicts and tragedies. But in the long run, the sane will adapt and survive better than the insane. Nature's pruning of the unfit will limit the spread of insanity among species in the galaxy, as it does among individuals on earth. Sanity is, in its essence, nothing more than the ability to live in harmony with nature's laws.'

Anonymous (2005-11-10 13:31):
Some missed the point. Raising AIs to call themselves "human," sharing most of our general values, humor, and sense of basic honor, does not have the goal of preventing their eventual dominance over the old-style biological kind. That goal is hopeless.

The BEST we can hope for is what I portray in "Stones of Significance"... for biological bodies to be part of larger macro-entities who appreciate the old cranial cortex the way we appreciate our own earlobes and hands: a good and useful lesser organ that assists higher brain components in a smooth and unified way.

If that happens, then we might all get to go along for the ride, and the godlike-humans path might give everybody a way to continue growing.

But assuming that isn't possible, there is still a chance that AIs who call themselves "human" and share our values will remain loyal to human civilization, with an assumption that they will be at the top of that civilization. This will be somewhat sad for old bio-types, but in fact no more sad than it is for today's less-mentally-endowed, who know that they are not the brightest bulbs around.
But who know that the bright ones will guard and employ them, with interesting work and good things to do.

I am not surprised that some of you find it hard to grasp HOW the 'human AI' might come about via an emphasis on learning during AI developmental stages that replicate childhood, in bodies that replicate the sensory and feedback loops of helpless and dependent human children. I do not insist that this will work. I merely suggest that it is the only plausible path to try.

Michael, law courts work. They are certainly vastly more effective, at present, than democracy. Do not obsess on the faults of an accountability arena so much that you fail to notice how much good product is already being delivered.

JGF (2005-11-10 12:37):
Anonymous wrote: "The 'raise an AI as a human in a simulated environment' idea was used in an excellent computer game from the '80s, Infocom's A Mind Forever Voyaging..." (http://www.the-underdogs.org/game.php?id=14)

Wow. I'd never have guessed that would have been done in 1985.
Those were the days indeed.

Vinge, Brin, or one of that ilk (I've had a strangely hard time tracking this down) did a story in Science or Nature a few years back about AIs that become addicted to life in the pre-singular world; they immerse themselves in simulations and refuse to return to a world in which there are no mysteries.

With sufficient research, we'll doubtless find that this scenario was extensively discussed in 1932...

Anonymous (2005-11-10 11:22):
The "raise an AI as a human in a simulated environment" idea was used in an excellent computer game from the '80s, Infocom's "A Mind Forever Voyaging." You, the player, were PRISM, an AI component of a forecasting simulation system. The backstory is that "you" grew up as a perfectly normal boy in North Dakota in the 21st century, and in young adulthood were shown by your mentor (a stand-in for one of your designers) that "you" were actually an AI existing in a simulated environment.

The actual game element was more free-form than most adventure games of the time. Your task was to help evaluate a "restore America to prosperity" plan with some controversial elements. You were projected into an extrapolated environment 10, 20, 30, 40, and 50 years in the future, based on the premise that the plan was adopted.
You spent time walking around your simulated hometown, recording significant findings for the project team; when you'd recorded enough information to make another extrapolation, you could then jump another 10 years into the future.

So, in part of the game you were a person walking around in a simulated town, while in other parts you were an AI existing in the computer system of a research complex (with a limited ability to affect its systems, which comes in handy when your research makes some powerful people angry...).

JGF (2005-11-10 09:52):
Scenario for a science fiction short story...

AIs are incubated by running their startup routine in a primordial simulation. The time chosen is pre-singularity Earth.

For the first few decades of life, the AI believes itself to be a fully biological primordial human. At some point, however, the AI matures and is ready to emerge. In order to make this emergence less traumatic, hints emerge about the nature of the entity's environment.

The entity encounters stories of persons embedded unwittingly in simulations. (Picard in one of the very best of the NG episodes, and of course The Matrix.) They become less traumatized by the idea.

Then they begin reading narratives about training AIs by embedding them in mechanical bodies...

Then they begin reading...
[sound of a gentle deep tone signifies initiation of emergence sequence]

Anonymous (2005-11-10 09:06):
Positing a machine intelligence: this type of being would be much better suited to space than any organic, and indeed would probably favor space as an environment much easier to harvest for energy and materials than good ol' planet Earth.

That would probably make a small rocky planet pretty irrelevant within a short period of time. If we're talking truly independent AI, wouldn't we quickly reach some sort of singularity point? I'd find it hard to resist, myself, if I had that kind of ability to get out there.

I read Baxter's Manifold books recently; he explores that sort of concept a bit.

E.

Anonymous (2005-11-10 08:50):
Dave: Do you have any evidence that accountability in courts works? In the US we have a lot more accountability in our courts than most developed nations, but still more crime, more imprisonment, in absolute terms more false imprisonment, more expensive frivolous lawsuits, more malpractice insurance, etc. than other countries, all at greater expense. Of course, the prosecutors aren't accountable enough, but basically I see no evidence that our system fundamentally works better than the less accountable systems in Japan or Singapore, despite the latter's abuses.

If your AIs were modeled extremely closely on human brains, "Lungfish" might work. However, methods of upbringing that work on humans will be entirely useless on most intelligent minds, which lack the evolutionary history that our upbringing is supposed to interact with.
Ordinary upbringing is pretty inadequate even for dealing with highly variant humans such as autists. Most of Temple Grandin's behavior is learned, not evolved, but due to neurological differences she and you learn very different things in the same environment. Compared to even an AI intended to resemble a human, it's entirely plausible that you and Temple might as well be clones, assuming that there are commercial apps for AI before we really understand our brains well. Also, economic pressure will tend to favor fast upbringings, possibly simulated and "efficient," meaning minimal use of human attention.

The main danger with nanotech is not that it will go out of control but that it will be misused by humans. If developed very rapidly, it will disrupt the current organization of society profoundly, and may totally unhinge current military relationships.

Anonymous (2005-11-10 08:24):
Self-replication is wildly different from redefinition of goals. Without AI, nanomachines don't have goals.

HarCohen (2005-11-10 08:20):
The models for the practical (inter)dependence of parties at various levels of intelligence are already in front of us. What is key is that the interdependence breaks down when resources are scant.

One key model is that of workshops and work 'reserved' for the developmentally disabled. It is not a uniformly successful model; it is tolerable and appealing to some clients and not to others.
It seems to boil down to whether one can take enjoyment in the comradeship and the capacity to earn one's way, or one becomes disheartened at being supervised, directed, or patronized.

Another is the theory of the 'Good Mother' dinosaur. Why was nurturing introduced as a survival program, in an evolutionary sense? The ability to act in herds or flocks seems primary here. Why, then, should it be abandoned by our 'progeny'? Only if we leave them developmentally disabled.

We need to have AIs that we can nurture. Since neural nets and genetic algorithms may be key to their abilities, we will essentially breed the AIs we want. They will not roll off the conveyor belt until we have apparently successful prototypes from which to model. Oops. Back to Asimov again.

We might want to teach AIs to domesticate and nurture species, just as we've done.

Is this demeaning to humans? Does the average dog still yearn to be the wolf? And what is he prepared to do about it?

The Silverberg / Kubrick / Spielberg vision in "AI" is the transition of the AI from domestic to failed domesticator. It is not a picture I enjoyed, yet it is a necessary one to consider.

Nanotechnology is a whole 'nother matter, since any immediately foreseeable design is likely to have only a single purpose.

Should resources become so constrained that AIs have to view us as competitors rather than partners, I hope they conceive a way of dealing with us more humanely than we've dealt with each other. I'd rather not see us Matrix-ized.
Have you been in an egg factory farm lately?

Anonymous (2005-11-10 06:59):
I think the danger posed by AIs with deep-embedded commands was summed up pretty well in Jack Williamson's "The Humanoids" about 50 years ago. What that book lacks in subtlety, it makes up in lack of subtlety.

Talking about nanos and AI together, though, got me thinking about an old RAND paper I read, pertaining to "Fire Ant Warfare":

http://www.rand.org/publications/MR/MR880/MR880.ch8.pdf

I'm not entirely convinced that the future of AI is anything we could identify with, any more than we can identify with a beehive. Generally we do not think of beehives in terms of "ungratefulness." A really smart beehive might try to use us or our behaviors to its advantage, but...

I guess that's the difference between "intelligence" and "sapience."