Monday, February 14, 2011

Progress marches on... and "if it ain't true..."


“We live naked on the internet…in a brave new world where our data lives forever,” writes John Hendel in The Atlantic. Americans have come to accept this, but Europeans are trying to maintain a legal “right to be forgotten” -- even a “right to delete” one's data trail. The bizarre assumption that anything "erased" will stay erased is nonsense. The mighty will never let themselves be blinded. A so-called 'right to delete' would only guarantee the elite castes a permanent advantage.

Can you turn off the internet for a day without suffering Information Withdrawal Syndrome? Its effects are similar to those seen in drug addicts.

A map of the Twitterverse: an evolving ecosystem  by Brian Solis.

Facebook topped Google as the most visited website of the year, accounting for 8.9% of all U.S. visits in 2010. Google accounted for 7.2% of visits, and YouTube 3.3%.

Wired's Clive Thompson bucks current wisdom, suggesting that Tweets and Texts actually serve as catalysts for subsequent in-depth analysis: "We talk a lot, then we dive deep."  Oh, I am sure that's true for some.  Still....

How do we define wealth? More and more, in the modern world, reputation is wealth: The Whuffie Bank (a term coined by author Cory Doctorow) aims to build an economy based, not on productivity, but on reputation -- which could be redeemed for goods (real and virtual) and services.

Quora: a site dedicated to questions and answers created and answered by users. My page is:

A graphic comparison of Facebook (500 million users) vs. Twitter (106 million users)

Room for mistakes? There are applications where a margin of error is acceptable. The payoff for allowing imprecise calculations is that computer chips could operate thousands of times faster.
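The idea can be illustrated with a toy sketch (my own, not from the article): discard low-order bits before doing arithmetic, and see how little accuracy you actually lose.

```python
# Toy illustration of "sloppy computing" (not the actual chip design):
# keep only the top few bits of each number before summing, trading a
# small relative error for much cheaper arithmetic in hardware.
def truncate(x: int, keep_bits: int = 8) -> int:
    """Zero out all but the top `keep_bits` bits of a non-negative int."""
    if x == 0:
        return 0
    shift = max(x.bit_length() - keep_bits, 0)
    return (x >> shift) << shift

values = [12345, 67890, 13579, 24680]
exact = sum(values)
approx = sum(truncate(v) for v in values)
print(f"exact={exact}, approx={approx}, "
      f"error={(exact - approx) / exact:.3%}")  # well under 1% off
```

Here an 8-bit mantissa keeps the running sum within about half a percent of the exact answer -- the kind of tradeoff the chip designers are betting many applications can tolerate.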

Time for a return to cursive? Less legible fonts promote better recall of information.

Is there an afterlife in cyberspace? What happens to one's digital identity after death? Increasingly, the record of our life is online: photos, videos, musical creations, posts, tweets, opinions, manuscripts, avatars -- an unsorted, chaotic mass of digital expression. This will have profound legal implications in the near future. Businesses are already springing up to offer digital afterlife management.


Welcome to the Information Age: each day we are bombarded with information equivalent to 174 newspapers. There was a time when most of the news you needed landed on your doorstep each morning.

The Coming Data Deluge:  Petabytes, exabytes and more. Our vocabulary expands to keep up with data-intensive science, such as mapping the brain's neurons, and sensors scattered across the Earth, tracking climate, toxins and ocean currents. The Large Hadron Collider will generate about 15 petabytes of data per year.
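For scale, 15 petabytes a year is a startling sustained rate; a quick back-of-envelope conversion:

```python
# Back-of-envelope: the LHC's ~15 petabytes per year as a sustained rate.
PB = 10 ** 15                          # petabyte, in bytes
seconds_per_year = 365 * 24 * 3600
rate = 15 * PB / seconds_per_year      # bytes per second
print(f"{rate / 10**9:.2f} GB/s")      # roughly half a gigabyte every second
```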

How much data can the world store and compute? See an attempt to quantify the world's capacity to compute...


My latest YouTube video: A look at sci-fi movies. Who was the bad guy in the movie E.T. the Extra-Terrestrial? How about District 9? Sometimes the villains are obvious, as in Independence Day or Lord of the Rings - in other flicks, the real bad guy may not be who you think it is.

Inspired by news of uprisings overseas?  How about 12 Revolutionary Uprisings from Sci-fi movies, TV shows and books:  From Star Wars to District 9, I, Robot to V for Vendetta.

Five scientific reasons the Dark Side Will Always Win, by Paul & Trevor Pickett on

5. The color black is scientifically proven to intimidate people
4. Thinking evil thoughts & clenching your fist makes you stronger
3. Arrogance inspires confidence
2. Doom & gloom makes you smarter
1. Speaking with a deep voice gives you power

To which I would add Five Reasons the Good side will win:
5. Evil guys get better clothes, but messed-up faces. Good is always pretty.
Pretty always equals good.
4. Red glowing eyes really sting after a while.
3. Your underlings (whom you've Force-strangled) will sabotage the targeting sites in your special TIE fighter.
2. Wimpy American audiences can't stand unhappy endings
1. Really easy to convert new converts from dark side. They're so dumb, they'll believe unlikely stories about being someone's father.


Evolutionary anthropologist Robin Dunbar claimed the size of the average human's social network is 148, a figure derived by correlating neocortex size with social-group size across primate species. Dunbar also said that in order to maintain a cohesive group of that size, 42% of the group's time would have to be devoted to social grooming. Does nit-picking count? Clearly I'm going to have to unfriend a lot of 'friends' on Facebook...

Nit-picking? Studies of lice DNA show humans first wore clothes 170,000 years ago.

Time declared that 2045 is The Year Man Becomes Immortal with an article on the Singularity and Ray Kurzweil's vision for "humanity's immortal future."  Hm... well... maybe that's the year a baby will be born who will be the first immortal. I am a really far-out thinker, but these singularity guys are just too UTTERLY similar to the wild-eyed transcendentalists who have appeared in every era.


What if the world's population were reshuffled, so that citizens of the most populous country (China) spread out across the country with the largest land area (Russia)? Members of the world's 2nd largest population (India) would move to Canada. Third in population, Americans would stay in the United States! Fourth in population, Indonesians would move to China. Why should Australians get so much empty land? Now they would move to Spain, and poor, overcrowded Pakistanis would take over Australia. It's like a game of Risk gone wild.

63 million video game consoles in U.S. homes consume as much energy in a year as the city of San Diego. Can't we hook those thumbs up to a generator? Seriously, part of the problem is games left idling, since gamers lose their progress when they shut down. My son is always saying, "I can't stop now, Dad…"

A wonderful resource: everything you ever wanted to know about primate skeletons (developed by the University of Texas at Austin). You can also look at specific bones: i.e. compare the scapula of a gibbon to that of an orangutan.

Oriental hornets harvest solar radiation for energy. Chitin structures in the abdomen trap light, bouncing it between layers, while a pigment, xanthopterin, transforms the light into electrical energy.

====    =====    ===

I plan to write about the New Arab Rising soon, including a suggestion that is both profoundly radical and immensely, world-changingly practical. (Some of you who have read EARTH might foresee what epochal event I may be talking about.)

Meanwhile though, time to spread some cheer about the onward march of technological progress... plus a few worrisome problems that need solving.


The Danger of Cutting Federal Science Funding.

Ceres has something to say to Pluto.

Can Vacuum have friction?  Maybe in spinning metal systems.  A phenomenon related to the Casimir Effect. Calling to mind the “spindizzies” that propelled starships in James Blish’s CITIES IN FLIGHT.

Sometimes SNOPES is great simply as a primary source for stuff that “if it ain’t true, it oughta be!”

Lots of interesting stuff!  Stay tuned...


Dave Rickey said...

On the "sloppy computing": Something about that really sets off a certain kind of programmer. I once was designing a weak a-life system for a game, and I was constantly fighting with the programmer for the module, who couldn't understand why I wanted to use integer math, since it meant errors would creep in.

Of course, it also ran about 20 times as fast as the high-precision floating point math he wanted to use, which meant I could have 20 times as many layers to my system for the same number of CPU cycles.

It's been my experience that there are two basic types of programmers: Mathematical thinkers, and procedural thinkers. Mathematical thinkers can come up with really elegant algorithms to do things efficiently, but there are certain types of problems they just can't cope with, and will throw up their hands at. In this case, the simulation I was trying to create would have been an N^2 problem done in the traditional way, but was a linear problem space done my way.

I might add, if I'm understanding this article correctly, this type of chip would work very well for that application. The "noise" injected by the errors would probably make the system work better, actually.
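Dave's integer-math approach is, I'd guess, a form of fixed-point arithmetic. A minimal sketch (hypothetical, not his actual game code) of how fractional quantities can ride along inside plain integers:

```python
# Hypothetical fixed-point sketch: store fractions as integers scaled by
# 2**8, accepting tiny rounding errors in exchange for cheap integer ops.
SHIFT = 8                      # 8 fractional bits
SCALE = 1 << SHIFT

def to_fixed(x: float) -> int:
    return int(round(x * SCALE))

def from_fixed(f: int) -> float:
    return f / SCALE

def fx_mul(a: int, b: int) -> int:
    # Multiply two fixed-point values, rescaling back down.
    return (a * b) >> SHIFT

growth = to_fixed(1.05)        # ~5% growth per tick
pop = to_fixed(100.0)
for _ in range(10):
    pop = fx_mul(pop, growth)  # a tiny error creeps in each step

print(from_fixed(pop))         # near, but not exactly, 100 * 1.05**10
```

The errors Dave's programmer feared do creep in -- here under one percent after ten steps -- but for an A-life ecology they are just noise.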


Paul said...

A) In the real world the Dark Side didn't win. Even when led by hyper-confident, black uniform wearing, clenched fist, genocidally dark thinkers.

(But Winnie had the deep voice.)

2) Redistributing countries. Cute and illustrative. But as an Aussie, I know that surface area ain't the only criterion. Australia, Canada, Iceland, Mongolia, etc all have a low carrying capacity. If they didn't, they wouldn't have such low population densities. In fact, Australia is considered to be well over its true carrying capacity - over-populated.

(It would be interesting to see how closely theoretical carrying capacity correlates with true population. See where the over- and under-populated parts of the world really are.)

iii) "humans first wore clothes 170,000 years ago" That's a long time to be worrying about being in fashion. Other animals, if you're not in fashion, even after a good groom, you can't do anything about it, so relax and be a beta. But humans, we can potentially improve our genetic lot, so cue the stress.

#) Re: Gaza and the refugee camps, from the last thread.
I'm not touching the politics, but speaking of swapping populations... I wonder if it would be cost-effective for Israel to buy some territory (with sovereignty) somewhere in, say, Africa. Build some road and rail infrastructure, some housing/schools/etc, and offer large bribes to individual Palestinians to move there. Initially from the refugee camps, later Gaza. With the aim of developing it as a quickly independent Palestinian state.

David Brin said...

Dave I once invented a most-significant bit-first serial adder. Fantastically efficient because it started with an estimate and then built additional bits of answer. You could command it to stop at any point and that's how accurate it would be. No one was interested. I thought it cool!
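The scheme can be mimicked in software (a toy of my own, nothing like the actual circuit): reveal the sum's bits from the most significant down, so halting early yields a coarse but usable estimate.

```python
# Software toy of an MSB-first adder: each step reveals one more bit of
# the sum, most significant first, so you can stop whenever the running
# estimate is accurate enough. (Real hardware would refine carries; this
# toy just computes the sum up front and doles out its bits.)
def msb_first_sum(a: int, b: int):
    total = a + b
    width = total.bit_length() or 1
    estimate = 0
    for i in range(width - 1, -1, -1):
        estimate |= total & (1 << i)   # expose the next bit of the answer
        yield estimate

for step, approx in enumerate(msb_first_sum(1000, 949), start=1):
    print(f"after {step} bits: {approx}")   # converges upward to 1949
```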

Paul, in 1949 lots of displaced Arabs wanted to start new lives elsewhere and were kept locked in the territories, as pawns. That crime helped build a people who are now more a part of the character of the land than they were before. No one is going to expel the Palestinians. It is their home and I oppose the assholes who want them out.

It is time to make them so prosperous that Israel-Palestine is the powerhouse of the region.

And I really want to quit the topic for now.

Paul said...

"No one is going to expel the Palestinians."

I'll drop it. I don't want to get into the politics of it anyway.

But just for clarity (not for argument, I promise) I meant "invite" not "expel", purely voluntary, just as a circuit-breaker (*); and I meant the refugee camps in Jordan/etc, not the West Bank.

(* Psychologically, having another option might make the whole thing feel less desperate for the Palestinians, even if they don't take the option. Staying-by-choice instead of back-against-the-wall.)

Moving on... no no really...

Re: The sloppy computer.

Wish I had a link, but I remember hearing about micro-bot research in the 80's. (You know, insect bots.) One researcher switched to analogue circuits, because he found that those bots didn't freeze up when they hit conditions that changed too fast for them to process. Digital is accurate, but analogue doesn't crash.

So I've wondered since if computers in critical applications need an analogue "sanity" chip that mirrors the digital one but at a lower accuracy. The accurate digital output gets preferentially used until the results from the two chips diverge wildly (such as during a crash). Then the analogue chip jumps in, resets the digital chip to its near-enough values, and sets it going again.
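Paul's watchdog idea can be caricatured in a few lines (entirely hypothetical; both "chips" here are just stand-in functions):

```python
# Hypothetical sketch of the analogue "sanity chip": a crude low-accuracy
# mirror runs beside the precise model; if they diverge wildly (e.g. the
# precise one crashes to NaN), the crude value resets the precise state.
def precise_step(x: float) -> float:
    return x * 1.01                 # stand-in for the accurate digital chip

def crude_step(x: float) -> float:
    return round(x * 1.01, 1)       # stand-in for the sloppy analogue mirror

digital, analogue = 100.0, 100.0
for tick in range(50):
    digital = precise_step(digital)
    analogue = crude_step(analogue)
    if tick == 25:
        digital = float("nan")      # simulate a digital crash mid-run
    if not abs(digital - analogue) < 1.0:   # NaN-safe divergence test
        digital = analogue          # reset to the near-enough value
print(digital, analogue)            # back in agreement despite the crash
```

Note the comparison is written so that a NaN (which fails every comparison) also triggers the reset.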

Dwight Williams said...


Forgive me for hovering over this for a moment, but I like several particular, optimistic ideas of the future that this particular hyphenation implies for both of those peoples. Here's to their having one of those better futures!

I'll get back to you on some of your other points tonight (East North American Standard Time)...

Tony Fisk said...

wrt font type. It has always intrigued me that 'standard wisdom' dictates that thou shalt not use serif on screen. It might have made sense in the days of chunky 640x480 pixel screens, but I always thought those little bits hanging, like guano, off the eaves of various letters made them more immediately recognisable than the vanilla strokes of Arial.

Or Comic Sans...

Winnie's deeper voice might have made a difference. It might also have been a case of 'close but no cigar!'

Sloppy computing may have its uses. On the contrary side, it was rounding errors fed back into a simple but high-precision weather simulation that demonstrated how 'insignificant' changes could snowball, and led to the description of the 'Lorenz attractor'.

Patricia Mathews said...

But it's on YouTube...

Blast it. Everything today is on YouTube, including some things that would go very, very well in print. Because I do NOT process information well through my ears, especially when modulated through tinny little came-with-the-computer speakers.

Anybody doing closed captioning for YouTube has my business.

rewinn said...

Putting together the shortsighted attack on science funding (one would think nationalist conservatives would WANT America to be #1 in science if only to help to build their war toys) AND the concept of vacuum friction (I can't pretend to understand the math, but the general idea makes mind-stretching sense):

As science moves through political space, it encounters resistance from hollow minds a.k.a. "science friction".

(Although to be realistic, it seems to me that the hollowminds are merely the ground troops for an Aristocracy that is not opposed to science per se, but is perfectly content to buy technology from other nations, especially when an educated electorate threatens their rule. As a patriot, I don't know what to do about this, except to embarrass the hollowminds with humor in the hope of peeling off a few.)

Ilithi Dragon said...

Tony Fisk said...

wrt font type. It has always intrigued me that 'standard wisdom' dictates that thou shalt not use serif on screen. It might have sense in the days of chunky 640x480 pixel screens, but I always thought those little bits hanging, like guano, off the eaves of various letters made them more immediately recognisable than the vanilla strokes of Arial.

!)$#*@^%#)!%$*!$!! I #@$&ing HATE sans-serif fonts! NOBODY gets my name right because of it! Serifs exist for a godsdamned reason! @#$&!


On the 'sloppy math' chips: My immediate thought was AI processing for a variety of applications, but foremost for gaming (which I suspect will be a major contributor to the development of sentient AI programs). My second thought was an AI processor card, much like a graphics card or the short-lived physics processor cards (rendered largely redundant with multi-core GPUs and the increase of dual-card (and tri-card and quad-card) systems). A 'sloppy math' card could potentially allow for much more human-like 'fuzzy logic' in AI applications designed to use it.

Corey said...

I still think one of the more important questions to ask ourselves is how we're going to fit potential advances in computing technology into a static framework of actual computing power.

I'm still not sure that we've managed to move ourselves past the Moore's "Law" paradigm of assuming endless growth in computing power, even as brick walls stare us straight in the face, and not brick walls that we'll maybe kind of sort of hit in 20 years, but ones we've already been hitting for the past few years.

Maybe some background would help.

Back in 2003, there was actually a paper warning that quantum tunneling was going to severely limit our capability for miniaturization (nearly the sole source of the computing explosion of the past four decades), yet, at that time, processor fabrication for consumer chips was still stuck in 0.13 micron territory (Intel Sandy Bridge is 32nm, by comparison), so not much attention was paid.

I even heard people talking about how we'd "engineer around" the quantum tunneling problem... because, you know, you can TOTALLY engineer around quantum mechanics (not).

Right around then, Intel was proudly boasting that they'd soon have "10 GHz CPUs," and predictions were coming out that by 2020, computers as powerful as the human brain would be available for $100 (and for 10 cents by 2030).

Around that time, I remember watching an episode of the TechTV [remember them?] show, Screensavers, where a computer scientist claimed that the most powerful computers we had were about as smart as a cockroach, "and not even a particularly bright cockroach", but hey, it seemed like things would just get faster and faster forever.

Then, of course, Intel's Netburst slammed full speed ahead into a heat wall, as the correlation of clock speed and temperature got the best of them, and their 3.8 GHz chips were running so hot that they actually had to build in dynamic underclocking to keep the CPU at safe temperatures under load. Since then 4 GHz has remained a consistent barrier, reachable only through overclocking, usually with beefy aftermarket CPU coolers.

Still, die shrinks continued, some inefficiencies were worked out in computer architectures (replacing north bridges with integrated memory controllers being one of the biggest), and multi-core CPUs were taking off.

GPUs at this point were literally doubling in power every single generation. CPUs moved to 90nm, then to 45nm, then to 32nm; GPUs went to 65nm, then to 55nm, then to 40nm.

Then 2009/2010 came along, and the problems started. TSMC couldn't get 32nm off the ground for GPUs, so they had to wait until 28nm. AMD's Northern Islands (Radeon HD 6000 series) had to be released as an interim chip to keep the market going, with beefier 40nm chips trying to compensate for the huge delays in new processes. Despite their best efforts, the 6000 series was only marginally faster than the 5000 series, instead of the usual doubling. Nvidia, meanwhile, still hasn't even gotten a good grasp on the 40nm process, leaving little hope of a 28nm Kepler coming out anytime soon.

Intel released Sandy Bridge, finally, but later than desired, while AMD still struggles with their 32nm Bulldozer chips, and their new mobile Fusion chips stay grounded in 40nm territory.

At this point, I'm doubtful that chips will move beyond even 22nm for another three or four years, and even then, the halving of CPU die size every 2 years that Intel based all their roadmaps on is nothing but a pipe dream now. We'll be lucky if 18nm is forthcoming before mid-decade, let alone anything like 11nm, which might not even be possible (quantum tunneling gets lethal to processor operation at around 10-15nm). So hoping for even ONE MORE halving in size is probably false hope at best, to say nothing of any further such shrinks afterward.

Corey said...

So the question I have is where do we go from here?

How do we continue progress in a world where the demand for computing power will continue to grow, but the density of computer power remains effectively static?

It's not like society won't cope. We've witnessed countless other technologies reach maturity and slow down to incremental progress, but how is that going to affect how we view the future, with computing power being so central to our society?

Sure, it means the idea of "The Singularity" can pretty much be shelved indefinitely (good riddance?), and human-like AI, something I'd rather look forward to, probably isn't in our near future, but then where does that leave us?

Acacia H. said...

One thing that always puzzled me about the end of "Foundation and Earth" was how R. Daneel Olivaw claimed his positronic brain could not be improved further, and that each new "upgrade" ended up having a lower life span. My thought? If you can't build smaller and more complex, why not build larger? Why not have Olivaw's positronic brain exist in a starship body, and utilize a humanoid robot form as a remote unit? You could even move backward into less advanced positronic territory, compensating by building larger.

What does this have to do with the quest for nano-size CPU circuitry? If you can't grow smaller, grow outward. Sure, you end up with larger computers, but if they're more powerful and are utilized for such things as servers (which originally were huge, mind you), then they can be used for the main processing power and remote terminals can connect into the network.

Robert A. Howard, Tangents Reviews

Corey said...

Robert, this can be done to an extent, but it hits brick walls even faster than miniaturization will.

Again, keep in mind that we discuss growth in computing power in exponential terms (basically, how many powers of two things can be taken to, as we like to express everything in doubles and halves).

For your average computer, keep in mind that TDP (thermal design power) and size should have a linear correlation. Double the die size and put twice as many transistors in, and you get double the heat if each works as hard (at least, I assume that's the relationship).

This creates a problem, however, because while you could probably double or even quadruple CPU size in a computer and have it be semi-practical, that's the ABSOLUTE limit right there, because after that, no ATX case (even a full tower) would ever be able to dissipate the heat. So what about making bigger cases? Well, again, how many times can you really double things doing that?

Maybe you could get things eight times larger in a home computer, but at that point, the thermal dissipation would be so great that your PC would be a literal space heater (literally, we're talking something like 1000W at that point, 66% of a 1500W space heater on full blast). At that point, you couldn't operate it in the summer, because it would heat your house to unbearable temperatures and fry itself in the process.

So now, for a mere three or so doublings of power, basically only the amount of processing-power gain we've seen since maybe 2002-2004, you've turned your nice, semi-portable ATX desktop computer into a wall-mounted monstrosity that has to literally vent its heat to the outside of your house. You'd have to install it like a fireplace, to say NOTHING of the power bills.
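Corey's arithmetic checks out; assuming a roughly 125 W chip (a plausible high-end figure for the era) and linear heat scaling, the doublings look like this:

```python
# Sanity-checking the scaling argument: assume a ~125 W CPU and heat
# growing linearly with die size, then compare against a space heater.
base_watts = 125
for doublings in range(4):
    watts = base_watts * 2 ** doublings
    share = watts / 1500                       # vs. a 1500 W space heater
    print(f"{doublings} doublings: {watts:>4} W "
          f"({share:.0%} of a space heater on full blast)")
```

Three doublings lands right at the 1000 W figure above.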

Corey said...

Maybe there is a partial solution, however.

I remember that one of the big selling points of electric cars like the Nissan Leaf is that they wouldn't actually draw any additional power up to a point, because coal power plants run at full blast all the time, even past peak hours, generating useless power that the cars would suck up.

Maybe that's the solution to increasing needs for super-computer style power, especially if budgets are limited (I'm thinking mostly for researchers here). Maybe the thing to do is to treat the sum of home computers, from PCs to game consoles, as part of a larger "computing grid".

Folding@Home is already the most powerful computer on Earth, considerably more powerful than the Cray XT-5. Why not take further advantage of that?

We probably put something like 90% of our computing capacity to waste, even when our computers are on (and I say this as a gamer, who uses his computer for a lot of intensive stuff). Perhaps we should start taking distributed computing more seriously, take better advantage of our finite resources. It would be a start, at least.

Rob said...

@Rob H., that's essentially what multicore CPUs do, since the emergent problem with miniaturization is how to dissipate the energy. Instead of one CPU core, simply site two or four or twelve on one chip die, and rely on the software geniuses to use all that stuff.

I tellya, though, as a software guy, it's a doozy. Single CPU stuff is much, much easier to debug.

Corey said...

Correct me if I'm wrong, but isn't multicore computing subject to Amdahl's law (heavily diminishing returns) as you add more and more cores?
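For readers who haven't met it, Amdahl's law says that if only a fraction p of a program parallelizes, the speedup on n cores is capped at 1/(1-p) no matter how many cores you add. A quick sketch:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the work and n the number of cores. Even at p = 0.9 the
# speedup can never exceed 10x, however many cores you throw at it.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 4, 8, 64, 1024):
    print(f"{n:>4} cores: {amdahl_speedup(0.9, n):.2f}x")
```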

David Brin said...

Corey gets post of the day! That was a terrific update on how Moore's Law is running into a stiff head wind -- even a wall -- in the race to reduce chip line resolution. When I was a working expert in the field (seriously! I helped invent... well, worked on the team that made early CCDs) we were fighting toward 1 micron resolutions. Now they are approaching 1/100 of that? Yeesh. No wonder the quanta are rebelling.

What's needed of course, is for new accomplishments to come in the area that's always lagged... software. Of course, parallel CPU processing is also key.


Robert... Asimov kept rebelling against the "solution" to intelligence that he came up with the previous decade. (I LIKED that!) In fact, I dissected this in FOUNDATION'S TRIUMPH.

Asimov needed to concoct inherent limitations to robot abilities, or they would simply become gods. They were ALREADY gods, de facto. Some of Isaac's solutions were bizarre and half-baked, like psychic powers. He never really trusted his first and most ingenious solution...

...which was typified by his first invention: the First Foundation, established on Terminus to use HUMAN SOCIETY as the great engine of wisdom-generation. I find it weird that this was his first notion... and he spent the next 40 years writhing and twisting to find new ways not to trust it.

David Brin said...

Been watching the Jeopardy! event.  Fascinating!

A commentary I found illuminating is by Kent Pitman:

I believe the one unfair aspect of the show was the input-output discrepancy.  Input should have been speech recognition. As Kent points out:

"IBM actually sells voice recognition software. This should have been a chance to showcase it."

Also, I believe Watson should have had to send signals to a "hand" to push the regular buzzer, same as the humans.

BTW... I have a particular use for speech recognition.  I attend a lot of conferences at which attendees have their laptops open in front of them, fiddling away while the speaker speaks onstage. (Funny about that; it would be rude if they read a newspaper!)  Thing is, I hope to offer a service at one conference where the people with laptops could have a window open while a speech recognition program scrolls the speaker's words as they are being spoken onstage.  This would let people take notes by selecting and copying passages as they scroll by.

Yes, I know it would have many errors! The notes aren't expected to be accurate or perfect, just a way to save talking points for later consideration.

Does anyone have a notion how to proceed with something like this?


Acacia H. said...

To be honest, I've never understood the knee-jerk reaction that the majority of scientists have concerning psychic phenomena. The claim that we cannot directly measure psychic abilities is not even a valid argument, seeing that when theories about the atom first came about, the tools needed to measure them were not available... and even today we do not have the tools needed to directly observe dark matter and dark energy (only their effects). Physics already has some fairly bizarre notions that have been proven - such as how observing an effect will influence the effect in question. So, why the disdain toward psychic phenomena, outside of its ties with religion?

As for Asimov's inability to accept his first notion concerning human society as an engine toward human development... given humanity's tendency to screw things up when given a chance, he had to create methods of dealing with humanity's screwups. The irony is that the very thing that disrupted the First Foundation was the very thing needed to keep it on track: mental manipulation abilities (psychic phenomena). If the Mule didn't have psychic abilities, if he had just happened to have a technology breakthrough that he used to conquer a region of space, then the mathematics behind psychohistory would have predicted such an occurrence and found a way around it.

Which might actually be an interesting rewrite of the Foundation Series: what if the Mule had used scientific methods in his conquest of regions of space... and methods that the Foundation would have used to get around a "rogue" scientific discovery allowing for a warlord gaining military ascendancy in the region.

Rob H.

Ilithi Dragon said...

Aren't there programs that allow documents to be shared on a network, with the ability to update in real-time? I don't think there is any existing application that streams a speech-to-text (StT) program's output into a shared 'chat' interface, but combining a real-time-update doc-sharing program with a speech-to-text program should give you more or less what you're looking for, albeit in a rather cobbled-together fashion.

Ilithi Dragon said...

Hmm... Something like Google's EtherPad might give you what you want, in combination with a StT program.

rewinn said...

News on the Citizens United front (representing a battle between humanity and the artificial life-forms called "corporations"):

It has recently been discovered that at least one of the federal Supreme Court Justices ruling in that case previously benefitted from campaign money spent on his behalf during his confirmation hearings, according to a supplement to a bar complaint.

I deliberately did not state whether the Justice voted for or against the outcome, since either way, the standard is not actual partiality for/against a party, but the possibility of a perception of partiality. Judges must be more pure than pure, because from them there is no appeal.

OTOH artificial life forms, whether robot or corporate, may not feel constrained by the Three Laws.

Tony Fisk said...

Could corporations get nominated to the judiciary?

Ilithi Dragon said...

Murray Hill, Inc. is running for Congress, so why not the SCOTUS?

David Brin said...

Robert, there are several fundamental reasons to doubt psychic phenomena. The CONTINUITY EQUATION is fantastically successful at measuring what is IN a box versus flows of stuff across the boundaries of the box. It IS the thing that led us to suspect the power of the atom. The human brain has many inputs. But what crosses the boundaries of the skull all seems accounted for.

Moreover, extraordinary claims demand extraordinary evidence. As science gets better, the feats CLAIMED for Psi keep getting smaller… just as the UFOs keep getting more skittish and hard to see as cameras flood the Earth. Hmmmmm

What if the Mule had used science?
People who find such re-imaginings interesting should not miss Harry Potter and the Methods of Rationality by Eliezer Yudkowsky
It is fascinating and vastly better-written than the originals.

Ian said...

Faced with budget restrictions, NASA engineers are rethinking future human space missions.

Nautilus sounds like a great idea - use existing technology as much as possible to reduce cost and development risk; make the vehicle reusable; use a single design for multiple missions by switching out specific subassemblies.

But how many great paper spacecraft have we seen over the years?

rewinn said...

I don't see any text in our federal constitution that limits what the President may nominate as a candidate for any office (except a replacement VP would have to fit the qualifications for President).

And I don't see any text limiting the Senate's power to "advise and consent" except for the then-existing rules of the senate.

Until a President nominates and a Senate confirms a corporation for federal office, there would be no "case or controversy" under which the question may be decided, so we'll have to wait until corporations take their rightful place on the Supreme Court before suing over it. Since nothing can compel a Justice to recuse him, her or itself from a case, we would have to simply hope that Justice Murray Hill will voluntarily recuse itself or rule with more impartiality than contractors show in administering federal contracts.

An advantage of corporations holding federal office is that they could hire contractors to fill those positions. When a contract Justice wishes to retire, instead of a messy and complicated nomination and vetting process, the corporation would simply locate another contractor. If special expertise is required, the corporation can substitute an employee filling the bill. And let us not forgo the benefits of outsourcing! With advanced telecommunications, we can save money by contracting out judgeships to call centers in other nations!

What Could Possibly Go Wrong?

Ian said...

Question re the Moore's Law discussion and the overheating issue - and I'm a total non-expert here.

Could the problem be addressed - for low-performance home and office applications - by shifting to cloud computing?

If most of the actual computation is being done in server farms, size and heat are probably more manageable issues.

Rather than 1000 watts cooking your house, 900 watts would be generated in a water-cooled server with an energy recovery system.

I suspect too that we may be approaching another sort of barrier - but the good kind - will demand for processing power in games applications stop growing once the machines can render high-def holographic 3D with full surround sound?

Sure, we will continue to want more power in scientific applications, but seriously, smartphones have reached the point where the principal barrier to further miniaturization is the size of the human finger.

Tony Fisk said...

wrt computer power and parallel computing.

Having referred to Flannery's latest tome in relation to where the megafauna went, I now want to point to a transcript where he and Robyn Williams discuss microfauna, and some real-life examples of parallel processing. Dig this excerpt:

Robyn Williams: It's like there's a kind of super intelligence as well. It's not just the individuals who might be tiny in their little brain powers, but you put it together and you've got something that's almost like an internet, something like a connecting system that makes their civilisation overall work as one.

Tim Flannery: That's right. It's interesting that in a sense that's where we and the ants start to differ, because there is no brain cast, there's no overall controller if you want of an ant colony. But the ants themselves are all genetically very closely related and they communicate with each other through pheromones. There are about 40 different pheromones that ants produce, and those 40 pheromones, along with about a dozen visual signals, provide more than enough interconnectivity between the ants to let them work seamlessly. Ants of course have democratic processes; they actually vote. They use these pheromones to vote!

Robyn Williams: You're kidding!

Tim Flannery: No, no.

Robyn Williams: Do they get a tie as well?

Tim Flannery: No. There's always a winner. They often vote when they've got to move into a new home, and of course the size and location of that home is tremendously important. Each individual ant is not very well able to assess that, and they use a method called Buffon's Needle Theorem to measure the size of this chamber they're going into. That basically involves just walking around the chamber till they cross their own pheromone path a certain number of times. They'll make a guess as to whether it's big enough for them or not and come out, and then they'll go to another one. But eventually a critical number of ants will have gone to one particular hollow that is sort of the best; on average it's got the most votes, you see the strongest pheromone trail leading to it. Once that occurs, the ants just pack up and follow that trail, go to their new home. But it's a decision that's actually critical to the survival of the whole colony, and it can only be made intelligently through pooling experience and knowledge.
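The chamber-measuring trick Flannery describes can be sketched as a toy Monte Carlo model. This is my own illustrative construction, not from the transcript: one "ant" lays a pheromone trail of unit steps, a second walks the same chamber, and the number of times the two paths cross estimates the chamber's area via the Buffon-Laplace relation (expected crossings ≈ 2·L1·L2 / (π·A)). Step lengths, step counts, and the square chamber are all assumptions.

```python
import math
import random

def random_walk(n_steps, size, step=1.0):
    """Random walk of unit-length segments inside a size x size chamber.
    Steps are clamped at the walls, which biases the estimate slightly."""
    x, y = random.uniform(0, size), random.uniform(0, size)
    segs = []
    for _ in range(n_steps):
        ang = random.uniform(0, 2 * math.pi)
        nx = min(max(x + step * math.cos(ang), 0.0), size)
        ny = min(max(y + step * math.sin(ang), 0.0), size)
        segs.append(((x, y), (nx, ny)))
        x, y = nx, ny
    return segs

def crosses(s1, s2):
    """Strict segment-intersection test via orientation signs."""
    (ax, ay), (bx, by) = s1
    (cx, cy), (dx, dy) = s2
    def orient(px, py, qx, qy, rx, ry):
        return (qx - px) * (ry - py) - (qy - py) * (rx - px)
    o1 = orient(ax, ay, bx, by, cx, cy)
    o2 = orient(ax, ay, bx, by, dx, dy)
    o3 = orient(cx, cy, dx, dy, ax, ay)
    o4 = orient(cx, cy, dx, dy, bx, by)
    return o1 * o2 < 0 and o3 * o4 < 0

def estimate_area(size, n_steps=400):
    """Estimate chamber area from trail crossings, Buffon-Laplace style."""
    first = random_walk(n_steps, size)    # the "pheromone" trail
    second = random_walk(n_steps, size)   # the second visit
    hits = sum(crosses(a, b) for a in first for b in second)
    L1 = L2 = float(n_steps)              # total trail lengths (unit steps)
    # Expected crossings ~ 2*L1*L2 / (pi * A), so invert for A:
    return 2 * L1 * L2 / (math.pi * max(hits, 1))
```

The estimate is rough (wall clamping shortens some segments), but a 10x10 chamber comes out in the right ballpark, which is all an ant needs for an up-or-down vote.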

Acacia H. said...

Here's an interesting (11-minute-long) video on solar technologies, including solar concentration towers in Spain, and the revelation that solar power concentrators in the U.S. that utilize heated oil to generate power and that paid off their loans are now generating power at around $3/kilowatt hour.

And it also points out a rather valid fact: without government handouts, coal, gas, and other power plants would be a hell of a lot more expensive, and electrical power costs would likewise increase in turn. Note, Obama stated his intent on eliminating subsidies to Oil... and Congress voiced its disapproval at his suggestion.

Rob H.

Rob said...

With respect to Amdahl's law, yes, there is a point for any program where the overhead of switching between the different threads of execution outstrips the actual computation being done.

More and more I answer questions like this, and assumptions expressed like "What's needed of course, is for new accomplishments to come in the area that's always lagged... software. Of course, parallel CPU processing is also key," with this:

"Software is hard. It's harder than hardware."

The two dovetail. I studied parallelism back in college almost 20 years ago; the problems and their basic solutions have all been modeled, and in fact, Microsoft has what amounts to a set of solutions for it in their latest "Task Parallel Library" for us developers. I did some research items today using it.

We'll be awhile catching up, we software guys. The basic problems are solved using mathematical reasoning, which is not something that many Americans get good training for.

Tony Fisk said...

I haven't been following the nitty gritties of parallel processing, but get the impression that the main thrust at the moment is a 'matrix' approach (each processor given the same problem with different bits of input to solve)

From my (25-year-old) University notes, I recall an alternative 'queueing' approach which had instructions pulled through as a processor (and the relevant inputs for the instruction) became available. I rather like this notion as it fits with how programs currently run on single-CPU systems, and wouldn't need a vast rethink in how computer languages operate (although languages might require augmentation to make full use of the facilities)

Tricky, though: each instruction would need to know what other instructions it was dependent on at any given time so that the queue could be ordered efficiently. The process of working that out would require a few cycles as well... which is what Amdahl's Law is getting at?
Hmmm! I wonder if 'Aspect Oriented Programming' would be useful here?
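Tony's 'queueing' idea is essentially what architects call dataflow scheduling: an instruction is dispatched the moment every instruction it depends on has produced its result. A minimal sketch (the function name and the toy four-instruction program are mine, purely illustrative):

```python
from collections import deque

def dataflow_order(deps):
    """deps: {instr: set of instrs it needs}. Returns a dispatch order
    in which each instruction runs only after all its inputs are ready."""
    pending = {i: set(d) for i, d in deps.items()}
    dependents = {i: [] for i in deps}
    for i, d in deps.items():
        for j in d:
            dependents[j].append(i)
    # The "queue": everything with no outstanding inputs is ready now.
    ready = deque(i for i, d in pending.items() if not d)
    order = []
    while ready:
        i = ready.popleft()
        order.append(i)
        for j in dependents[i]:      # this result may unblock others
            pending[j].discard(i)
            if not pending[j]:
                ready.append(j)
    return order

deps = {
    "load_a": set(), "load_b": set(),
    "add": {"load_a", "load_b"},     # needs both loads
    "store": {"add"},                # needs the sum
}
order = dataflow_order(deps)
```

The hard part Tony identifies - knowing what depends on what, cheaply - is exactly the bookkeeping real out-of-order CPUs spend silicon on (scoreboards, register renaming), which is where Amdahl's Law starts collecting its toll.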

Tony Fisk said...

Adherents of snotty elves and methods of rationality might like to check out 'The Last Ringbearer' (although, if Mordor really was the bastion of science and reason in Middle Earth, surely they'd let a little light in over there?)

surric: an orucean sage. Father of sporc.

Tony Fisk said...

Robert, you may have heard of a few extreme weather events that have been trashing large sections of eastern Australia?
The gov't has been raising a levy to foot the damage bill and has been cutting certain initiatives (like the solar flagship program) as well.

What a lot of environmental groups have pointed out is that there is a substantial ($600mil) tax exemption on a program to convert crude oil, which hasn't been touched. This exemption was put in place to assist a fledgling industry... thirty years ago! I think it's time it stood on its own.

Tony Fisk said...

Speaking of extreme weather events, here we go again!

David Brin said...

Someone follow this for us!

Tony Fisk said...

Follow up...
More on 'Tyche', a hypothetical super-jovian mass body in the Oort cloud, whose existence is questionable, although it's been talked about for a decade.

(trivia: when looking at Oort cloud diagram, 1ly ~ 65,000 AU)

forwr: a hypothetical moon orbiting Tyche. In the Oort cloud, the inhabitants would be very blue!

Corey said...

Rob, I have another question regarding parallelization.

I realize there are inherent advantages in terms of processing efficiency to parallelization, and SIMD/MIMD computing, but even putting Amdahl's Law aside for a moment, isn't there a point at which parallelization gets to be like trying to spin straw into gold?

What I mean is that, for a given architecture, your clock speed dictates that (iirc) a transistor somewhere in your chip flips its state at a given frequency. In a 3 GHz chip, you get 3 billion changes of state per second between all your transistors.

Adding cores, unless you're talking about making a bigger CPU, doesn't fundamentally change that. Isn't it a case where a single 1-billion-transistor processor core should have exactly the same theoretical processing power as two 500-million-transistor processor cores?

I realize things like parallelization, out-of-order instruction, and whatnot probably really help in taking advantage of that theoretical processing power to most efficiently use it for a real-world task, but SURELY there's a point at which you're using the processor at a high enough efficiency that taking the same number of transistors and having them execute the same code in an increasingly parallelized fashion doesn't get you anything further.

After all, at the end of the day, isn't it still the same math problem to chunk out, with the same number of transistors to do it? It's the same reason why Hyperthreading (splitting real cores into virtual cores) doesn't get you anything past a point, and shouldn't no matter HOW parallelizable your code is.

If you're not actually adding transistors, there should come a quick point of no benefit to having two cores execute two pieces of a problem half as fast, vs. one core executing one piece at a time twice as fast.

Is this not so?

Corey said...

A lot of great conversation by the way!

It's nice to be past the topic of Israel, and onto something I actually KNOW about and can comment on :D

Tony Fisk said...

Does anyone want to follow this?

US Uncut (deriving from UK Uncut.)

"Now is the winter of our discontent made glorious summer by this son of York!"

- someone who may or may not have been a bad guy.

gedep: see GetUp

rewinn said...

On parallel computing and whatnot: coming from a point of near-ignorance, can there be a large number of problems that can be essentially solved with foreseeable amounts of computing power?

For example, for Matrix-level sensory stimulation, we can (almost certainly) get sound and vision approaching the limits of human perception; the chemical senses may be harder, but the limit may be in the output device (molecules) more than in computing what the appropriate molecule may be; haptic and gravitic effects may be even harder but, again, it may be a question of the output device (e.g. a gravity generator) rather than computing what the gravitational effect should be.

Other important problems, e.g. navigation & collision avoidance for everyone's personal aerocarmarine, may well be solvable by giving each vehicle processors foreseeably smarter than what we got. So what's left?

Now I'm sure there must be problems requiring hugely more processing power, e.g. computing the position of every molecule in the galaxy, or simulating a convincing conversation between Mother Teresa and N*SYNC. And who knows whether some desirable technology, e.g. FTL based on computing quantum effects, may depend on way way more computing power.

But setting aside these huge and/or speculative efforts, might not a pretty good human civilization be realizable with "good-enough" computational infrastructure?

Tony Fisk said...

And who knows whether some desirable technology, e.g. FTL based on computing quantum effects, may depend on way way more computing power.

'Eschaton.. says... no!'

Duncan Cairncross said...

Hi Robert

"and the revelation that solar power concentrators in the U.S. that utilize heated oil to generate power and that paid off their loans are now generating power at around $3/kilowatt hour."

The guy said 3c/kWh - a bit better than $3!!!!

You had me worried for a bit!


I agree about YouTube; the problem is we are used to reading at about 1000 words/min (I will be amazed if any of the usual commentators are less than 700 words/min) and speech is about 70 words/min

We would much rather have the transcript!

I think Dr Brin's work on speech to text for conferences is part of the same problem
Speech is just so damn slow!!

sjmalarkey said...

The comments' characterizations of Amdahl's law leave a lot to be desired. The law actually states that if there is a portion of the computation that cannot be parallelized, the time for that portion will eventually dominate. The Wikipedia article referenced elsewhere is actually quite good. It's also true that for some kinds of calculations there are ways around it. John Gustafson pointed out such a way in 1988 at Sandia.
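Both laws are one-liners, and seeing them side by side makes the 1988 point concrete. A sketch (the 5% serial fraction and 1024-core figure are arbitrary examples of mine):

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Amdahl: fixed problem size. The serial portion eventually
    dominates, capping speedup at 1/serial_fraction forever."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

def gustafson_speedup(serial_fraction, n_cores):
    """Gustafson's 1988 way around it: scale the problem up with the
    machine, and speedup grows almost linearly with cores."""
    return n_cores - serial_fraction * (n_cores - 1)

# With 5% inherently serial work on 1024 cores:
#   amdahl_speedup(0.05, 1024)    -> ~19.6x (hard ceiling of 20x)
#   gustafson_speedup(0.05, 1024) -> ~973x
```

Same serial fraction, wildly different outlooks: Amdahl asks "how fast can I run today's problem?", Gustafson asks "how big a problem can I run in today's time?"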

The biggest problem is that we don't know how to use parallelism for many problems, and when we do, it's really hard to program them.

You should probably look up Moore's law when you are checking Wikipedia. It doesn't state that things get faster with time, just more compact. In the past, that's meant faster, too.

None of this has anything to do with imprecise processing, which is a technique that's been used for years in signal processing. People have gotten out of the habit of thinking that way since it hasn't been necessary and it didn't help much when implementing computations in a standard CPU. It's interesting to see that it's being brought out and dusted off again.

Unknown said...

Reason 0: "Evil will always triumph, because Good is dumb!" (Spaceballs)

Corey said...


The problem is that, while cloud STORAGE is a wonderful thing for smaller files (as is known to users of Youtube or Flickr), it's bad for constant access of large files due to absurd latency and bandwidth issues.

For computing, the bandwidth and latency issues get even worse. It also doesn't scale well. Imagine trying to service just 10,000 people from a single server; it would take an absurd level of computing power, and would be considerably more expensive than just having them each own one of 10,000 PCs.

Rewinn said:
"But setting aside these huge and/or speculative efforts, might not a pretty good human civilization be realizable with good-enough' computational infrastructure?"

Rewinn, this question is not only fair to ask, but applies to far more than just computing.

Our material-driven, industrial capitalist society's biggest pathology is that its entire existence revolves around a fallacious assumption of never-ending growth for the sake of never-ending growth.

It's why we're willing to overpopulate our planet, destroy our biosphere, and wage war over finite resources, all actions that, without a doubt, cause a net decrease in the average quality of life of humans (and other assorted sentient species, if we're talking the entire biosphere).

I think our entire fixation with Moore's law is just a symptom of this larger fallacy of thinking, which assumes the way to progress humanity is just never-ending expansion of industrial and economic capability. In reality, technology as a whole will move ahead without continuation of Moore's Law, and we'll find new and better ways to employ that technology to grow as a civilization, but in ways that don't necessarily just boil down to 19th-century notions of greater economic capacity. Who knows, maybe the end of Moore's law is a good thing, as it will help wake us from this horrible self-perpetuating nightmare in our thinking about how best to grow and improve as a global society.

netsettler said...

@rewinn ...

Since you mention corporations and the three laws, you might be interested by my 2009 article Fiduciary Duty vs. The Three Laws of Robotics, which takes the strong position that not only are so-called legal people (that is, corporations) not bound by the Three Laws, but in fact they are required by the rules of law in modern business to act like sociopaths. That is, they are not allowed to be what most people would call ethical, caring about entities other than themselves, even if they want to.

(David's post to Contrary Brin titled Jokes, predictions and serious prospects for a changed world, also from 2009, contains his thoughts at the time in response to my article.)

--Kent Pitman

Tim H. said...

Moore's law also works to reduce the energy required for a given task; my Core 2 Duo mini is overkill for what I do with it, but uses less power than the PM7500 would, in monochrome at this resolution, to do the same thing. Other areas of technology are also much more efficient: the 15~20 highway MPG that used to be acceptable for a family car is now considered thirsty for a muscle car, and 100,000 BTU furnaces are commonly replaced with 40~50K units, which do a better job. I see a workable future, if the mammon worshippers back off a little.
"hewsp" I think my cat made almost exactly this noise when it coughed up a hairball last night.

Tim H. said...

Not sure how much they'll have to say, but I like it.

Acacia H. said...

@netsettler: This is a constant theme in the "Journal of Business Ethics" where one of the primary debates is whether it is unethical to have a company abide by the concept of the social responsibility of business. Yes, you heard me right: there are business ethics philosophers who believe it is contrary to business ethics to utilize concepts of social responsibility, working with employees, and generally doing anything that doesn't maximize the profits of the company and the primary shareholders.

What's worse is that this is still a prevailing belief in the business community... and over the short run they are being proven right. It is more profitable to be a complete bastard, smash your competition into the ground, force government to ensure you have a monopoly, and then screw over your customers under government-mandated protection, than to operate under a condition of fair competition, doing right by employees, and upholding the tenets of social responsibility.

Of course, you could say that events in Egypt show that ultimately this philosophy fails. After all, if a government falls because IT failed to uphold the tenets of social responsibility, then ultimately can any corporation stand up against a horde of former consumers who demand justice?

Rob H.

LarryHart said...

Dr Brin on Asimov:

Asimov needed to concoct inherent limitations to robot abilities, or they would simply become gods. They were ALREADY gods, de facto. Some of Isaac's solutions were bizarre and half-baked, like psychic powers. He never really trusted his first and most ingenious solution...

...which was typified by his first invention. The First Foundation established on Terminus to use HUMAN SOCIETY as the great engine of wisdom-generation. I find it weird that it was his first notion... and he spent the next 40 years writhing and twisting to find new ways not to trust it.

To me first reading it in 1980, the "Foundation" trilogy was perfect as is. Oh, I wanted to read MORE Foundation stories, but I wanted more of the same TYPE of story.

Alas for me, when Asimov actually wrote more Foundation books beginning in the 1980s, he seemed obsessed with demonstrating that the books existed in the future of the same timeline that his Robot stories took place in.

Sure, he did what he could to reconcile the difficulties, as you (Dr Brin) did in your later novel. And you all did the best you could with what you had to work with.

But that's just it. You (including Asimov himself) were stuck doing the same sort of "explanations" that George Lucas found necessary to shoehorn the original "Star Wars" into the Star Wars Universe. With a LOT of willful suspension of disbelief on the part of the viewer, it can be done, but not satisfyingly.

And it's all so unnecessary. The problems stem from the essential differences between the Robot and Foundation series(es). Had they been left separate--as they had been for thirty years--there could have been plenty of room for later Foundation stories more in line with the original trilogy. More Robot stories too, for that matter. Just not the SAME stories.

LarryHart said...


Physics already has some fairly bizarre notions that have been proven - such as how observing an effect will influence the effect in question.

May I ask an amateur question here, keeping in mind that I have no physics training beyond 100-level college courses.

It always seemed to me that "observation influencing the effect" is a slight mischaracterization. What influences the effect is the observabILITY of the particles in question, not whether they were IN FACT observed. For instance, the position and/or velocity of a tiny particle is altered by hitting it with a photon, which is a necessary condition for observing the particle. Therefore, the particle's characteristic is changed by the fact of "observing" it. But not really by "observing" it in the sense that someone has to be watching.

The particle's motion is changed by the act of MAKING it visible, not by whether or not someone is actually looking at it.

My question: Am I missing something fundamental here, or do I have a point?

LarryHart said...

Robert again:

If the Mule didn't have psychic abilities, if he had just happened to have a technology breakthrough that he used to conquer a region of space, then the mathematics behind psychohistory would have predicted such an occurrence and found a way around it.

That's exactly what first bothered me about Asimov's new Foundation books in the 1980s. In the earlier novels, The Mule was a completely unforeseeable phenomenon which was able to knock psychohistorical prediction askew. It made no sense (to me) to retcon him into a mere offshoot of a colony of agents who were working FOR the cause of psychohistory itself. As the latter, his effects on history should have been EASILY predictable AND easily dealt with.

One could argue that he WAS dealt with, and that works ok as an explanation, but again, not a story-wise satisfying explanation. The suspense that drives the final half of the original trilogy is all meaningless if one goes that route.

LarryHart said...

still more Robert:

what if the Mule had used scientific methods in his conquest of regions of space... and methods that the Foundation would have used to get around a "rogue" scientific discovery allowing for a warlord gaining military ascendancy in the region.

The Mule was a problem not just because he was ascendant in a region. The essential threat of the Mule was that he was a megalomaniac who had no reason to care about humanity. Ayn Rand with superpowers.

His physical deformity making him an outsider and a true "mule" (trying not to spoil too much here) was at least as important as his superpower.

Did I just argue against my earlier post there? Not sure.

LarryHart said...


not only are so-called legal people (that is, corporations) not bound by the Three Laws, but in fact they are required by the rules of law in modern business to act like sociopaths. That is, they are not allowed to be what most people would call ethical, caring about entities other than themselves, even if they want to.

Which is EXACTLY why the Citizens United decision is insane. It makes no sense at all to extend rights of human beings--who are expected to act as family members and good citizens as well as consumers and laborers--to corporations who are FORBIDDEN from doing so. Not to mention that human beings have emotions and limited lifespans, whereas corporations have neither.

One might as well argue that fictional characters deserve human rights. Which in a way is exactly what the USSC DID argue.

LarryHart said...

...and that blog post about the Three Laws of Robotics and corporations made me think...

A corporation should be required to FOLLOW the Three Laws (incorporated into its charter) in order to QUALIFY for rights equal to human beings. Otherwise, they deserve only the rights of babies and the mentally incompetent. If that.

Rob said...

@Corey - If you're not actually adding transistors, there should come a quick point of no benefit to having two cores execute two pieces of a problem half as fast, vs. one core executing one piece at a time twice as fast.

We usually have to take clock speed as a constant when developing an algorithm. What you're doing here, though, is restating Amdahl's Law while varying clock speed.

And what you get when you do that is a "yes, kinda" answer to your question. *IF* you can burst your clock speed to double for your computation, then the single-core computation will always complete in less time than a dual-core equivalent at the same speed. If you can't burst your clock speed, then the dual-core algorithm will complete in roughly 60% of the time. The overhead introduced is additive; the more execution threads you add to the problem, the more overhead, and eventually your algorithm executes in the same amount of time as a single execution thread would have.
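That additive-overhead point can be put in a toy cost model. All the constants here are made up for illustration (10% serial work, a fixed coordination cost per thread), not measurements:

```python
def runtime(threads, work=1.0, serial_fraction=0.1, overhead=0.01):
    """Toy cost model: serial part, plus parallel part divided across
    threads, plus an *additive* per-thread coordination overhead."""
    serial = work * serial_fraction
    parallel = work * (1.0 - serial_fraction) / threads
    return serial + parallel + overhead * threads

# There is a sweet spot; past it, adding threads makes things slower.
best = min(range(1, 65), key=runtime)   # lands around 9-10 threads here
```

With these numbers, 64 threads is actually slower than the sweet spot, which is exactly the "same amount of time as a single thread" trajectory Rob describes.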

Microsoft uses a test called a "hill-climbing algorithm" to try and identify the sweet-spot for execution threads, in its parallelization library. Dr. Sean Luke at George Mason University has a really clear description of that on his web site somewhere. "Really clear" if you're a junior or senior in a CS program, that is.

The fact that Microsoft (and Intel, and a couple others) abstracts away a lot of the pain related to this sort of software development is nice, but they just released their stuff two years or so ago on average, and it has yet to penetrate the rank-and-file professionals.
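I can't speak for the internals of Microsoft's library, but hill climbing in general is simple to sketch: start at some thread count, move to a neighbor only if it measures cheaper, and stop when neither neighbor helps. The cost function below is a stand-in of my own; a real scheduler would measure actual throughput, and coping with noisy measurements is where the cleverness lives.

```python
def hill_climb(cost, start=1, max_threads=64):
    """Greedy local search for the thread count minimizing cost(n)."""
    n = start
    while True:
        better = [m for m in (n - 1, n + 1)
                  if 1 <= m <= max_threads and cost(m) < cost(n)]
        if not better:
            return n          # no neighbor improves: a local optimum
        n = min(better, key=cost)

# Stand-in cost: 10% serial work plus 0.8% overhead per thread.
toy_cost = lambda n: 0.1 + 0.9 / n + 0.008 * n
sweet_spot = hill_climb(toy_cost)     # climbs from 1 up to the minimum
```

Because this toy cost curve is unimodal, the greedy walk finds the true sweet spot; on a noisy real system it could stall on a bump, which is why production implementations add smoothing.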

Rob said...

@Tony Fisk -- The matrix approach is found in modern GPU's, in the SIMD-type circuitry in CPU's and a couple other places. So you see it in 3D games and Excel graphs. It comprises about a seventh of the approaches computer scientists have identified. It's low-hanging fruit.

@sjmalarkey -- There are always portions of the computation which cannot be parallelized. Thus we have to consider the problems described by Amdahl's Law every time we develop a threaded algorithm. I'm actually dealing with that today with some of my code, where my parallelized portions are humming along quite nicely but Microsoft's WPF engine, which uses the output, is bluntly serial, and is taking a boatload of time to process.

gmknobl said...

Dr. Brin,

With regards to what you do on the 'net being there after you're dead, check out Brian Brushwood's project (with the help of Patrick Delahanty) named You can google it to find more info.

I think you'll find it interesting; though it's not quite what you were suggesting, it's the first of its kind, I think.

Sean Strange said...

Corey, there was a time not long ago when I would have agreed with your criticism of growth economics, but on deeper reflection I don’t think it passes the reality test. Since the first microbes started converting the dead rock of our planet into biomass, life has been expanding into every available niche, or dying trying. To say that we must now say “no” to this natural imperative is to say that we are done growing and should start dying. I doubt this is in our power, but even if it were it would be an invitation to stagnation and suicide (e.g. the Voluntary Extinction Movement, a philosophical dead end if there ever was one!).

On the issue of limits to growth on a finite planet, the obvious fallacy is that we aren’t limited to the resources of this planet! The sun produces more than 20 *trillion* times our current global power usage, and the non-solar mass of our solar system is 500 times the mass of Earth, so clearly there is a lot of room for growth if we can become a space-faring civilization.

Having said that, if we project exponential growth far into the future, we will exhaust all the energy of our sun, then our galaxy and then the accessible universe in a surprisingly short time (less than 10k years at 2% annual growth), so I agree that at some point exponential growth will probably need to stop (unless we find a way to get to other universes, in which case we can party on!). But from a cosmic perspective, we are so far from that point that it’s not even worth worrying about, and it would be absurdly premature and short-sighted to give up on our expansion and limit ourselves to this planet forever. Given the hostile universe in which we live, this is nothing but cosmic suicide (see the dinosaurs)!
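Sean's back-of-envelope numbers check out. Using round figures of my own (solar output ~3.8e26 W, current human use ~1.8e13 W, and a very rough 1e23 sun-equivalents for the observable universe - all assumptions, not his), the exhaustion timescales fall out of a couple of logarithms:

```python
import math

SUN_W = 3.8e26     # total solar output, watts (round figure)
HUMAN_W = 1.8e13   # rough current global power use, watts
GROWTH = 1.02      # 2% annual growth

# Sun/human ratio: the "more than 20 trillion times" figure.
ratio = SUN_W / HUMAN_W

# Years of 2% growth until usage equals the sun's entire output:
years_to_sun = math.log(ratio) / math.log(GROWTH)        # ~1,500 years

# Crudely treat the observable universe as 1e23 sun-equivalents:
STARS = 1e23
years_to_universe = years_to_sun + math.log(STARS) / math.log(GROWTH)
# Total comes out around 4,000 years: comfortably "less than 10k".
```

The striking thing is how little the astronomical numbers matter: multiplying the energy budget by a trillion only buys about 1,400 more years at 2%, because exponential growth eats factors, not amounts.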

“There is no way back into the past; the choice, as Wells once said, is the universe—or nothing. Though men and civilizations may yearn for rest, for the dream of the lotus-eaters, that is a desire that merges imperceptibly into death. The challenge of the great spaces between the worlds is a stupendous one; but if we fail to meet it, the story of our race will be drawing to its close. Humanity will have turned its back upon the still untrodden heights and will be descending again the long slope that stretches, across a thousand million years of time, down to the shores of the primeval sea.” --Arthur C. Clarke

rewinn said...

Y'all have well stated the problem with the fiduciary duty of corporations to be sociopathic with respect to humanity.

So-called business "ethics" differ from ethics of human behavior in that the latter refer to an objective phenomenon - humanity - whereas the former refer to completely artificial entities. Thus we can meaningfully discuss what innate human morality may or may not be ... customarily it is founded on notions of community or something sociobiology-like ... it makes no sense to do the same with respect to creations of law, because you can always simply change the law with a speed and ease that you can't do with human nature. Why would a corporation have or have not a duty to do thus-and-so? Because it's written into corporate law! Thus a corporation may have an ethical duty to give free lemonade on Tuesdays, if that is written into its home state's legislation, or it may have an ethical duty to be a total rat b@st@rd and suck the life out of humanity, if that is written into legislation (typically it's not expressed so clearly, but the life-sucking-rat-b@st@rd element is inferred).

Another way to look at it is that corporations are in revolt against humanity. If a corporation were attached to a piece of soil and waved a flag, we'd have no trouble recognizing this and dealing with the fact that just because it has a lot of human beings running around executing its will doesn't mean that it has the same rights as a human being. We would also have no problem recognizing the essentially totalitarian nature of corporations (or at best oligarchic) and acting accordingly. And if corporations had lands and flags, we would have no problem ignoring its internal ethical obligations in putting them back under our control asap.

I don't see how humanity wins this one, but I'm willing to be surprised!

David Brin said...

You all are being very active! (argh)

Fun article re dolphins and I know some of the researchers cited. Still, it is based on the premise that dolphins are fully “intelligent.” Work such as this is terrific and wholly worthwhile. But it always concludes that these bright animals are “close but no cigar.”

We will face the uplift question, regarding dolphins. It will be a tough call.

Robert, even by pure darwinian logic of survival and paying everything to stockholders, the present situation re corporate governance is insane. The CEO caste is supposed to be brutally competitive with each other, and supervised by directors who fiercely control company officers. This caste is now, instead, indisputably conspiratorial, inbred, inept, delusional and corrupt.... and utterly parasitical, acting relentlessly against the stockholders’ best interests, while using stockholder interest as their ongoing excuse.

The “NBA rationalization” for titanic executive pay packages is a genuinely criminal scam. It is premised on the notion that good executive managerial skill is as rare as mutant-tall basketball players, and hence is immune to the very same market forces that the executives claim to worship!

If their religion - (market forces correct all distortions) - were true, then recent interstellar level pay scales would have attracted all the brainy people from other fields of human endeavor -- until those salaries self-corrected back to reasonable levels. The fact that such migration has not occurred and no correction has happened, is excused by claiming that there are only a couple of hundred mutant-level managerial geniuses, who are as many standard deviations above average as LeBron is in B-Ball.

Only... LeBron can point to actual metrics to support his claims of mutant superiority. The CEO caste can do no such thing. Compensation package arguments do not ever show anything like definitive cause and effect!

Moreover, the incestuous, circle-jerk nature of the oligarchy makes such claims ludicrous. Appointing each other onto each others’ boards, these small cabals of golf buddies make deals to jack up each others’ compensation, evade insider trading rules, and generally behave EXACTLY like the aristocrats whose trade-limiting conspiracies ruined every other free market system.... till Adam Smith came along to denounce them.


I felt the same way about Asimov combining his 2 universes. He wound up concluding (correctly) that the only way to do it is for robots to enslave humanity and keep us stupid for 25,000 years. But why? “For our own good?” In FOUNDATION’S TRIUMPH I show that Isaac hinted at “chaos”... a mental disease that explains... everything!

Larry you are right that PHYSICALLY the photons carrying light from the object to your eye play a big part in the effect. But in quantum mechanics, it is the math itself... collapsing the probability waves... by which the observer does the real magic. Read Greg Egan!

gmknobl sorry I don’t go to sites without at least a description

Tim H. said...

David, CEO compensation is as if the UAW had to negotiate with the Teamsters for new contracts. Yes, you described it perfectly, but not loudly enough.

Tony Fisk said...

Rob H said Of course, you could say that events in Egypt show that ultimately this philosophy fails. After all, if a government falls because IT failed to uphold the tenets of social responsibility, then ultimately can any corporation stand up against a horde of former consumers who demand justice?

Which is why I was pointing out the UKuncut (and now the USUncut) movements. Basic gripe is: why should communities have to suffer the closure of libraries etc. when senior management continue to rake it in?

There are enough narcissistic individuals in positions of power as it is without having corporations getting in on it. I suggest the solution is to categorise such entities as being temperamentally unsuited for certain roles.

Tim Bray (when he was still at Sun) ran a competition on a parallel processor to see how fast people could run a set task. (link)

@cosmist: to suggest that the choice is either growth or death is a fallacy (indeed, to populate is, ultimately, to perish!). There are systems which have persisted in stasis for a long time (I recall reading that one of the longest-surviving companies is a Dutch cafe!). Maintaining the balance is a tricky act of dynamic equilibrium. It is alluded to in Gleick's book on chaos theory (life as an emergent system surfing the collapsing wave of entropy). David discusses it in Earth (competition vs cooperation). More recently, Flannery ('Here on Earth') has considered it from an Earth-systems perspective (he uses Gaia vs Medea metaphors, although he treats them as systems and roots his arguments firmly in observation. Well worth a read, even if you don't agree with him).

@Larry, I was going to say that you were on the right track wrt understanding uncertainty. David's response prompts me to ask 'what is an observer?'

Is it who sees the photon from an object? Is it the photon itself? The latter proposition would be intriguing since, travelling at light speed, relativity suggests time is frozen. How can any change be perceived in that state? How does it interact?

nonchiew: tobacco additive that discourages anti-social behaviour.

David Brin said...

Anyone seen this?

LarryHart said...

Tony Fisk:

@Larry, I was going to say that you were on the right track wrt understanding uncertainty. David's response prompts me to ask 'what is an observer?'

My beef with the (admittedly hypothetical) Schrödinger's Cat experiment is that the cat would be an observer.

LarryHart said...

Also...I'm fortunate enough to be an engi-nerd who married another engi-nerd.

We purposely didn't know the gender of our child ahead of time, and we often talked about how the baby's probability wave would collapse at the birth. Of course, my sister-in-law, by insisting that it would be a girl and brooking no opposition, managed to influence the probability sufficiently, so a girl she was indeed.


Tony Fisk said...

My beef with the (admittedly hypothetical) Schrödinger's Cat experiment is that the cat would be an observer.

And no doubt madly scrabbling around for any bits of lead sheeting it could find to block those fatal emissions!

... did Sylvester and Tweetie ever try something like this?

Same thing with children (except we collapsed the wave when we couldn't think of any boys' names...)

BCRion said...

This graph says a lot about why the system is broken:

soc said...

Negative thoughts counteract the effect of painkillers

Corey said...


I enjoyed reading your analysis of the situation, but while I find your thinking solidly grounded, I find a few problems with how you approached things.

First, your explanation of evolution and ecology isn't too far off the mark, but the situation is more nuanced than "grow or die". Most species spend most of their existence doing neither. Species expand to take advantage of new resources and new ranges because it helps give higher chances of continued survival, though that really applies to groups of species, as changing to adapt to a new ecological niche usually results in one becoming a new species (a process known as adaptive radiation, something that usually takes place after ecological upheavals have wiped out resident species in those niches). Taking it as high as families, or perhaps orders, however, does certainly make this true. Miacids, for instance, no longer exist, but their success is attested to by their biological legacy: their single species or small group of species evolved into every member of the order Carnivora. That single animal became dogs, weasels, bears, other feliforms, and other related species. It's a downright staggering diversity.

That said, growth is not indefinite. All species are constrained in their ability to grow, and it's that very fact, the fact that nature doesn't allow endless growth, that allows complex life to exist. If you're an ecological producer, like a plant or green algae, then your expansion and growth of numbers means that organisms evolve to take advantage of your biomass, so your growth is kept in check. Producers can't really cause harm unless they drive other producers to extinction (like invasive Kudzu, for example), but consumers can. If you're a primary consumer, however, and you grow to take advantage of producers, then secondary consumers evolve to consume you, so your numbers are kept in check. If you're a secondary consumer, then you're kept in check by tertiary consumers and available food. If you're a tertiary consumer, even one with no predators, then your numbers vary with your food supply, and so, you, too, are kept in check.

Species also can't expand indefinitely into alternative ranges or niches, because extant species already occupying them will already be vastly better suited to that ecological role. Felids may be excellent terrestrial hunters, I'd even say the best terrestrial hunters there are without having to think on it too much, but don't expect one to jump into the water and turn its limbs into fins anytime soon. Why? Because mammals already did that (yes, you heard right, they went water->land->water), and dolphins and whales and the like already have countless millions of years of fine-tuned adaptation to that role, which ensures that some cat isn't going to learn to swim and hold its breath and out-compete them.

As I said, adaptive radiation usually only tends to take place after ecological upheavals have freed up ranges and niches by killing the extant species.
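The "kept in check" dynamic Corey describes can be sketched numerically. This is my own illustration (the textbook Lotka-Volterra predator-prey model, with made-up parameters), not anything from the comment:

```python
# Illustrative sketch: a consumer keeps a producer's numbers "in check".
# Classic Lotka-Volterra predator-prey equations, integrated with naive
# Euler steps. All parameter values here are invented for illustration.

def lotka_volterra(prey, pred, steps=50000, dt=0.001,
                   a=1.0, b=0.1, c=1.5, d=0.075):
    """a: prey growth rate, b: predation rate,
    c: predator death rate, d: predator growth per prey eaten."""
    history = []
    for _ in range(steps):
        dprey = (a * prey - b * prey * pred) * dt
        dpred = (d * prey * pred - c * pred) * dt
        prey += dprey
        pred += dpred
        history.append((prey, pred))
    return history

hist = lotka_volterra(prey=10.0, pred=5.0)
peak_prey = max(p for p, _ in hist)
print(f"prey peaks near {peak_prey:.1f} and then falls: bounded, not 'grow or die'")
```

The point of the sketch is just that neither population grows without limit: each boom in prey feeds a boom in predators that pushes prey back down, an oscillation rather than indefinite expansion.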

Corey said...

The second problem is that while your thinking isn't too far off the mark in terms of traditional ecology, I can tell you, as a biology student whose foremost focuses are ecology and evolution, that humanity is NOT a traditional ecological construct.

An acquaintance of mine, one with a line of thinking very much like yours (something I appreciate in both cases), once equated humanity to fruit flies in a lab. He said the problem was that our "jar" was getting too small for us.

This, however, is my primary point. The problem isn't that our jar is too small, it's that after 220 million years of mammalian evolution, we STILL think like fruit flies. You're right, in time humanity will probably take to the stars, and in the short term, we might even find some *extremely limited* colonization capability here in our own solar system. That said, I would encourage you to consider time constraints here. Even assuming we could migrate more than a few million people off this planet while limited to subluminal propulsion technology, it will be decades, if not centuries, before that becomes realistic on that sort of scale. We're probably 30 years away from establishing even a few humans, or a few dozen humans, just on permanent bases on the Moon, or even temporary bases on Mars. FTL travel may not even be possible, and while our aversion to limits as a species gives me no doubt that any way to achieve it that might exist WILL be discovered by us if it's there to discover, even if it means we have to build a ship and give it FTL capabilities by propelling it directly with the sheer power of our own obstinance, this is something for which we have no timetable at all.

This isn't Mass Effect, and exponential growth isn't something we need to worry about curtailing in 10,000, or 1,000, or 100 years, but rather right now, because while we can dream of the possibility of human existence elsewhere, maybe, someday, these are problems that are cripplingly staring us in the face RIGHT NOW.

This is where the flaw in our thinking comes in. I'm not saying, nor would I say, that industrial capitalism is bad. It was something our society began adopting as a way to give more material wealth to people to make us a more powerful and successful species, and combined with 200 years of social evolution, I'd say we've been very good at putting that power to wise [and dare I say even noble?] use.

Corey said...

The problem I have isn't with our technological answers, the problem is that at some point, when the time came, we failed to ask ourselves the correct questions.

Being ever the fruit flies that we are, we saw that growth, as defined by 19th century industrialism, while it brought problems, brought, on the whole, the potential for greater quality of life. Where we failed in our thinking was that we began so strongly equating that growth with the overall goals of our species that we began looking at growth as an end instead of a means; we began looking to the growth itself, instead of the reasons we pursued it.

This was partially pushed, in America, by power-hungry business moguls who wanted their wealth and empires to become vast without limits, as a means of self-benefit, men whose goals were so antithetical to the welfare of our nation that history remembers them by the name "robber barons".

If I had to credit one man with the foresight to begin seeing the flaws in humanity's vision for what they were, it would be Theodore Roosevelt. Roosevelt came as an 11th-hour voice of reason in a time of real problems for the US. At the time of his taking office, it was often said that between Roosevelt and JP Morgan, it was difficult to know which was, in fact, the most powerful man in the nation. For our nation's sake, it's fortunate that he won that battle, instituting the beginning of our nation's antitrust laws to ensure that society dictated the proceedings of business, and not the other way around.

To me, however, his true contribution was his realization of the far graver, and far less visible, mistake our species was making. At the time, in 1884, he had just lost his first wife and his mother, and left his daughter in the care of his sister in order to get his life back in order. Taking up residence in more remote regions of the US, and hunting, he noticed a stark contrast between the numbers of American bison compared to hunts earlier in his life. Whereas the animals were once so numerous that herds stretched as far as the eye could see, he failed, time and time again, to find a single remaining one of the animals on repeated hunts. Decades and decades of this had taken their toll.

Roosevelt realized two things at that time, and in the following years, that would take most of humanity half a century to realize themselves: first, that the planet's resources were not unlimited (not even for the 1.5 billion people of the time), and second, that this problem was far more than academic, and would come to define the future of humanity.

In his own words, "The conservation of natural resources is the fundamental problem. Unless we solve that problem, it will avail us little to solve all others". Nevertheless, he was largely ignored. He assembled meetings of governors, and nothing came of them. He attempted meetings of world leaders, and nobody came. It was something that the rest of us wouldn't realize until far more damage had been done, and far less time existed to solve the problem.

Even since his time, our thinking hasn't evolved. Academically, we know the questions we should be asking ourselves:

-How do we ensure the greatest quality of life for existing humans?
-How do we exist in a sustainable fashion?
-How do we protect the right of other forms of life, especially those with high order intelligence, to exist?
-How do we ensure the greatest possible legacy for future generations?

Instead of asking them, however, we remain grounded in 19th century thinking. Instead of asking questions about the goals of our species, and whether or not they even fit in a paradigm of further growth, our great folly is that we just keep asking ourselves, over and over, "How do we grow bigger?"

There's my rant for the week :)

Ilithi Dragon said...

Corey, only 3 posts? You're slipping. I was expecting at least four, if not five.

The kicker: I'm not joking (and if it were about AGCC, you'd be lucky if he stopped at five).

Ilithi Dragon said...

(And by stopping at 5, I mean his nightly internet reset hit while putting up the other 103 posts and all the data was lost...)

Tony Fisk said...

According to Flannery (whom I shall try to stop invoking), we're beginning to think/act more like ant colonies than fruit flies. (a prospect which will have most libertarians hot under the collar) Whether we can do so in good time remains to be seen.

Corey said...

@Ilithi Dragon

I'll take that challenge, and raise you one Star Trek phaser post!

(good luck digging through forums finding those fabled AGW posts; my secret has been made safe by time! :P )

Anonymous said...

There's Russian paleontologist Kirill Eskov (wikipedia), who in 1999 published a retelling of Lord of the Rings from Mordor's perspective.

The book was very well received in Eastern Europe, but various attempts to publish the book in English have been thwarted by the Tolkien estate. Just recently an authorized (by the Russian author, not the Tolkien heirs) English translation has been released for free on the internet. You can find the PDF at:

I've only started reading, but it appears quite good and I suspect will appeal to the crowd around here. A brief excerpt to give a taste:

This, then, was the yeast on which Barad-Dur rose six centuries ago, that amazing city of alchemists and poets, mechanics and astronomers, philosophers and physicians, the heart of the only civilization in Middle Earth to bet on rational knowledge and bravely pitch its barely adolescent technology against ancient magic.

Acacia H. said...

Scientists have linked extreme weather events to climate change/global warming. Of course, we can expect Republicans to claim the study is too limited and that we need to wait some 30+ years for extra evidence to prove if this is in fact valid research.

Personally, I have a better solution. Put before every single Republican politician a contract. In it, the contract states that the politician agrees that they can be sued by the American populace for all of their assets if they are in fact proven wrong about global warming and as a result people lose their homes or livelihoods.

I'm willing to bet not a single politician would dare sign that piece of paper.

Rob H.

Tony Fisk said...

T'would need a counter-contract, allowing every 'alarmist' to be similarly sued for unnecessary changes in infrastructure if the seas don't rise by a certain time.

(Only trying to be fair here)

nonag: aka Shanks Pony

David Brin said...

I agree with both...

on this condition. That "alarmist" things that are TWODA are exempt.
Things We Ought to be Doing Anyway.

Paul said...

Define "proven wrong" in "if they are in fact proven wrong about global warming", such that it would stand up in court.

LarryHart (and Tony), re: Schrödinger's Cat,
"the cat would be an observer."

And the vial, and the radiation detector. And the box. And...

My understanding was that the Cat was a way for Herr Schrödinger to skeptically take the Copenhagen Interpretation to its logical but seemingly irrational conclusion. Just how large a system can you entangle before encountering an "observer"?

And quantum physicists have been trying to answer that question ever since. They call such tests "Schrödinger's Kittens": entangling larger and larger masses, working out when and how they collapse into a classical state.

One theory is that the "observer" is the quantum foam of space-time. The more foam a system is exposed to, the more classically it acts. On an atomic scale you can keep systems in a quantum state for a long time, but once you hit macro objects your entanglement time drops into micro- or nanoseconds.

So, your cat'n'box system will "resolve" if you either a) leave it sitting for long enough, or b) entangle it with a larger mass (you) causing it to "resolve" nearly instantly. But in reality, it's probably already large enough for even the radiation detector to quickly "resolve" the quantum superposition.


(aletots: Now in Raspberry!)

Paul said...

Random thought re: Moore's Wall.

Assuming there isn't a way around the problem. (No parallelising million-core chip clouds.) So we reach an upper limit of computing power density and/or cost. And so 20 years later you're buying roughly the same speed computer at roughly the same price...

Will we start to get good at building electronics? And soon after, things that use them?

Right now, there's no point really optimising your software if it takes 3 years to double its speed on a given processor, when by then new processors are 4x as fast.

Likewise, there's no point building electronics to last, Moore's Law vs labour costs always means it's cheaper to replace them than fix them.

But once speed-vs-cost stabilises, will optimisation and maintainability again become selling-points?

Likewise, once computers (and other electronics, phones, cameras, etc) stabilise for decades at a time, will old hard-won knowledge regain its traditional value? "Granddad, how do you set the TiVo?"

Re: Solution for growth sans IT growth.

Biotech? Or will the stalling of IT also stall the advances in biology?

(elycable: The property of Apple products.)

Acacia H. said...

Looks like we've an X-class solar flare incoming, on the heels of an M-class flare. This one might even cause some damage. Or at the very least some pretty northern lights. ;)

Rob H.

Acacia H. said...

And it may be that the future of a cancer-free and diabetes-free humanity is... short. It seems that a group of people descended from Europeans in an Ecuadorean village, who suffer from Laron syndrome or Laron-type dwarfism, have a much lower incidence of cancer and diabetes - and by reducing insulin-like growth factor, or IGF-1, in human cells, cancers can be prevented.

The largest cause of death among these individuals seems to be accidents and alcoholism, even among those villagers with dwarfism who are obese. And I have to be amused... one of the keys to potentially longer life is something Mother Nature introduced on her own. I suppose this is low-hanging fruit... but one that escaped notice for a while.

Rob H.

LarryHart said...

Tony Fisk:

Same thing with children (except we collapsed the wave when we couldn't think of any boys' names...)

Same with me! Our child arrived three weeks early, and in the mad scramble to the hospital, we looked at each other and said "Maybe we'd better decide on a name." Luckily, we agreed pretty readily on a girl's name, but if we had had a boy, he might STILL be "Hey you" nine years later.

LarryHart said...

Tony Fisk:

T'would need a counter-contract, allowing every 'alarmist' to be similarly sued for unnecessary changes in infrastructure if the seas don't rise by a certain time.

Dr Brin adds:

on this condition. That "alarmist" things that are TWODA are exempt.
Things We Ought to be Doing Anyway

I would further add a clause to exempt "alarmists" for things that don't happen because action WAS taken in time to prevent catastrophe.

In other words, deniers would be liable if we TOOK their advice and bad things happened. Right?

Under what conditions could alarmists be considered equivalently liable? If we took their advice and bad things happened anyway? I'd say no, because the "alarmist" side is claiming bad things WILL happen and that it might be too late to stop them. Perhaps they're liable if we DON'T take their advice and good things happen anyway? But in that case, exactly what are they liable FOR (since we wouldn't have acted on their advice)?

The denier argument against the alarmists would seem to be that we'd spend lots of money on remedies that aren't necessary. But how would that ever definitively be proven?

Mike Frank said...

Regarding friction in a vacuum. I have always understood that a vacuum was an area in space devoid of particles (matter). But this does not preclude that forces (energy) can move through the vacuum. On the other hand (correct me if I'm wrong), matter and energy are two sides of the same coin (E=mc²). Thus, if energy is moving through an area of space it may not be a vacuum even if no matter is present. Do we need to redefine what a vacuum is? Or is it possible that no such thing as a real vacuum exists, meaning the concept may either need to be redefined or eliminated?

Corey said...

On the subject of climate liability, even in principle, there's a couple of points I'd like to make (assuming we'd be talking about making both sides sign this hypothetical contract).

First, it would be kind of unfair to the right wing, in a silly sort of way, because anything that's done to curtail GHG emissions will ultimately end with clean energy, independence from Middle Eastern petroleum, and lower energy costs from more efficient technologies. As Paul Krugman has pointed out many times, even carbon taxes or cap-and-trade schemes would really just ultimately spur technological investment (at a time when our economy desperately needs it). China certainly doesn't disagree that this is a good thing to do; they're kicking our rear ends in development of green technology and infrastructure, and are very confident that they will continue to do so, and that it will give them enormous leverage over us economically down the road.

In short, "combating" global warming basically just means doing all the things, energy-wise, that we should be doing ANYWAYS, which means the "alarmists" can't actually cause net economic harm, since they aren't proposing anything that shouldn't already be being done.

Now, it's not really unfair, because the GOP knows this, and skates around the issue with scare tactics and predictions of economic gloom and doom. That's why they wouldn't sign. It's only unfair from a certain very silly perspective. :)

For climate scientists, however, it IS unfair, and here's why. Science doesn't deal in unequivocals. Proof simply doesn't exist in science; it's a math concept. All science deals with is probability.

What climate scientists are REALLY doing, isn't saying "bad change WILL happen", but rather saying "according to all evidence of which we are aware, there is a high probability that a certain range of changes will occur". Of course, an exact probability is impossible to determine, so you use broader terms like "almost certain" and "very likely". I believe the IPCC pegged that certainty at 90%-95%. The range of changes, of course, is, iirc, 2.5-4.5C of change from a doubling of CO2 over present levels (usually just called "somewhere around 3C" for simplicity). That includes all feedbacks that will enhance that warming; including water vapor increase, perhaps oceanic release of further CO2, reduction in albedo from the global loss of ice, etc; and not just the straight up emissivity change that will occur from that exact amount of CO2.
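As an aside of my own (the standard simplified forcing relation from Myhre et al. 1998, not something Corey cites), the arithmetic behind "somewhere around 3C" looks like this:

```latex
% Approximate radiative forcing from raising CO2 from C_0 to C:
\Delta F \approx 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}
% For a doubling (C/C_0 = 2):
\Delta F \approx 5.35 \ln 2 \approx 3.7\ \mathrm{W\,m^{-2}}
% With a feedback-inclusive sensitivity parameter
% \lambda \approx 0.8\ \mathrm{K/(W\,m^{-2})}:
\Delta T = \lambda\,\Delta F \approx 0.8 \times 3.7 \approx 3\ \mathrm{K}
```

which lands in the middle of the 2.5-4.5C range quoted above; the uncertainty in that range comes mainly from the feedback term, not from the logarithmic forcing itself.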

What the scientists are saying differs from what the right wing political activists are saying, because it's completely and verifiably true. According to our present body of evidence, there is so much probability that so much change will occur from so much increase in atmospheric GHG concentrations.

From that, the scientists give a simple risk assessment. They never say we unequivocally HAVE to take any given action, lest we face any CERTAIN consequences; they just lay it out in probability.

Corey said...

Take this joint letter from the National Science Academies of 11 different nations, for instance. Just look at the language. The very opening words under the first header are "There will always be uncertainty in understanding a system as complex as the world’s climate"; it continues "However there is now strong evidence that significant global warming is occurring... [and] It is likely that most of the warming in recent decades can be attributed to human activities... The scientific understanding of climate change is now sufficiently clear to justify nations taking prompt action... a lack of full scientific certainty about some aspects of climate change is not a reason for delaying an immediate response that will, at a reasonable cost, prevent dangerous anthropogenic interference with the climate system."

Notice all the admitted uncertainty, weighed into a risk analysis? This is what right wingers will NEVER, EVER give you.

Instead, they do one of three things. They claim, unequivocally and without evidence, that global warming MUST be a hoax and global conspiracy; they refuse any form of risk analysis and simply say, based on no logic of any sort, that "climate science isn't completely certain, therefore we must take no action of any kind"; or they trot out a big pile of FUD to try to cast false doubt on climate science, pretending they know more about atmospheric physics with their law and poli-sci degrees (or a degree in Classics, in the case of "Lord" Monckton) than countless thousands of atmospheric physicists.

ell said...

Ian -- If we can use water to cool our home computers, we can use our home computers to heat our water. If we're not using our computers enough, we can use our backup gas-powered water heaters. If we're using our computers too much, we can have the excess hot water run a turbine to generate electricity to power the computer.

We may have to think in terms of integrating a computer with its environment.

rewinn said...

Two terms new to me that may help understand AGW denial, along with birtherism and the hatred of the New York Yankees:

Shibboleth: "an affirmation that marks the speaker as a member of their community or tribe"

Agnotology: the study of culturally-induced ignorance or doubt.

John Quiggin writes of the application of these terms to understanding contemporary "conservatism" at, which includes some useful comments (along with ritual attempts at equivalency, e.g. "High-Speed Rail is a Liberal Shibboleth").

JuhnDonn said...

One other computing problem I've learned of (as a high-performance computing support newbie) is data. Sandia has some large cluster supercomputers, and some of the data sets they're coming up with are approaching petabyte size. Just moving all this data around, from process to process, is one problem, and then there's the whole 'what do you do with all this data' question once you've run things and the machine needs to be freed up for more uses.
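To put the data-movement problem in perspective, a back-of-the-envelope sketch (my own assumed link speeds, not Sandia's figures):

```python
# Illustrative sketch: how long does it take just to move a petabyte-scale
# dataset? The bandwidth figures below are assumptions for illustration.

def transfer_time_hours(bytes_total, bytes_per_second):
    """Hours needed to move `bytes_total` at a sustained `bytes_per_second`."""
    return bytes_total / bytes_per_second / 3600

PB = 10 ** 15
# At a sustained ~4 GB/s (a fast effective rate for a 2011-era interconnect):
print(round(transfer_time_hours(PB, 4e9), 1))   # 69.4  (nearly three days)
# Even at a very optimistic 40 GB/s aggregate, it's still most of a workday:
print(round(transfer_time_hours(PB, 4e10), 1))  # 6.9
```

The arithmetic alone shows why "just move it somewhere else" stops being an answer at petabyte scale.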

As for collapsing child waves, why does it appear to be easier to name girls than boys? We had a lot of cool girl names (Athena, Penelope, Klytemnestra (wife didn't like this one), etc.) but for boys, I could only come up with Odysseus and Aias the Smaller. Wife definitely didn't like these.

Luckily, we had one girl and wife came up with another name out of the blue.

Acacia H. said...

I came up with a name for any son I eventually have (depending on any future wife's input, naturally): Philip Lovecraft. =^-^=

Rob H.

JuhnDonn said...

The Agnotology and the Ecstasy?

Krugman: My view is that Quiggin is right as far as right-wing politicians are concerned: for the most part they know that Obama was born here, that he isn’t a socialist, that there are no death panels, and so on, but feel compelled to pretend to be crazy as a career move. But I think Chait has it right on the broader movement.

I mean, I see it all the time on economic statistics: point out that inflation remains fairly low, that the Fed isn’t really printing money, whatever, and you get accusations that the data are being falsified, that you yourself are cherry-picking by using the same measures you’ve always used, whatever. There really is epistemic closure: if the facts don’t support certain prejudices, that’s because They are hiding the truth, which we true believers know.

David Brin said...

Rewinn thanks for that!

And Gilmoure for providing Krugman's riff.

It is seriously thoughtful and filled with important insights.

David Brin said...

Yipe! Buy Motorola stock.

Corey said...

Gilmoure, thanks for that.

Short a read as it was, I definitely think Krugman's head is in the right place, as per usual.

He actually came to my college just a short time ago and gave a talk on the causes of the 2008 economic collapse; it was quite a good presentation (and probably the best $3 I've spent in awhile).

David Brin said...

I have a copy of FOUNDATION'S TRIUMPH packaged and ready to send to Krugman. But I could never get past his layers of gatekeepers to anyone who understood their boss's love of science fiction... or that his career and their jobs were all rooted in Asimov.

Lesson. Hire people who understand you.

Acacia H. said...

Why not label it a birthday gift and send it to him that way? It's rather in poor taste to throw out a birthday gift without even letting the recipient know he received it (given that it's not a bomb or something unpleasant).

Rob H.

LarryHart said...

Dr Brin said:

The “NBA rationalization” for titanic executive pay packages is a genuinely criminal scam. It is premised on the notion that good executive managerial skill is as rare as mutant-tall basketball players, and hence is immune to the very same market forces that the executives claim to worship!

I've wondered how long it would take for companies to start outsourcing the CEO position. Seems like some guy in India could easily do the job for $100k or so.

On the other hand, radio host Thom Hartmann has a theory that the CEO skill that is so rare as to be worth billions is the ability to function as a sociopath.

gmknobl said...

Dr. Brin,

I wouldn't steer you wrong on a web site. There's not much to it right now.

It is a place for people to sign up to get Twitter messages from Brian Brushwood AFTER he's died. Apparently there will be more to the site, and it might expand. It's not really artificial intelligence, but it's a unique take, if a bit morbid, on how he can keep going as a "virtual ghost," as he puts it, by re-sending Twitter messages he's accumulated from now until the day he dies. He's quite young.

The reason I suggested googling it was to read what he's said about it and what others have said about it. What struck me is that this is a unique idea, not necessarily his, but the first time anyone's put such an idea into practice that I know of, with this technology.

Here's a quote: "The service will begin to update Brian’s social media accounts like Twitter and Facebook with the past year’s updates. The accounts will update on the same date and time as the original message posted in the final year of his life. Every year from then on will see Brian’s final posts pop up on the same date as the original post.

Brian also hopes to add more functionality down the road, including selecting certain favorite messages to post at random times after his death. In such a way he’ll continue to make people chuckle or think or perhaps even weep a little. He’ll be a virtual ghost."
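The reposting rule quoted above (each post repeats on the same date and time in every subsequent year) is simple enough to sketch. This is not the actual service's code; the function name and data shapes here are hypothetical, and the Feb 29 handling is my own assumption about how such a service might deal with non-leap years.

```python
from datetime import datetime

def repost_schedule(final_year_posts, years_ahead):
    """Given (datetime, text) pairs from the final year of posts,
    repeat each one at the same month/day/time in each later year.
    Hypothetical sketch of the 'virtual ghost' reposting idea."""
    schedule = []
    for original, text in final_year_posts:
        for offset in range(1, years_ahead + 1):
            try:
                when = original.replace(year=original.year + offset)
            except ValueError:  # Feb 29 falling in a non-leap year
                when = original.replace(year=original.year + offset, day=28)
            schedule.append((when, text))
    return sorted(schedule)  # chronological order of reposts

posts = [(datetime(2011, 2, 14, 9, 30), "Happy Valentine's Day!")]
for when, text in repost_schedule(posts, 2):
    print(when.isoformat(), text)
```

The random "favorite message" feature he mentions would just be a second queue drawing from the same archive.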

I simply thought you would find it interesting.

David Brin said...

See an old Max Headroom episode in which the dead seemed to be able to interact with loved ones via computerized images.

Acacia H. said...

Couple quick things: First, a brief glimpse of light amidst the bloody crackdowns on protesters in the Middle East, with a peaceful anti-corruption protest in Oman (which I suppose is peaceful primarily because the protest is against corruption rather than for the overthrow of the government - still, other Middle Eastern nations should take note).

Second, scientists created a laser-absorbing device for use in future electronics which will likely use photons instead of electrons in transmitting data. Sorry, it won't work against Star Wars lasers. ;)

Rob H.

SteveO said...

Dr. Brin, I've mentioned it before...

For these little videos you make, consider 1) getting a lavaliere microphone and 2) using software to reduce the ambient noise.

I recorded a class that I sell online (90+ hours!). The mic cost $20 (the software I needed anyway to record the course had noise reduction built in), and the difference in sound quality between your video above and mine is HUGE in terms of the message quality it conveys to listeners.

They WILL form a subjective opinion based on that sound quality.

David Brin said...

Steve, just bought the new camera & mike.

very funny!

Alex Tolley said...

@Ian, re: Nautilus

A colleague and I have a very similar idea which we called the spacecoach. A blog on the concept is here:

We also published a paper in JBIS:
"Reference Design for a Simple, Durable and Refuelable Interplanetary Spacecraft," McConnell B. and Tolley A., JBIS v63, No. 3, pp. 108-119.

I applaud NASA for trying alternatives to the standard spacecraft model, because surely another nation will do something like this if the costs are so favorable.

David Brin said...