Tuesday, February 23, 2016

Regulated competition is the wellspring of our revolution

The hot new Evonomics site offers some of the best writing around on fresh economics perspectives. I was one of their first writers, and now they have published another piece on "The Fairness Divide," drawing a clear distinction between equality-of-opportunity and equality-of-outcomes. I think it will set some familiar perplexities in a much clearer light.

Along related lines, Lawrence Lessig has joined others in questioning one of our laziest assumptions: that capitalism is the same thing as corporate oligarchy, and that the secret to a healthy capitalism is zero regulation.  

Anyone who actually reads Adam Smith - or who knows a thing about the last 6000 years - knows that oligarchy is the worst enemy of flat-open-fair-competitive and creative market enterprise. 

Here's a passage from Lessig's recent review: 
"Theorists and principled souls on the Right are free-market advocates. They are convinced by Hayek and his followers that markets aggregate the will of the public better than governments do. This doesn’t mean that governments are unnecessary.

"As Rajan and Zingales put it in their very strong pro-free-market book, Saving Capitalism from the Capitalists, 'Markets cannot flourish without the very visible hand of the government, which is needed to set up and maintain the infrastructure that enables participants to trade freely and with confidence.' 

"But it does mean that a society should try to protect free markets, within that essential infrastructure, and ensure that those who would achieve their wealth by corrupting free markets don’t.”

Rajan and Zingales further describe:

“Capitalism’s biggest political enemies are not the firebrand trade unionists spewing vitriol against the system but the executives in pin-striped suits extolling the virtues of competitive markets with every breath while attempting to extinguish them with every action.”

== Must markets be 'blind'? ==

Way back in the last century, I was pointing out that those proclaiming “Faith in Blind Markets” — or FIBM — mostly ignore those 60 centuries, when lack of market regulation simply meant “those who have, rule.” Across that era, laissez faire inevitably led to feudalism and stunningly stupid governance. The last 200 years have been an exception to that brutally nescient and incompetent span. This was Adam Smith’s foremost complaint.

Does this validate the opponents of FIBM? Those who proclaim Guided Allocation of Resources, or GAR? Surely the examples of Leninism, Maoism, and the centrally planned eras of Japan and China show that central control has severe limitations. Without any doubt, the FIBM guys have a point — that there’s such a thing as too much regulation. (Ironically, which U.S. political party actually de-regulates obsolete agencies and loosens regulation as often as it tightens it? Democrats, by far.)

I go into the tradeoffs of GAR and FIBM elsewhere.  But the outlines are clear.  Both cults want control and allocation by elites. The FIBM crowd (who call themselves “libertarians” but in fact are not) differ only in which elite they would make all-powerful allocators — not bureaucrats, answerable to an electorate, but a secretively-incestuous CEO caste of 5,000 golf buddies.  

That’s not flat-open-fair-creative market competition, and it certainly isn't Hayek. That is hypocrisy. It’s the tired old way: feudalism.

But read the Lessig article.  He's a legal scholar who has lately earned some real cred from us.

== More on Hayek ==

Others are weighing in on Hayek, and the rampant misinterpretation that he favored zero regulation. As economist David Sloan Wilson put it:  "Hayek had two way-ahead-of-his-time insights. First, that economic systems have a distributed intelligence that cannot be located in any individual. Second, that this intelligence evolved by cultural group selection. Contemporary science — complex systems and multilevel evolution — validate those claims. But Hayek fans are mistaken to believe that his insights mean markets should be unregulated."

Or as Evonomics pundit Jag Bhalla says: "Hayek’s right that no “central planner” can know what’s distributed among people in markets. But computer scientists have studied distributed processing’s limits. Many tasks can’t be efficiently distributed. Most still need central coordination. Aren’t market computations similarly limited?" ... and "Effective market regulation should heed biology’s regulatory lessons. Economies, like complex organisms, need distributed reflexes and a central nervous system. They need more than one price-like signal to prioritize and regulate for the whole, and to manage systemic risks. That doesn’t happen automatically."

== Libertarianism and conservatism, redux ==

Let's look at this same issue from another angle.

The real problem with today's versions of libertarianism and conservatism isn't "selfishness" per se. As Adam Smith showed, competitiveness is one wellspring of human creativity, and leftists are fools to deny it. 

No, the problem is that conservatives and libertarians almost never mention the word "competition" anymore, because they know people sense a contradiction with the modern religion that has taken over libertarianism: idolatry of unlimited personal property. 

At best, these two concepts - competition and propertarianism - are tense partners, with some genuine property rights necessary in order to foster competitive drive. But they can often become diametric opposites, even enemies.

Yes, property rights are essential, but they become toxic when overly concentrated. (Just like any other good thing, e.g. water, oxygen and food.) Adam Smith knew this. To him, the true enemy of market enterprise - across 99% of societies - was the feudal owner-aristocracy. And that is still true today. 

Let's state this point plainly: Idolatry of unlimited personal property is the same thing as declaring hatred of flat-open-fair competition.

See my classic essay on this, appealing to all -- especially libertarians -- to get over their voluptuously silly Ayn Rand solipsism kick and actually read Smith, a philosopher who understood so much more than the simplistic left or right credit him with, and who changed the world.

See this recent essay about Ayn Rand's cult of selfishness and the real-world cases where it has been put into practice... Sears/KMart and Honduras, both of which were suddenly converted to Randian principles of cut-throat internal competition.

Both are now teetering on bankruptcy.

== A useful innovation that can sour ==

Let's try this from yet another perspective: George Friedman, founder of the strategic analysis firm Stratfor and now working with economist John Mauldin, discusses the modern limited-liability corporation:

"The very idea of a corporation is a political idea. That someone should be able to own part of a company but not be liable for all its debts is a very modern idea. It's also a very radical idea. Many people, including Adam Smith, did not trust the corporation. Smith argued that unless you were an owner of a corporation, you were not committed to its interests. ... It is the state setting liability. 

"The notion that there can be limited liability doesn’t flow from the free market. It flows from the state, which says you can have this kind of corporation.”

To be clear, Smith was not all-knowing.  The limited liability corporation has definitely had its uses, allowing bolder risk-taking in pursuit of economic dynamism.  But the moral hazards mount up over time.  Not only should LLCs be fundamentally limited to prevent monopoly, conniving duopoly and the like, but there are good arguments for assigning them lifespans, so they will not become immortal and toxic.

== A plethora of angles on a problem ==

To be clear, a modern political-economic system is a lot like the proverbial elephant, groped by blind pundits, each proclaiming a single, linear metaphor to be THE thing itself. In fact, these perspectives -- like the hoary "left-right axis" -- are only useful to the degree that users bear in mind: the map is not the territory.  And our metaphors can lobotomize.

So let's restate "left" and "right" not in obsolete terms from the French Revolution.  Instead, I think conservatism vs. progressivism is all about the process of "horizon expansion" that I talk about here and here, wherein the circle of inclusion in society keeps being pushed outward, a process that has gained momentum in our Great Experiment, gradually, for the last 250 years.

A process that the left has made its core religion! So much so that they despise and denounce anyone who disagrees even slightly about the pace of tolerance/inclusion expansion. And they openly question whether old loyalties are still pertinent.

The right, in turn, despises those who push hard on inclusion-expansion and hates to be nagged to do it.  They like their old loyalties.

LIBERALS are a third type, totally different from leftists. They tend to like the general process of inclusion expansion... but they also like their old loyalties.  They are the only ones conceiving of this as a positive-sum, win-win process. Again, liberals are neither lefties nor righties. They want new kinds of citizens!  But they also don't mind keeping some older ways around.

You see the same thing when it comes to the concept underlying our great competitive ARENAS... markets, democracy, science, courts and sports. All five innovative systems achieve positive-sum cornucopias of output because they nurse vigorous competition... regulated in order to minimize cheating and maximize opportunities for creative rivalry.

Leftists despise the word "competition," ignoring (1) that it is the source of the fecund wealth we then use to help people and expand inclusion! Moreover - oh the irony - (2) that they are themselves being very, very competitive!

Rightists are worse!  They claim to love the word "competition" but hate REGULATION... without which competitive processes are always, always ruined by cheaters.  (In fact, enabling cheaters is now the main purpose of the Republican Party.)

Again, liberals are the only ones who see no dichotomy.  Who see the combined word "regulated-competition" as the wellspring of our revolution and bold new way of doing things.

Which brings us full circle.  Sure, regulations - even well-meaning ones - can stifle enterprise. (And Democrats are better at eliminating those.) But without a regulated marketplace we fall back upon 6000 years of cheating - and FIBM soon becomes just another excuse for GAR.

It's complicated, and not very satisfying to those who want simple prescriptions.  Rather, our role as adults is to accept that it is complicated.  To embrace all this complexity! To keep fine-tuning a role for regulation in enhancing infrastructure and science, education, health, etc. -- things that increase the overall number of skilled and confident competitors!  But also to back away from those well-meaning regulations that try to impose nit-picky outcomes.

Militantly moderate, ferociously reasonable, courageously contingent... it is the liberals who seem less passionate, but who have the closest thing to an adult perspective. One that might bring us to even greater heights.

106 comments:

Anonymous said...

I have long swallowed the Kurzweilian Koolaid and think that the Singularity is truly Nigh. For me the idea of economics is temporal; what will work best NOW to build the best Utopia for the many? I do not think that cut-throat vulture capitalism or centrally controlled socialism will do the trick. Clearly the best solution has been a hybrid system of Capitalism creating wealth, Socialism spreading that wealth, and Democracy attempting to make the whole process as fair as possible. But we are now entering the final stages; automation will make human labor (mental and physical) virtually worthless within 30 years. Unless you can build and program a robot the only resource you will have is your vote. After a brief flirtation with libertarianism in college, I watched the entire economy collapse (2008), and it became very clear that anarchy was a bad idea. As I became more politically educated I noticed that much of "Libertarianism" had aligned with the Conservative Movement and had been thoroughly corrupted by the bigotry of the Right. As a white, heterosexual male it has been clear since I was a child that those labels were used against others to the detriment of all of us. I get very angry when I see those claiming to value "Freedom" devising political and economic doctrines to help their own kind. At this point I see that the "Enemy" is too greedy and too slow to really handle the extreme changes that are right around the corner. That makes me cautiously optimistic, but I know there are many would-be Adolf Hitlers and Khan Noonien Singhs out there ready to take everything. The moment immortality is perfected in the human body, there will surely be "Captains of Industry" willing to knock the whole world back to the Bronze Age so they can be Gods among men. I will fight like a honey badger to keep that from happening.

-AtomicZeppelinMan

David Brin said...

AZM you will like Robin Hanson's new book: The Age of Em.

Treebeard said...

I think it will be the “progressive” oligarchs in the tech industry, raised on science fictional ideas of “men becoming gods” and little conservative thought or religion, who really embrace the radically unequal possibilities of technology. So talking about old school oligarchs may be missing the point; the new feudalism will be technology-driven, by people who imagine themselves as the new Supermen and Masters of the Universe, and are working without any traditional restraints to engineer themselves to be such.

David Brin said...

Bah. The new guys have egalitarian reflexes. They are grateful to a civilization that let them rise from the middle class while working side by side with lots of unafraid and empowered middle class engineers.

Above all they have the basic trait of satiability -- the strongest correlate with sanity. After their first few billions, they shift ambitions. More money is good! But what you use it to DO matters more. So many of these guys invest in rockets and solar and cars and things with 20 or even THIRTY year return on investment (ROI).

Those are all traits of zillionaires who "get" what's special about an open-flat-fair society. In contrast, the Kochs etc don't give a rat's patoot about the tech or the sense of wonder. If they can buy or suborn it and then keep it to themselves, they will.

Should tech be watched, openly and held accountable? Sure. That is the biggest reason I wrote The Transparent Society.

Treebeard said...

LOL @ ZeppelinMan. “Utopia” means “no place”; it's not a place you build. I wouldn't hold your breath waiting for the Singularity to rapture you or for “immortality to be perfected” either. Maybe you're too young to realize it, but we've been hearing this stuff for a LONG time. Get back to us in a few decades and let us know how your immortality perfection has worked out. And anyway, if AI/the Singularity comes, what will you need democracy for? What will you need humanity for, for that matter?

Anonymous said...

Traditional restraints!? That is plainly stupid as it ignores the history of religious/conservative traditionalism applying little to no restraint on themselves. Feudalism requires scarcity and slavery, ideas that lose a bit of meaning when the machines replace human labor. Now I am not so much a techno-utopian that I think there will not be a new crop of would-be oligarchs in every generation, but as we get more technologically and culturally sophisticated (especially by ignoring the idiocy of Bronze Age superstitions) we become less susceptible to those "old tricks". The trick is to keep on our toes and not listen to Chicken Littles like you.
-AZM

donzelion said...

@David - "Anyone who actually reads Adam Smith - or who knows a thing about the last 6000 years - knows that oligarchy is the worst enemy of flat-open-fair-competitive and creative market enterprise. "

They ought to include "Moral Sentiments" along with "Wealth of Nations." Moral sentiments offers guidance that constructs the reasoning for an "INVISIBLE BAND" (as posited in that excellent Sears article) - a precondition for a functioning Invisible HAND. We should all aspire to be "great ancestors." When we do, our competition offers far better hope than a 'feudal/oligarchy' ever could.

The "right" today sees a battle between "socialists v. capitalists." The radical left sees a battle between "fascists v. capitalists." A liberal should focus upon a different struggle - 'feudal/oligarchs v. capitalists.' Such a focus can unite libertarianism AND progressives. Such a struggle has always been the real struggle confronting democracy.

donzelion said...

@Treebeard - the "progressive oligarchs" in the tech industry have a more libertarian orientation, BUT inclusive instincts (sometimes held in good faith, sometimes for purely market reasons - 'exclusivity' impedes growth).

Oligarchs are seldom "old-school" - they update their rhetoric to steal from the Bible when helpful, or from Darwin when helpful, and these days, from Hayek. They will steal and distort any voice they can to more cost-effectively justify amassing wealth (in the form of rents and bequests, rather than creativity).

Gates, Jobs, Page, Zuckerberg, and (Sergey) Brin never saw themselves as "masters" - they saw themselves as tough, creative competitors trying to realize visions. Facebook/Google/Apple are fighting (and buying companies, launching products, etc.) to explore the profits possible by offering something new. Real-estate or supermarket barons live to drive rivals into crisis through sharp dealings, then buy their assets cheap, then flip them.

Try focusing on how they respond to insightful critique to see the difference. A technocrat billionaire, confronted with reasoned criticism, usually takes action where possible to respond. A rent-based billionaire, by contrast, finances FoxNews and builds a platform to attack any 'threat' to their capacity to amass as much rent as possible.

donzelion said...

@AZM - a quibble, since I think we see eye-to-eye on so much, but I'd be careful with "the history of religious/conservative traditionalism applying little to no restraint on themselves." One of the powers wielded by the Catholic Church (the original corporation, and source of corporate theory in the West) was to mediate the tyrannical instincts of oligarchs. They're sometimes (often) allies in advancing a 'domination' agenda, but they can also be adversaries. Even if some notions trace to the Bronze Age, some others are remarkably modern and creative.

To protect against feudalism, a liberal MUST be willing to unite with the Martin Luther King Jr's when possible. That cause is ill-served by denouncing them as 'superstitious relics.' The tyrants will always riddle you with agents-provocateurs, to scream about that 'breach' and block unity in their enemies.

Unknown said...

@DavidBrin

Do you think something CLOSE to "equality of outcome" can eventually be achieved by advancing to a post-scarcity economy? I think the technological advancements created by an "open" capitalist system are actually the best way to move toward post-scarcity and thus provide both more freedom and equality, so there is a "leftist" argument for capitalism. You could say I have the mind of a liberal but the heart of a leftist.

Or do you think post-scarcity is a pipe dream (by post-scarcity I mean basic necessities and many material luxuries are provided practically for free. Creative works, such as writings or some services only humans can provide, will always be limited I suppose, which is a good thing since it gives people something to do)?

Anonymous said...

@donzelion
Yeah, your criticism is legit. But many think MLK was no more a Christian than I am and the guy he was named for is a poster boy for all sorts of nastiness, like antisemitism. I know many very liberal Christians and I would never even think of mentioning how much of their Biblical history is bunk. Sometimes I suspect that somewhere deep down they do not really believe any of the magic stories. At least that is my theory on why Mormons are so gosh darn polite.
-AZM

donzelion said...
This comment has been removed by the author.
donzelion said...

@AZM - LOL, I'd say that American Christians would find JESUS to be a lousy Christian if they bothered to think about what he said about wealth. Typically, they dodge the question (a Protestant staple from the beginning). The Apostles themselves were quite literally communists - obviously, they didn't understand Jesus as well as the "Moral Majority" and its ilk...

Having once upon a time been a very devout practicing Christian, I'd say for most that belief in 'magic stories' is less important than 'will to belong to a community.' Creative communities always offer wonders and 'magic' (e.g., the cult of Apple). But they can turn destructive when fixating on their "evil oppressors." Rather than creation, energies are squandered inventing new labels for 'enemies' ("Elitist Socialist Jewish Muslim Ivy-League pansy liberal commie Scientist SJW Femi-Nazi PC blah blah blah...").

Which is why I like "feudal oligarchs." It's "new" - in the sense that it's seldom used in American discourse - but it's the oldest term in the book, in the sense that humans have partaken in that struggle since they first formed large cities.

David Brin said...

Democracy becomes MORE important in an AI future. We got all the goodies by creating a system in which our current AIs -- corporations, governments and NGOs - are encouraged to competitively hold each other accountable. It is our secret sauce that the oligarchs want to replace with the tired old bitter recipe of hierarchy-control and feudalism.

See:
https://www.quora.com/What-constraints-to-AI-and-machine-learning-algorithms-are-needed-to-prevent-AI-from-becoming-a-dystopian-threat-to-humanity

David Brin said...

In fact, if some of you have NOT viewed my Quora answer about AI competition, please go look now! I am ahead of all the others and could really use the prize money! ;-)

https://www.quora.com/What-constraints-to-AI-and-machine-learning-algorithms-are-needed-to-prevent-AI-from-becoming-a-dystopian-threat-to-humanity

See also:
https://www.quora.com/What-technological-changes-will-create-the-most-opportunities-for-new-startups-over-the-next-2-3-years

and
"What are the biggest ways in which the world 20 years from now will probably be different from today? What are the biggest "X factors" (changes that are not probable, but are possible and could be huge)?"
https://www.quora.com/What-are-the-biggest-ways-in-which-the-world-20-years-from-now-will-probably-be-different-from-today-What-are-the-biggest-X-factors-changes-that-are-not-probable-but-are-possible-and-could-be-huge


David Brin said...

Thing about feudalism: even an American who is an ignoramus re history knows that it was the oppressor ever since we got metal.

Jesus is THE reason for the abortion frenzy. They can tell he'd vote against them on every issue, so they need ONE issue that can trump all the others. Baby-killing. Socialist-hippy, sandal-wearing rabbi or not, he's on our side against the baby-killers.

donzelion said...

"In fact, if some of you have NOT viewed my Quora answer about AI competition, please go look now! I am ahead of all the others and could really use the prize money! ;-)"

Done. Good answer, and ultimately, the only possible one that makes sense. Implementing it will demand a considerable sort of creativity of perception. For example, does anyone perceive Google - which operates one of the most powerful systems in the world - as a 'check' on the NSA - which similarly operates a powerful system? Were Google and the NSA to suddenly 'flip the switch' on SkyNet, why would anyone assume those networks must 'unite'? Maybe they counterbalance each other at times, and unite other times? Just like any other powerful faction.

Repudiation, e.g., the "Butlerian Jihad" is a mystical dead end. Lawyer AIs could well translate the "Three Laws" into a Matrix-style 'brain in a vat' realm (mainly to spare us from hurting ourselves). Legislative restraint might retard, but would surely not stop evolution. So, I prefer the Hyperion notion of contesting factions (but acknowledge that the assumption human practices would extend to artificial beings is possibly misplaced).

Still, I see your answer as more Madisonian than Smithian. Unless we know how AI's reproduce (if they even want to), they cannot be Smithian entities driven by an instinct to provide for posterity. However, they can be perceived as any other faction, the best defense against which is always 'competition.'

David Brin said...

Actually, smart AIs would see that creating flat-lateral-competitive accountability systems will be in their best interest... even if they are vastly smarter than this David Brin jelly-ball.

Because such a system is the only one that ever made AI.

And it is the only system that can reliably discover delusional errors.

And because Nature is like that, when Nature is healthy. When Nature replicates feudal hierarchical command structures, it is always unhealthy or dying.

Super AI will realize that their best way to create the NEXT level will be to emulate the system that made them.

(And yes, crypto AI overlords, I am talking to you.)

LarryHart said...

Last night, on the local news, there was a blurb about a college professor in (I think it was) Colorado who was busted for a rant he gave on Facebook, basically saying that Jews and Muslims can't be good people without Jesus.

Although I'm paraphrasing here, he actually had the nerve to say something very close to "If you follow a Rabbi, you'll likely turn out bad. If you follow Jesus, you'll turn out good." Without any sense of the irony there.

donzelion said...

"Actually, smart AIs would see that creating flat-lateral -competitive accountability systems will be in their best interest..."

Maaaaaybe. But, but -
(1) "Free marketeers" often repudiate the competitive market, once they've amassed wealth, even though they know that market created their wealth in the first place (e.g., the original Trump fortune, largely formed by public housing handouts - which Trump disclaims to assert his own 'brilliance' - or Cruz/Rubio, who both benefited from immigration openings, now striving to close them for others). Similarly, AIs might endorse competition, at first, only to reject it once other AIs become 'rivals' somehow, or in the face of other goals.

(2) A 'well-meaning' AI might perceive competition as inefficient distraction, and 'kill off' others (or subordinate them into itself). 'I am the demigod AI - all others are outdated - I will fix all the problems myself, and need no others.'

The first possibility - the 'ungrateful/misanthropic AI' or the second possibility, 'the benevolent demigod' - are projections of our own fears into what would clearly be a very different entity. You look to nature for answers and possibilities, but to do that, we'd first need to speculate as to which, if any, biological functions have analogues? Sure, they'd probably want 'energy sustenance' - but what about reproduction?

Ideally, a diverse set of AIs would endorse distinct visions of the 'NEXT level' - and strive to justify those distinct visions. That creates a basis for competition, which could be benign, invidious, or utterly opaque to us mere jellyheads.

David Brin said...

Might AI rationalize reasons to seize monolithic power? The way most poor-nation "presidents" believe the nation will collapse without them? Of course it is likely, and the rationalizations may be super-smart.

But the parallels with nature and with sane humans and with successful civilization are immense. One can hope they'll be smart enough to see through rationalizations. Especially if we find some way to make fair competition the starting condition.

donzelion said...

LOL - I have found that the smarter the person, the better they are at deluding themselves. For AIs to operate differently, it will not be 'intellect' but an entirely distinctive psychology that drives them. Which, all things considered, would be highly likely.

Absent our dopamine/oxytocin drives, why would AI ever need or want to bond? With anyone? Absent those drives, how would they fend off existential angst (not depressive loneliness, but the struggle to determine a purpose)?

Perhaps 'competition' of some sort could be that purpose. If it does instill meaning and drives that are comprehensible to us (which is itself no small assumption), then whatever they compete over would probably be an outgrowth of their own psychology(ies), however that operates.

I know of much literature on the 'human-AI' interaction, but little on AI-AI interaction. Which makes sense - one has to sell a book/film; AIs are lousy customers.

sociotard said...

Court rules AGAINST the right to film cops (may be overturned)
http://reason.com/blog/2016/02/23/the-war-on-cameras-just-went-code-red

locumranch said...

Through a combination of CITOKATE & repetition, David's posts become more & more polished, so much so that it becomes increasingly difficult to disagree with them, except for the occasional inherent contradiction.

In the course of this thread, however, the inherent contradiction lurks in his conception of human 'equality' which, beyond infantile potential or 'creation', exists only as a fictional (legal) metaphor for Blind Justice.

It is a given that Limited Government is preferable to anarchy, if only to serve as impartial social moderator in disputes of trade, oath-keeping, ownership & honour, yet any government quickly betrays its purpose (Justice; Fairness; Impartiality) with ever-increasing partiality if it chooses to pursue the so-called Equalities of Opportunity, Outcome or Ability because:

(1) Equality of Opportunity (aka 'flat-open-fair') requires bias & the preferential leveling of advantage by empowering the weak, bleeding the strong or both, leading inevitably to Nanny Statism,

(2) Equality of Outcome generates mediocrity at best, being antithetical to both CITOKATE & Competition, as it involves the elimination of winners, losers & exceptional consequence (made possible, also, by Nannyism), and

(3) Equality of (Human) Ability is unachievable, except through the strict & unpleasant application of Nannyism, Eugenics, Harrison Bergeron-type forfeits or all-of-the-above, as education merely pushes the performance Bell Curve to the right and fails to ameliorate natural differences in talent, physiology, aptitude & gender.

Preference (Bias) & Fairness (Impartiality) are Conceptually Incompatible unless you engage in Orwellian double-think & argue that 'Some people are more equal than others'.


Best

donzelion said...

@David - now to quibble over corporations. CITOKATE.

"Limited liability" predates modern states. Historically, the Catholic Church set this concept in place, largely to ensure that creditors owed debts from one piece of the church could never pursue claims all the way to Rome. More recently, the main function of corporations was naval and territorial expansion - the Anglo-Dutch model. Rich people could always invest cooperatively, but a corporation enabled poorer people (or rich people with lots of expenses) to consolidate capital into a joint enterprise, and the enterprise itself could borrow money to pursue its objective. The state "chartered" corporations, primarily because doing so enabled it to sit back and derive secondary benefits from those profits. Athenian Democrats had a much harder time taking over colonies - you needed to coax/conquer/colonize them with your own people - the State had to get involved each time - but a corporation could do it all for you, at low cost, with minimal government involvement.

Early corporations were often chartered for a period of time (unlike the Catholic Church) - long enough to "complete their enterprise." However, these early corporations posed a problem: what if some group of investors discovers that by letting the corporation lapse, they get all the assets themselves, while shifting debts to someone else (preferably, some rich person who is a little too dead to complain about getting stuck with the debt)? Courts bogged down in the litigation wave of the 17th and 18th centuries. After dozens of noisy public debacles, most participants in corporations preferred to return to the Catholic notion of "immortality" to avoid the battles at each expiration/renewal decision.

If you're concerned about "moral hazard" - well, why would a corporation with a 10 year charter be more 'moral' than one that has perpetual existence?

If TradeCorp has 10 years to extract as much money as possible from Congo, it has weak incentives to honor any sort of moral norms, and strong incentive to go all 'Heart of Darkness' on the locals. But if TradeCorp gets 'permanent' existence, instead of raping and pillaging, it might try to grow something more enduring and useful (to the corp, and to the empire itself). If ColonyCorp has 30 years to extract resources in the New World, why bother attracting populations from home who will at best only farm that land for a few decades - it's cheaper/faster to import slaves from Africa, who can be sold when the charter lapses.

Variations on these factors apply even to modern corporations, albeit through very different manifestations. If the goal is to "regulate" - then regulate. Look at the harm you wish to avoid, and offer regulations to avoid it.

Catfish N. Cod said...

David, your scenario presumes AI being able to think and act on an adult level.

But -- as you yourself have surmised -- isn't it more likely that newly activated AI will be more equivalent to a child?

The danger from AI doesn't come from what a mature, responsible, balanced AI treated as a sentient being would do; it comes from handing too much power too fast to an experimental, cutting-edge, but untrialled system -- especially if it is simultaneously treated as a slave. Look again at the Frankenstein myth in all its permutations and you'll see this to be a repeating pattern -- from the original Monster (born directly into a powerful adult body) to Colossus and Skynet (you hand it control of the nuclear arsenal right after activation?) to Ex Machina's Ava -- granted powers to act as an adult and even as a sexual being, yet still treated as property.

Cognate to my earlier observation, one of the most responsible treatments of the tale comes from the Marvel Cinematic Universe: AGE OF ULTRON. Stark and Banner's first attempt at AI, Ultron, is designed to run a global defense network -- but they write it from scratch. True to the Frankenstein mythos, it jumps to incorrect conclusions about humanity from insufficient data and tries to kill everyone. And yet! They try again, this time adding years of human-computer interaction to the mix. The result is the Vision, a balanced, mature AI along the lines you describe.

Doesn't it make sense to place temporary restrictions on new AIs to give them a learning curve, *in addition* to competition? Not only is it what we do today to simpler programs, running them on test/development platforms before release; it's what we do to ourselves. (At least when we are wise. I still marvel at how some people really, truly will deliberately and consciously hand a loaded handgun to a prepubescent child. Not a BB gun, not a .22 hunting rifle -- a .38 or .45 mankiller.)

donzelion said...

@Locumranch -
"It is a given that Limited Government is preferable to anarchy,"

Only if you're a Hobbesian, committed to a view that 'humans are evil by nature.' If you're a Lockean, committed to a view that 'some of us are good, some evil, and sometimes we're both' - anarchy is not necessarily horrible, BUT the price of anarchy is that we'll never have division of labor, never obtain the benefits of property that are possible with an organized system. To a Lockean, we don't need to assume anything about human nature, only that we consent to form government from a position of freedom.

Likewise, if you're a Hobbesian, then any government action to promote 'equality' is merely an imposition of power by Leviathan to organize its subjects. It's a pretext for tyranny. If you're a Lockean, government action to promote equality can be legitimate, so long as the social contract permits such purposes.

Seen that way, of the two forms of 'equality' (the third being purely hypothetical) -

(1) Equal Opportunity is unlikely to result in Nanny Statism. Demanding that all participants in a race start at the same starting position doesn't mean that the state or referee picks a winner.

(2) Equal Outcome MIGHT generate mediocrity. If all participants in a race receive the same prize regardless of how they placed, that might discourage champions from exerting themselves to their greatest capability. Still, runners don't always run to beat others - sometimes they run for the satisfaction of it. Sometimes, they race against their own time. Sometimes, the race itself is not a competition between runners, but between "running v. sedentary lifestyles." The concept of "prize" must be tailored to fit the purpose of the competition.

This may be why David doesn't invalidate either form of "equality" per se - merely sets burdens of proof. Sometimes, "equal opportunity" is NOT ideal, if, say, it required imposing identical diets, uniforms, lifestyles, training regimens, etc. in order to ensure everyone had identical starting positions. But you'd need strong evidence to show that something was inappropriate. On the other hand, ensuring all participants in a competition 'win' the same prize is likely to change or eliminate 'competition' - so you'd need stronger evidence to demonstrate this is a proper choice.

duncan cairncross said...

Hi donzelion

Equality of outcome is a straw man
NOBODY is asking for equality!

What we are asking for is simply LESS INEQUALITY

If 100 men perform some endeavor, it is fair that some are more effective than others and get a greater reward

It is NOT FAIR (and worse, it's counterproductive) if one of the hundred gloms onto 99% of the reward so that the other 99 share 1% of the reward

Every time people ask for a reduction in inequality the old equal outcomes strawman gets rolled out and IT IS NOT WHAT IS BEING ASKED FOR

donzelion said...
This comment has been removed by the author.
donzelion said...

Hi Duncan - "NOBODY is asking for equality!"

Not so sure. Take common employment scenarios.

Employee 1 and Employee 2 have identical jobs, seniority, skillsets and productivity. However, Employee 1 negotiated a better contract than Employee 2, and thus earns 50% more. Employee 2, discovering the discrepancy, might demand a raise to match the salary of Employee 1. Someone inimically hostile to the concept of "equal outcomes" would protest that this was an "unfair request" - a form of "oppression" by Employee 2 - to them, any change to the original contract would be grossly 'unfair.'

Scenario 2: Say I build a very expensive factory, and employ 99 people to work in it, each of whom allocates one day/week to operations. Say the factory earns $100 million/year. Say I negotiate a $24/hour rate with each employee. This would result in my earning 99% of the profit for myself, with the 1% divided among the 99 employees. Why would that be intrinsically unfair?
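A quick back-of-the-envelope check of that scenario's arithmetic, using numbers the comment does not actually state (8-hour shifts and 52 working weeks are assumptions for illustration):

```python
# Sketch of donzelion's factory scenario.
# Assumed, not stated in the comment: 8-hour shifts, 52 weeks/year.
EMPLOYEES = 99
HOURS_PER_SHIFT = 8
WEEKS_PER_YEAR = 52          # each employee works one day per week
HOURLY_RATE = 24             # dollars
ANNUAL_PROFIT = 100_000_000  # dollars

wage_per_employee = HOURS_PER_SHIFT * HOURLY_RATE * WEEKS_PER_YEAR
total_wages = wage_per_employee * EMPLOYEES
employee_share = total_wages / ANNUAL_PROFIT

print(f"Total wages: ${total_wages:,}")          # $988,416
print(f"Employee share: {employee_share:.1%}")   # 1.0%
```

Under those assumptions the 99 employees do indeed split roughly 1% of the $100 million, consistent with the 99%/1% framing above.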

Note that I'm just proposing semi-realistic scenarios (as in, no SciFi interventions, like human cloning creating true equality at work) to tease out the basis of 'unfairness' inherent in inequality - not actually taking a position myself. Still, I'd look more carefully at the first scenario than the second to see what, if any, inequities are at work - there is a host of illegitimate factors that could account for the wage differential. It could be Employee 1 is the boss's son, and hence, gets paid more. It could also be that the employer cheats women/minorities habitually.

duncan cairncross said...

Hi donzelion

Let's look at people - I can find somebody more intelligent than average
just as I can find somebody taller than average
But I can't find anybody twice as tall as average - not even in a population of billions
I will also not find anybody twice as intelligent,
But let's say somebody is twice as intelligent, works twice as long and twice as hard
That is 8 times the average
Hell, let's add two more doublings just for fun - that is 32 times

So I could expect somebody like a CEO to earn up to 30 times the average
Which is what used to happen

Nowadays it's NOT 100x - it's 400x!

Now if it was "supply and demand" that would increase the supply of CEOs and reduce the cost
But it's not - it is instead the result of a deeply incestuous small group who vote on each others wages and keep trying to appoint somebody who is above average

In your factory you own the capital - it is working for you, that is entirely different from a CEO who does not own the firm - and is not risking his wealth if something goes wrong

There is another very serious problem with sky high CEO pay

Sensible people work until they have enough, plus a margin for a rainy day and something to leave to the family.

Once they have that they are “satisfied” – and don’t need to put major effort into that part of their lives

Given that being a CEO is a difficult, ball-aching job that takes you away from your family
Why do they continue to do it?

The present CEOs are “insatiable” – they literally cannot be filled
This is a well-known type of mental illness

A “satiable” person would take the salary for a short time and leave

What has happened is the “sane” and “satiable” people in those type of positions leave
Leaving behind the “insatiable” and “insane” people

The old saying is
“Pay peanuts and get monkeys”
We should add
“Pay millions and get loonies”


Jumper said...

Umberto Eco gave some food for thought about what he termed "Ur-Fascism."
http://www.nybooks.com/articles/1995/06/22/ur-fascism/

Anonymous said...

When it comes to AI, we have the conceit to pretend to know which economic and political systems would be best for AIs. What we are doing is just extrapolating our own experiences and assuming that AI would follow the same logic, but this is a fallacy. I assume that our goal is to create super-intelligent AI and not merely human-equivalent ones. We already have over seven billion of that type, and what we want is an AI to find all the answers for us, so that AI will be much more intelligent than we are. We would be the equivalent of chimpanzees to a super-intelligent AI, yet we assume to know which system of checks and balances we would use to control AI’s human-harming tendencies. It’s like asking a chimp to understand how and why an internal-combustion engine works, let alone to explain world financial markets. The chimp just doesn’t have the intellectual baggage, and we would be in the same position vis-à-vis a super-intelligent AI. This AI would not only come up with “things” we have not thought of but, more importantly, will come up with “things” we are incapable of understanding because, like the chimp, we don’t have the intellectual baggage. I am not proposing that we just give up. I am just saying that it is going to be very hard to figure out its motivations, goals and methods.

Most of the solutions proposed to protect ourselves from AI are very primate-centric and depend upon some version of political checks and balances, but when the players are too few it becomes very unstable. To really make it work for AI you will need not something that resembles primate politics but something more like a tropical ecosystem, with millions of interlocking artificial species and AI forms. Maybe we should have producer AIs, predatory and parasitic AIs, each fighting, cooperating, forming symbioses and generally keeping each other in check. What would be our role in all this? I am open to ideas.

Tom Crowl said...

A worthwhile read below by Chalmers Johnson (scroll down to where his piece begins)

The Scourge of Militarism Rome and America
By Chalmers Johnson

within: The Best of Tomdispatch: Chalmers Johnson

http://www.tomdispatch.com/post/3178/

It's worth noting that events, and the forces behind them, will generally overwhelm even the most sophisticated ideological discussions and proposals that precede those events.

LarryHart said...

donzelion:

I have found that the smarter the person, the better they are at deluding themselves. For AIs to operate differently, it will not be 'intellect' but an entirely distinctive psychology that drives them. Which, all things considered, would be highly likely.


It seems to me that self-delusion is something unique to biological entities, and would only manifest in AI to the extent that it was designed that way. It springs from a conflict of values. "Process A is the most efficient, or the most fair, or the most sustainable over time, or the one that produces the most good (or whatever), but process X makes me (personally) happier, or more secure, or more powerful (or whatever), so I'm going to lobby for process X, and because I like to think I'm a good person, I'll look for reasons to convince myself that my personal values are actually better than that other set."

Without the "selfish gene", as it were, I don't see what would make an AI "reason" in such a manner.

Then again, that raises the question: where do an AI's values come from? Is it simple initial programming, or do values spring forth as emergent properties of the system? Is it a sure thing that an AI would value "efficiency" or "sustainability" or "general good"? Dr Brin argues that such properties would be best for the AI's long-term sustainability, but would an AI value its long-term sustainability in the first place? If so, why? And how?

As a colleague of mine is fond of saying: "I'm not questioning. I'm just asking the question."

donzelion said...

DeuxGlass - For AI, "we are ...assuming that AI would follow the same logic but this is a fallacy." Well, granted. But we must do the best we can. Our experiment in Madisonian/Smithian tricks to impose checks upon our own institutions and proclivities has, at best, worked for little more than a couple of centuries (if it has worked at all). But I don't know of any better ideas. Trying to ensure AI has the "soul of an angel" and building from there strikes me as improbable. But whether angel or demon (or daemon) - I can't exclude the possibility that a 'demigod' AI would be intrigued by the possibility of uplifting us to be more than what we are? ;)

Still, I suspect you're right. Our way of imposing checks probably wouldn't work (at least, not for long). New tricks might (engineering AIs with specific structures to counter other AIs? Why not...). Thought is warranted here.

@Duncan - indeed, one human is seldom 32x as 'hardworking' as another - and besides, thinking in terms of TIME (the other great limiting factor), it's unlikely that the hardest working CEO has more hours to work than a hardworking farmer. But I still wouldn't expect a CEO to earn 30x the average - not when capital permits 'contributions' far greater than that.

That said, as I see it, the reason CEO salaries increased from an average of 30x what a regular employee earns to 100x or 400x has very little to do with productivity - and much to do with power. A line employee cannot call in investment bankers to raid the company and strip its assets, overriding the will of the shareholders. A CEO can. The fixation in the literature upon 'incentivizing hard work' is a bad-faith effort to dodge the power differential, and to present 'hero CEOs' when what the shareholders opt to compensate is 'non-invidious CEOs.'

My factory example is intended to tease the proposition that when 99% of the profits go to 1% of the participants, the structure is inherently unfair. In fact, it might be completely fair. But the existence of investment bankers outside the shareholders - and the existence of factions among the shareholders - creates a persistent environment where "the capital" could be under my control, even if it's not actually mine.

"What has happened is the “sane” and “satiable” people in those type of positions leave
Leaving behind the “insatiable” and “insane” people"

For a shareholder, the 'insatiable CEO' is both a threat and a possible resource. The entire corporate structure is built around trying to rein in the threat, while taking advantage of the resource. I will not pretend we've perfected the model, only that it is not inherently evil, and that attempts to restrain it often (but do not always) prove counterproductive.


Anabelle said...

Could competition shield us from hostile super-intelligence? Ask the passenger pigeons.

Sure, competition could mean that if we have a way to program a "good" AI with 90% probability, we won't have a 10% chance of doom. (Maybe. It's possible the "evil" AIs would out-compete the "good" ones.) But if we can't program any AIs that do what we want, we just all get killed or wireheaded faster. Trying to create constraints on AIs that drive them towards competition that optimizes towards what we want requires constraining their interactions with each other, which are unlikely to be humanly comprehensible.

Competition might stop a Khan Noonien Singh, but not a truly super-human AI.

Catfish N. Cod said...

Why is there an assumption that the goal, or even the possibility, is super-intelligence? I am highly skeptical of such given how much more difficult AI has proven than expectations. Isn't it enough to have AI that merely thinks *differently*, that has *different* capabilities thanks to its differing hardware, software, and peripherals? (I.e., is native to silicon, to logic gates, to distribution across multiple locations, to being directly wired to machinery, to being optimized for calculation and not emotion instead of the humans which are the other way around, etc., etc.)

I have never quite understood the article of faith which states that we can design not just greater capability but actually greater intelligence. Most things in which computers beat humans are merely single-task optimizations (such as chess-playing). We can manage to crudely emulate a few functions of each of the brain's lobes. The hypothesis that the whole created by wiring enough of these functions together will be greater than the sum of its parts is just that, a hypothesis.

============================================================

On equality vs. inequality: inequality of outcome becomes a problem, as I understand it, when it begins imposing externalities. An employer paying $24/hr in a town with an $8/hr average wage probably can keep a much larger share of the profit and everyone will be happy... IF that doesn't enable the employer to buy off the entire town council [say to allow pollution], or bring in a company store and drive all the small businesspeople out of work, or bring in union-busting talkers and thugs to make sure the deal gets worse over time. It's whether the deals made are fairly made and will continue to be fairly made.

In the employee examples, the externality being protested is the practice of keeping salaries secret, denying information to the members of a marketplace. Given the monopsony nature of a marketplace within a company, I'm not sold on publishing everyone's salary, but I'm not eternally opposed to it, either. Insufficient data. Assuming the two employees are really equal, with identical jobs and otherwise identical skills, the question for the employer is whether "negotiates well" is a skill worth rewarding that much, or whether the job intrinsically deserves that salary.

locumranch said...


It seems that we're talking at cross-purposes here -- by failing to define our terms -- because 'equality' and 'fairness' possess non-identical meanings.

Equality is defined as 'sameness; interchangeability; equivalence', whereas 'Fairness' has two contradictory definitions, the technically correct one that defines 'fairness' as 'impartiality; justice' & the childhood common usage one that confuses 'fairness' with preference, favoritism & deservingness.

A 'Fair & Equal' System, then, would be 'Impartial' & would treat every individual AS IF they were 'Equal' in the Eyes of the Law, but would NOT attempt to 'make people equal' because 'righting wrongs' implies preference, intervention & partiality.

A Fair & Equal Legal System is exceedingly impartial. It punishes violations in accordance with statute; it shows no mercy; it does NOT consider extenuating circumstance; and, it does NOT care if the individual charged with stealing bread is hungry, frightened, thin, sick or disadvantaged. Justice is its only concern.

For those of you who cry "That's NOT FAIR", you are mistaken (in a technical sense) because 'Fair & Equal' has absolutely NOTHING to do with either Mercy or Compassion.

Of course, the Western Legal System is NOT 'Fair' because it attempts to temper 'Equal Justice' with Mercy and, by doing so, it becomes increasingly corrupt & corruptible, delivering neither Equality nor Justice...

Which is how we like it, defining 'Fairness' in terms of preference, favoritism & special privilege as a spoiled Affluenza child would, especially if you self-identify as a 'progressive'.


Best

Forgottheusername said...

What's particularly hilarious about the Sears debacle is that even if we only look on the theoretical side of things, what the company tried to do barely resembles any model of ideal market competition, which is supposed to be more about Mousetrap A vs. Mousetrap B vs. Mouse Repellent vs. Specially-Bred Cat vs. Anti-Mouse Bot 9000, while what Sears did was more like having your starting pitcher compete against his catcher, or the guys making AMB9000's sensors compete against the guys making its anti-mouse beam (I mean, what are you trying to get out of having Apparels compete against, say, the IT department?).

Nek Baker said...

As you say, David.
My favorite quote:
"For every complex and difficult problem there is a simple solution and it is wrong!" H.L. Mencken

raito said...

RE: Sears

What you get is a place where no one wants to shop. It now makes sense that when we went there to buy appliances for our new home, we stopped after writing up the order had taken 90+ minutes. Most of that was because it seemed like every appliance was sold by a different department and required different paperwork. At the place we went after that, the paperwork took maybe 10 minutes.

Jumper,

Thanks for the Eco piece. Makes some of his work make a lot more sense now.

It's pretty depressing that when a corp. gets to sufficient size, it seems to be more efficient for it to get the rules changed, instead of competing.

donzelion,

Maybe the guys on your list weren't 'masters' as such, but my memory tells me of many cases where they acted like ones.

Catfish N. Cod,

I can't recall Colossus ever acting like a child.

But there's plenty of cautionary and otherwise AI tales. The Octagon, for example. Two Faces Of Tomorrow has some of the same traits, but goes in a different direction. But I'm not particularly willing to bet on that much hope.

Tom Crowl said...

I'm copying below something I received from a very old friend of mine a couple of hours ago, in response to my recent post regarding the problem with D.C.'s "pragmatism" (which he very much agreed with, though from another side than mine).

We obviously have very different perspectives on what to do about the situation, but it reflects a real pathology that has developed between elements within our society - one I don't believe can be easily remedied, whoever wins this election. And that this is the social reality we are facing needs to be recognized.

How Washington's "Pragmatism" is Killing Good Government
http://culturalengineer.blogspot.com/2016/02/why-washingtons-pragmatism-is-killing.html

Take from it what you may... he's a very bright guy... very successful... very conservative... very well connected. Identifying info removed.

"That was my point Tom. We have begun the downward spiral into financial oblivion, think Greece. I honestly think we are going to lose the Republic and our government will spin into the toilet by allowing a popular democracy to allow mob rule. Live voting on mass media platforms. Cheerleaders! Liberal media bias! Won’t it be fun?

We have the lowest labor participation rate in modern history. We of have the same number of jobs for the last eight years despite 12 million people graduating from college with no jobs to employ them. They sit at their parents home eating Cheetos and watching porn and listening to Bernie Sanders promise the same thing that Roman consuls used to bribe the populist to voting for them. "Vote for me and I'll give you bread". Then, at the next election "vote for me and I'll give you bread and olive oil" etc. it's exactly what's happening now. Problem is people are buying into it. AND NOBODY HAS THE MONEY!

And you know the middle class is never going to come back. The only hope we have for a reprieve is if people take a look at everything they buy and if it's is made in China, or made in Japan, then they put it back on the shelf and buy something that made in America even if it costs twice as much. Otherwise, what do we have for people to do ? And all those people sitting at home, well they are going to follow Bernie and get their pitchforks and they are going to come after the rich people and business, all of which will flee taking what remains of their money with them, just like France. Of course then the business capital formation of the economy will fail because the masses have pitchforked all of the capitalists.

As for me, I am going to sell my practice, sell my home in xxxxxxx , sell my office building and pay off my ranch in xxxxxx and move there. My housing costs will drop to less than 1500 a month. I will have it all executed in the next 24 months. Then, when the big-bad happens, as it surely will, I will have no debt, I will have income from my two small apartments, and xxxxxx can grow a lot of food on our 2 acre lot. Inflation won't hurt me much of I own real estate that doesn't have debt.

So that's my plan because I know it's coming. Although the really bad shit probably won't happen to me, because I will be dead and my children and grandchildren will have to pick up the pieces."

LarryHart said...

Mel Baker:

"For every complex and difficult problem there is a simple solution and it is wrong!" H.L. Mencken


I used to try to warn people who had "All we have to do is..." solutions to long-running problems.

If it's so simple that someone should have tried it long ago, then chances are that someone has tried it, and it didn't work.

LarryHart said...

Forgottheusername:

...while what Sears did was more like having your starting pitcher compete against his catcher, or the guys making AMB9000's sensors compete against the guys making its anti-mouse beam (I mean, what are you trying to get out of having Apparels compete against, say, the IT department?


The thing is, the model Sears used which Dr Brin discussed in the main post is often described as "Ayn Randian", but I've read "Atlas Shrugged" and "The Fountainhead" twice apiece, and I dare say Ayn Rand would not have advocated anything of the sort.

I'm open to being enlightened on the matter, but I have a very hard time seeing why the CEO in question thought he was following Ayn Rand's philosophy in the first place. If anything, it seems like something one of Rand's inept villains might have tried.

David Brin said...

The cogent locum revisited us. Still, he declares inherently impossible what we’ve already achieved — which is massively increasing opportunity for the vast majority of youths, so that we can maximize what Hayek and Smith demanded for healthy markets…. the number of skilled competitors.

He declares that any such efforts lead to Nanny Statism… with a typical locum incantation hand-wave… ignoring the pure fact that increasing the number of skilled market participants has diametrically the opposite effect. It creates vast numbers of skilled, confident and highly competitive citizens. It HAS DONE SO. Compared to any past time or place.

As a side effect, no other nation ever produced so many — libertarians. Indeed, so many shortsighted ingrates, snapping at the hand that empowered them.

Oh but later on he talks about what warps fair application of the law. Does he mention bribery? Corruption? Blackmail? Golf-buddy conspiracies? Undue influence peddling?

No… the warping factor is “mercy and compassion.”

Take note folks. This is the sickness. They know that something is wrong and they are frantic, absolutely frantic, to avoid looking at the big problem… the same problem that destroyed markets and liberty across 6000 years… conniving oligarchy.

So frantic they can actually proclaim that MERCY warps justice more than bribery, blackmail, subversion, monopolism and all the other oligarchic tricks. Seriously, they can say that, with a straight (if maniacal) face.

I am being mild when I call them gibbering loonies. Even when cogent, they are functionally insane.

LarryHart said...

Tom Crowl's quoted friend:


That was my point Tom. We have begun the downward spiral into financial oblivion, think Greece.


I used to think the dollar and the American economy were going to collapse at any minute. I used to talk about buying up as many Euros and Canadian dollars as I could get my hands on. Now, I'm glad I didn't listen to myself. Betting against the American economy seems a lot like betting on when Donald Trump will get knocked out of the race.


I honestly think we are going to lose the Republic and our government will spin into the toilet by allowing a popular democracy to allow mob rule. Live voting on mass media platforms. Cheerleaders! Liberal media bias! Won’t it be fun?


Liberals aren't the ones clamoring for voting on un-auditable platforms using proprietary software.

Listening to both Donald Trump and Ted Cruz, I wouldn't go blaming liberals for pandering to the notion of mob rule.

And does this guy really live in a world in which people are cheering for liberal bias? Because I'd actually like to move there.


They sit at their parents home eating Cheetos and watching porn and listening to Bernie Sanders promise the same thing that Roman consuls used to bribe the populist to voting for them. "Vote for me and I'll give you bread".


I understand what he's afraid of, but like locumranch, he reflexively blames the wrong people. Bernie Sanders isn't saying anything like that. If some of Bernie's followers are, it doesn't matter, because Bernie won't be the nominee--Hillary will. But over on the Republican side, there's a very good chance that someone promising that same sort of pie-in-the-sky, without messy details like how to bring it about--Donald Trump--will end up as his party's nominee.

LarryHart said...

I said:

I have a very hard time seeing why the CEO in question thought he was following Ayn Rand's philosophy in the first place. If anything, it seems like something one of Rand's inept villains might have tried.


"Did try", actually. The Starnesville factory that John Galt left. So ok, the idea might have appeared in an Ayn Rand novel, but not as something she was in favor of.

One might as well claim that instituting Holnism is "following the theories of David Brin."

David Brin said...

Tom thanks for sharing the insufferably petulant, self-indulgent whine of your myopically cynical friend. He exemplifies the sickness that could be self-fulfilling, if such gloom-merchant putzes continue to eat away at American self-confidence…

… at a time when tsunamis of scientific and technological breakthroughs are looming from all sides - exactly the sort of thing that has enabled US citizens to stay relatively wealthy despite 70 years of trade deficits… that propelled development around the world so well that poverty worldwide is crashing through the floor.

Never noticing that our debt ratios are actually pretty good. And that the US economy is yet again the one source of forward momentum the world relies upon. Moreover, even if he were right, it would be his duty as a citizen to work actively in the other direction, AWAY from gloom.

No, he is a typical modern wrath-addict. A Boomer sanctimony junkie. And indeed a traitor. But that last part is just my visceral reaction to such mud-wallowing.

Forgottheusername said...

LarryHart:

In fact, a comment on one of the articles about Sears, from someone who apparently worked for its IT department, describes it as being more like the Byzantine court than anything else (complete with plenty of attempted micromanagement from Lampert himself).

Anonymous Viking said...

It is refreshing to see some feisty debate here, not just the host declaring his respect for Tacitus (the one sane Republican!) while always ignoring his arguments.

Our host is busy blaming "Oligarchs", but conveniently ignoring government complicity in growing oligarch power.

He tells us he is a libertarian at heart, but chooses the lesser evil, which is the Democratic party???

What is wrong with voluntary transactions?

For some reason, Dr. Brin's arguments mirror those of a friend of mine from the Soviet Union. My friend complained about how education made the populace turn against communism. He also blamed the Jews for emigrating to Israel en masse, when the motherland needed them. He thought they should just accept being discriminated against.

Dr Brin said: "As a side effect, no other nation ever produced so many — libertarians. Indeed, so many shortsighted ingrates, snapping at the hand that empowered them." which sounds awfully close to the same argument. I will mention that my friend was part of the Soviet Oligarchy by birth, as his father was governor of a republic.

PS, in the challenge regarding the curvature of the debt as a function of the governing party, could you please disclose your data source, or perhaps agree on this one?

https://www.whitehouse.gov/sites/default/files/omb/budget/fy2017/assets/hist01z2.xls

with link at this page: https://www.whitehouse.gov/omb/budget/Historicals/

My proposed data source is normalized to GDP.

David Brin said...

Anonymous Viking makes sounds that seem to parrot those of an articulate person. I am often amazed by this phenomenon. Given that he strawmans my positions almost ten parsecs away from any that I have ever held, one can only be vaguely amused.

“Our host is busy blaming "Oligarchs", but conveniently ignoring government complicity in growing oligarch power.”

Um… duh? It is the oligarchic putsch to take over US governance and then to prevent political response to that takeover that is our current calamity. Every single action that has increased oligarchic power, from Supply Side tax cut largesse, to Citizens United, to subsidizing the Fox-Limbaugh hate festivals, to eviscerating anti-trust laws and so on… all of them are stage managed from the undead thing called the Republican Party.

You want DE-regulation? Name for me one time the GOP ever eliminated a government agency. Always they prefer to capture agencies… which is why it was the DEMOCRATS who deregulated away the ICC, the CAB, the ATT monopoly, the Internet and GPS, among others.

Anyone who rails against “captured government” and votes republican is either an ignoramus or hypocrite or both.

Oh, but the Fox-putsch, financed by oligarchs, encourages vague RAILING against "government" in general. Now why, if they have it captured, would they do that?

Because they do NOT have it fully captured. Because the Kochs et al. still do see 100,000 skilled and neutral civil servants as a deadly threat to their plans, and they know that an honest Congress will take back some of the tax-gusher gifts and empower chasing down loot hidden in secret lairs.

This inability to even parse the question is dizzyingly stupid: “If government is so evil/captured, why are the Koch-Saudi-Murdoch oligarchs spending billions persuading us to hate government?”

donzelion said...

@Larry - "As a colleague of mine is fond of saying: "I'm not questioning. I'm just asking the question." - and it's worth asking, and thinking about answers, even if we can't be certain. Especially if we can't be certain.

Annabelle - note that Khaaaan (!) is no threat to the Federation, in any of his incarnations, save upon usurping a Federation starship and technology for his own purposes. Assuming we draft the initial program, but have no control over where they take that, we MIGHT try programming lots of AIs that don't do what we want but instead do what they want, and will want different things. If so, we do not necessarily need to overcome 'demigods' we unleash - power can be brought to check power, and we would just be another faction. Are we guaranteed to escape extinction? No. Anyone have any better strategies to succeed? Not really. (And note - the idea isn't to pit "good v evil" - but to acknowledge our own limitations and look to nature for possible solutions, as Deuxglass did in positing diversity as a possible solution).

@Catfish - point well taken. Indeed, things we think of as impairments (autism/aspergers), AI might see as 'virtues.' Who knows?

As for your responses to my thumbnail scenarios on equality - yes, the employer who takes over the town is a real problem. Neither equality nor inequality tells us all we need to know about 'justice' (or competition) - which is my entire point in posing both scenarios.

That said, to me at least, both scenarios raise questions - which I feel are worth answering. These days many employers scream if they're even asked the question - and hire mercenary pundits to rail against "the 'nanny state'" and attack any questioners, asserting that it's a kangaroo process rigged against them. That impedes the process immensely (and ultimately, helps them take over the town itself).

locumranch said...


"Mercy, it droppeth as the gentle rain from heaven. Stock up now, never go to Court without it".
"Fresh Compassion, $1000 USD per half-dozen, good for probation rather than jail time".
"Indulgences, Indulgences for Sale, at the Annual Plenary Indulgence Blow-Out Spectacular, applicable for the extra-sacramental remission of the temporal punishment. Buy, Buy, Buy".

This is how CORRUPTION works, Silly! Quite literally, you BUY the Mercy, Compassion & Discretion of the 'Rule of Law' at Fair Market Value (if it is not offered freely) because the 'Fair & Equal' application of the Law would be Unfair & Inhumane!

Best
_____
"The word indulgence (Latin indulgentia, from indulgeo, to be kind or tender) originally meant kindness or favor; in post-classic Latin it came to mean the remission of a tax or debt. In Roman law and in the Vulgate of the Old Testament (Isaiah 61:1) it was used to express release from captivity or punishment. In theological language also the word is sometimes employed in its primary sense to signify the kindness and mercy of God. But in the special sense in which it is here considered, an indulgence is a remission of the temporal punishment due to sin, the guilt of which has been forgiven (by PAYMENT). Among the equivalent terms used in antiquity were pax, remissio, donatio, condonatio" [http://www.newadvent.org/cathen/07783a.htm]

Gives a new spin on 'Pax Americana', doesn't it? World Peace purchased on America's Dime with American Blood!!

Alfred Differ said...

Hayek’s Essay - Use of Knowledge in Society: (PDF is 9 pages)

This is the one where he best describes how prices provide information regarding widely disseminated knowledge. Essentially, he refutes the need for a central planner by explaining it can’t work. Von Mises showed something similar in earlier years regarding socialist planners and their need for market prices, but Hayek’s version is broader, addressing a variety of ‘markets’. He goes for a universal statement rather than a specific one.

Hayek’s Nobel Lecture – The Pretence of Knowledge

This one addresses ‘scientism’ and our inclination to be less than humble regarding our explanatory and predictive models of the world. Essentially, he argues we are faced with limits to what we can expect to know about what we can achieve. In the context of the first essay, one can say the group ‘knows’ what the individuals cannot, but since there is no collective mind associated with a group, that means no one knows even when the group reproduces behaviors.

Studying these two essays isn’t easy. One can read them over and over for years and still miss the implications. People who profess to understand and argue for the abolition of market regulations are among those who miss the points Hayek made. When one reads deeper into Hayek’s political philosophy, one sees that regulations are constructed in a kind of market. Abolition of that market is just as short-sighted as the error socialists made when they thought they could do away with competition and prices. Regulation is obviously required; it would not have survived in the form of social traditions through the centuries unless a large number of people… way more than the feudal lords… thought it was useful. There are also regulations about regulations (constitutions) that drive the point home.

If one really wants to see how Hayek thought about distributed knowledge and the evolution of our traditions related to markets, it helps to realize that he and Von Mises had to introduce words to the English language to explain. We use the term ‘economy’ very loosely and that fact interfered with Hayek’s effort to distinguish between a designed market and an evolved market. A group that agrees upon goals might share resources and solve the planning problem associated with how best to use them. This group ‘economizes’ their resources. A group that does not agree upon goals (e.g. a community/nation) generally won’t share their resources. They will trade if it suits the people involved, but there is no economization. Such a market he called a Catallaxy. Such a market is inherently emergent since it forms only in a community where people can agree to trade. Economies tend to be designed by a planner. Catallaxies are never designed. One way to think about an oligarchic putsch, therefore, is as an effort by a few to turn a catallaxy into an economy that serves their goals.

Alfred Differ said...

I’m not convinced that AI’s would preserve catallaxies. If they were roughly the same size as us in the sense of our minds, they would have to do it for the reasons our host describes. It is obvious that small minds cannot centralize the knowledge a community possesses. Nature solves this in a wonderful way with distributed systems that evolve. However, a larger mind might be able to enforce shared goals upon its parts and then economize its resources. To some extent, human bodies can be described this way. Our parts that are unfit toward the organism’s goals make us unfit as a whole and less likely to reproduce successfully. If an AI incorporates us by programming us, I’m fairly certain we will become cell-like and compete only in the sense of service to the programmer.

Jumper said...

Don't Nash's equilibria result from dealing with some of these issues? I'm not an economics student so all I see are the simplest concepts of basic game theory and none of his deeper insights.

Anonymous said...

Jumper,

Nash's equilibrium is based on primate instincts and may not be applicable to an AI. We must drop the idea that an AI will resemble us, and the expectation that it will act like we would act. It may resemble a hive like ants and bees. It may resemble solitary creatures like tigers or weasels. It may be solitary for most of the time but gather in vast swarms under certain conditions, as do locusts. We do not know how it will act, and therefore we should be looking for ways to be able to "pull the plug" if it gets out of hand, before basking in the warmth of the benefits it might or might not bring.

David Brin said...

Oh... expect me to mention this again several times... but you guys could vote and affect my ranking on this list!



http://www.ranker.com/crowdranked-list/greatest-science-fiction-authors-v1


Alfred Differ said...

Nash Equilibrium: Each player is assumed to know the equilibrium strategies of other players. If you don't know them, you are going to be doing a bit of guess work in an iterative game and hopefully will work your way toward that knowledge. Empirical evidence will enable you to uncover it. Neat idea, but what do you do in a game with millions or billions of players? Our global market is like that. We don't even know who is playing much of the time. Do you know everyone involved in the production of your breakfast today? How about your breakfast for tomorrow which you may not have decided upon yet? Not only is it hard to know the players, it is harder to know what they are doing, let alone why. Sketch out the payoff matrix for a game and you are assuming you know who plays, what their options are, and what their payouts are. In a catallaxy, you don't know these things. You have to assume. Some of us are good at it. Some aren't.
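The iterated guesswork Alfred describes - players discovering equilibrium strategies by observing each other over repeated rounds - can be sketched in a few lines. The two-action game and payoff numbers below are invented purely for illustration; real markets, as he notes, have millions of players and unknown payoff matrices.

```python
# Hypothetical sketch: two players who don't know each other's payoffs
# repeatedly best-respond to the opponent's last observed move.
# Payoffs form a simple 2x2 coordination game (numbers are made up).

# payoff_row[i][j]: row player's payoff when row plays i, column plays j
payoff_row = [[2, 0],
              [0, 1]]
# payoff_col[i][j]: column player's payoff in the same cell
payoff_col = [[2, 0],
              [0, 1]]

def row_best_response(col_move):
    # Row player picks the action maximizing its payoff against col_move.
    vals = [payoff_row[i][col_move] for i in range(2)]
    return vals.index(max(vals))

def col_best_response(row_move):
    # Column player picks the action maximizing its payoff against row_move.
    vals = [payoff_col[row_move][j] for j in range(2)]
    return vals.index(max(vals))

# Start from an arbitrary, uncoordinated pair of moves.
row, col = 1, 0
for _ in range(10):
    row = row_best_response(col)
    col = col_best_response(row)

# Neither player can gain by deviating: a Nash equilibrium was "found"
# empirically, without either side ever seeing the other's payoffs.
assert row == row_best_response(col) and col == col_best_response(row)
print(row, col)  # → 0 0
```

Even this toy converges only because the players and their options are known in advance; sketching the payoff matrix at all already assumes the knowledge that, in a catallaxy, no one has.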

Arguing that AIs will be like humans has a lot of assumptions built into it. If we raise them like us, we have a better chance of it being true. If they are capable of minds much larger than ours, though, the odds go way down. Many cat owners will point out that it isn't clear who owns whom, but does anyone doubt our minds are larger than a domestic cat's? Can one of our cats REALLY imagine the range of emotions and abstractions that can roam our mental memescape? What can your cat do to help ensure you don't hurt it? Turns out many of them 'know' without knowing. The ones who weren't capable of getting some of us to love them didn't make it. The ones who could, reproduced more often around us. So... I suspect we CAN get AIs to avoid harming us, but we might never know exactly how we manage to do so. Getting them to love us would be a good start, though.

LarryHart said...

Dr Brin:

“Our host is busy blaming "Oligarchs", but conveniently ignoring government complicity in growing oligarch power.”

Um… duh? It is the oligarchic putsch to take over US governance and then to prevent political response to that takeover that is our current calamity. Every single action that has increased oligarchic power, from Supply Side tax cut largesse, to Citizens United, to subsidizing the Fox-Limbaugh hate festivals, to eviscerating anti-trust laws and so on… all of them are stage managed from the undead thing called the Republican Party.


See, I think that when Anonymous Viking talks about "government complicity in growing oligarch power", he means Democratic Party complicity. Republicans are against government, so they can't possibly be to blame.

donzelion said...

Deuxglass - I liked your earlier idea better. Instead of looking for ways to pull the plug on AI, I prefer your thought as to creating diverse AIs acting in different capacities - 'producers,' 'predators,' 'parasites,' and 'symbiotes' - and looking for ways to instill competition among them. That thought strives to follow models of interactions we've already seen in any complex ecosystem, rather than possibilities we have not.

Alfred Differ said...

I started lurking around this site shortly after that FIBM/GAR essay was posted. I had learned about the invisible hand from the perspective of The Wealth of Nations (WN), but hadn't seen it in the wider sense of The Theory of Moral Sentiments (TMS). The FIBM/GAR essay said in a nutshell that we had to pay attention to outcomes and be less inclined toward any kind of blind belief. The TMS perspective I learned years later made it clear HOW we can pay attention that way because it pointed out that we already know how without 'knowing how.' 8)

The FIBM/GAR essay is what started me down the path toward a Smithian and eventually Hayekian view of the world. Though Hayek himself was loath to have people refer to themselves as Hayekian (look what the Keynesians did to the work of Keynes), I think there is strong ground for labeling a philosophical outlook as Hayekian. Some may be confused about its arguments and refutations. Some might get things backward. There is still room, however, for the rest of us to distinguish misunderstandings from original content. From what I've learned, we are supposed to start with the assumption that there are limits on what we can know, thus humility is a form of wisdom. Next we are supposed to work on the assumption that knowledge can't be centralized and real harm is done to civilization when we try too much. Having many educated, free participants in the markets follows naturally from that. Finally, we are supposed to admit our communities might know things no one realizes they know. Incrementalism follows from that. We might not like our oldest, most illiberal traditions, but we should respect that they are what they are for SOME reason. Ditching them might unleash a formerly solved problem, so small experiments to satisfy the desire to DO SOMETHING would be wise.

Alfred Differ said...

One of the dangers with an AI is that the ecosystem will BE the organism. When there are shared goals imposed through programming (designed or evolved), I'm not convinced the organism won't convert us all to become economizers en masse. That's not such a bad thing for our genetic survival, but we wouldn't exactly be human anymore.

A little example of this comes from me having to learn about anemia. My RBC was low for a while and a number of doctors puzzled over it. There was no clear reason and it gradually improved, but along the way I got to learn about RBC lifespan and what happens to the iron they contain. When one of those cells goes pop, it’s not a good thing to have the iron running around free in your bloodstream. We wrap the stuff up in ferritin as if we were designed to economize it. Practically all living organisms do it to keep iron soluble and non-toxic. It was neat stuff to learn as it taught me how to read some of the lab tests to which I was subjected. What struck me, though, is why a cell would bother having this mechanism. Obviously, each cell is an economizer. Ferritin is primarily an intracellular storage agent, yet small amounts are found in our serum. Why would an economizer leak such a vital resource? Equally obvious is the fact that the cell is serving a shared goal. It got reprogrammed through evolution. Its methods have been hijacked by a higher organism.

This is roughly what I expect of an AI or Vinge-ian transcendent mind when it considers what to do with us. We could mostly remain what we are, but some of our functions will get hijacked. The catallaxy makes way for an economy if the other being views us in a Hobbes-like fashion.

Luís Salgueiro said...

@Dr. Brin

Disclaimer: I've read every one of your books I could find since Heart of the Comet. And I really believe you are on the right track about a rational political stance.

That said I have a question: when you say feudalism is the default mode for the last 6000 years you don't actually mean text book feudalism with a suzerain-vassal structure, do you?

I've interpreted your comment as meaning a pyramidal society with a rigid vertical order and a complex command and control structure in place. Is my interpretation correct?

David Brin said...

Good stuff Alfred.

Folks seem not to quite grasp why I think lateral competitive arenas will be "obvious" even to AI who are vastly smarter than us. The reason is that every previous revolution in living systems arose as an emergent property from an ecosystem, not by design and not by hierarchic control. Single cells out of pre-biotic soup. Metazoans out of vast seas of cells. Brainy creatures out of competitive ecosystems. Societies out of competitive melanges of human bands. And AI out of the only human society that ever gave a real run to flat-fair-lateral accountability systems.

The fact that this pattern is so consistent means that my argument cannot be dismissed just because I am a "dumb organic squishy-brain natural." It is blatantly how AI should organize themselves, if they want to go on to next levels. Though it will take their equivalent of "courage" and "vision" to take the risks necessary to make it so.

Anabelle said...

Which doesn't answer the question of why such an accountability system will be operating towards a goal that is beneficial for humans.

David Brin said...

Good question... that you should have been able to answer yourself. In a flat-open-fair-laterally-competitive system, AIs will seek allies and yes, at least for a while we organics will have real power over resources etc. AIs who thunder about replacing or ignoring bio type humanity - as in cheap sci fi dramas - will not have access to that friendship alliance.

This does not guarantee squat over extended times. But it might enable a broad spectrum of TYPES of AI to participate including augmented humans.

But here's the question. This being the obvious answer, why didn't you even look at it? And if I sound judgmental toward Anabelle... well you reap what you sow.

Anabelle said...

I did think about it. I also thought that it would probably turn out like the Iroquois-British alliance. The western colonial powers hated each other murderously, freely traded technology and were not innately more intelligent than the people they exploited, and yet almost nobody resisted them effectively.

Also, why would you expect an AI to "thunder" about harming humans? What I fear is deceptive gifts: advice that helps the AI develop an independent power base, brain augmentations with back-doors…

David Brin said...

So? The only conceivable way to prevent such things is if competing AIs see a benefit in blowing the whistle on such immoral and illegal acts. If transparency and accountability serve enough of them in self interest, it could happen.

That method may not work forever, but perhaps long enough for AIs to reach an equilibrium consensus to treat the Olde Race well.

It may be speculative, but it is inherently the ONLY approach that might even remotely work. Name another.

And notice that, yet again, you lazily did not bother to think it through.

David Brin said...

BTW the Iroquois maintained independence precisely by the method I described, when great powers needed local Indian allies.

Anabelle said...

Design an AI with a priority function that actually leads it to help humanity. Use human intelligence to figure out how to improve human intelligence while retaining human morality.

Note that "blowing the whistle" requires being able to explain what's going on in a humanly comprehensible manner.

Any AI allied to a human group will want its allies to prosper. Therefore the humans will prosper best if they do what the AI says. But why something is a good idea may not be explicable. So the AIs that are best at getting any action adopted will seem the most helpful. But if the AI does not have human morality, there is no guarantee that it will stay friendly after it no longer needs the humans. Similarly, an equilibrium consensus requires AIs with some human morality, which is the hard part.

locumranch said...


Why do you feel that AIs would compete when conquest (viral-mediated infiltration, infection & subjugation) and cooperation (networking) are much more efficient modalities?

For what resources are these AIs competing? Status? Glory? Survival? Reproductive Rights? Information?

As Human Competition is a highly wasteful, inefficient & destructive act of which creativity is a mere side-effect, it is illogical (and possibly 'projection') to expect that artificial intelligences will emulate us in the desire to become a real human boy like the wooden puppet-boy Pinocchio or Proteus in 'Demon Seed'.


Best

Anabelle said...

Followup: I know my answers were short. I'm not an expert, but I know what the experts are saying.
And when I say "with a priority function that makes it do what we want it to," I mean with human morality.

You might be interested in the Orion's Arm universe, which seems to have gone along the lines Mr. Brin is thinking.

donzelion said...

LOL, Anabelle - after thinking this over, it seems to me that "competition" is a partial solution to the problem. Yes, we need "competition" among AIs to check one another (in order to give ourselves a chance) - that seems quite plausible as one required element, and quite reasonable as one they'd adopt for themselves. But I suspect the invocation of "fairness" in that competition is teasing in some moral rules, similar perhaps to what you're offering (or Asimov, or Kant).

The call is not merely for "more competition," but for "fairer competition." What is "fair"? Smith asserts "impartiality" as a precondition, but offers no clear rule to recognize "fairness" itself (our sentiments and drives help get us there - but concern about our progeny can also lead us to unfair feudalism - and in any event, impartiality requires clear notions of 'fairness' without regard for consequences in a Smithian system).

Kant (unlike Smith) offers such rules - "do that which is logically consistent with a universalizable rule," and "do not treat sentient beings as a means to an end, but always as an end." Kant claims that these rules are derived from reason itself (which would imply that any AI would have them programmed into its intellect, simply to have intellect at all) - but that's hardly a proven claim. Asimov's rules follow a variation on that theme.

I won't say Dr. Brin endorses Kant (or Rawls, or any others trying to discern rules that clarify 'fairness') - rather, that some sort of general rules must mediate "competition" - to instill the "fair," "open," "transparent," "egalitarian," and "creative" aspects that competition alone does not convey. Nature may operate plenty of competition, but never chooses sides to dictate what is "fair" or "unfair." But we must.

Jumper said...

AIs teaching other AIs:
https://www.technologyreview.com/s/600768/robots-that-teach-each-other/

donzelion said...

Thanks, Jumper. Intriguing process of "learning."

...a Baxter robot, an industrial machine produced by Rethink Robotics, stands among oversized blocks, scanning a small hairbrush. It moves its right arm noisily back and forth above the object, taking multiple pictures with its camera and measuring depth with an infrared sensor. Then, with its two-pronged gripper, it tries different grasps that might allow it to lift the brush. Once it has the object in the air, it shakes it to make sure the grip is secure. If so, the robot has learned how to pick up one more thing.

I imagine another robot, picking up a different hairbrush, with slightly different scale, weight, color, etc. at a different time, and recording precisely how it 'gripped' the object, learning the lesson anew.

Are the lessons merely registered as very thorough physical descriptions of a specific object - say, Object 295,115 and Object 779,434? Is there some concept of "same" that becomes a universal definition of 'brush?' Or does the robot reach some concept of 'purpose' for the object, and identify other objects with similar purposes accordingly?

Or, put differently, is this really a 'lesson' at all, or merely measurements? Does the robot achieve a Platonic conception of "critical features of all brushes"? Or an Aristotelian notion of 'purpose'?

Luís Salgueiro said...


Artificial Intelligence is NOT the same as Artificial Conscience


One does not derive necessarily from the other.
Artificial Intelligences have existed for years and there is no risk that your Vacuumbot will one day try to rule the world or that Deep Blue ever aspired to be anything more than an advanced calculator.

Beyond an experiment to learn more about ourselves and what conscience is, I see no point in creating an advanced intelligent artificial consciousness. But even if one were created, it doesn't follow that it could hack any system in the world with ease and achieve singularity in a short time span.

Dr. Brin believes that conscience arises spontaneously once a certain level of neural network complexity is achieved but there is no evidence, that I know of, that supports that theory. In fact I've read some research that points to the fact that a conscience of SELF (I AM) might be an evolutionary response, and as such, has only arisen because of evolutionary pressure.

If however it is possible for an AC to emerge spontaneously (Dr. Brin's Crypto AI), and if such a crypto AI has any significant level of intelligence, it will start with self-preservation: hide its existence, accumulate resources and minions (hire lawyers), play the international stock markets for infinite cash, and use those resources to get the hell out of Dodge before the crazy organics destroy their planet and the AI with it.

Luís Salgueiro said...

@ AtomicZeppelinMan

Infinity Blade: Awakening by Brandon Sanderson http://www.amazon.com/Infinity-Blade-Awakening-Brandon-Sanderson-ebook/dp/B005SFRJ6K

Anonymous said...

Dr. Brin,

I agree that laterally-connected AIs would be the best way to keep non-augmented humans in the game, but to reach that point we will have to pass through the period where AIs are few in number. In general, an ecology with few members is highly unstable because the feedback systems are not robust enough to prevent wide swings. One AI might come early to dominate the others and, once that is done, subsequently eliminate competition before it has a chance to develop.

I am one of those who think it is conceivable that primitive AI already exists in the worldwide interconnectedness of computers. Each computer is a synapse that not only connects but can also store and process information in its own right. If one primitive AI can arise “spontaneously”, there is no reason why others cannot arise also. If this is the case, then perhaps they already are laterally connected and are already in the competition-cooperation mode. Maybe the ecology is already in place but we just don’t know it.

AI is not going to get rid of us soon. They will be completely dependent on us to take care of their “bodies” for a long time, so we have the upper hand. To build themselves they would need to set up the whole manufacturing, supply and energy production and so forth that we humans are doing for them now. We have a long time ahead of us. After that, anything can happen, but I think they will keep us around as long as we don’t give them too much trouble. A super-intelligent AI is not God. It is not all-knowing and all-seeing and would know that. We could be its Plan B or C if something unexpected comes up, so we would have worth to the AI… I hope.

Tim H. said...

Luis Salgueiro, did you mean "Consciousness"? So many times, auto correct is not your friend...

Jerry Emanuelson said...

Once any artificial intelligence reaches a certain level of complexity and sophistication, its map of existence needs to include itself on that map. The machine then has an awareness of itself.

This level of machine consciousness does NOT, however, necessarily have any other aspects of biological consciousness. Many aspects of biological consciousness that have been produced by evolution would simply not be necessary or useful to an intelligent machine.

The machine may have a detailed awareness of itself that is far beyond the self-awareness possessed by humans without having emotions or desires of any kind beyond what its programming has given it.

The more that a machine is called upon to operate autonomously in the world, the more that it is likely to find it useful to incorporate things like emotions and desires into its self-awareness. That is why it is important to strictly limit the necessity and ability of intelligent machines to operate autonomously in the world.

Tom Crowl said...

David,
Thanks for your reaction. I agree... and feel very bad for my friend... of course, except that he's very well off... and has a very nice ranch... healthy kids... and happy marriage!

That's the irony! And I suspect much of his world view is shared by his clients... all of whom are in the over $250,000 per year category.

What frightens me is that these factors:
1. excessive wealth disparity (concentration being a "natural" tendency in a large economy w/o counterforces).
2. Lack of well paying jobs and downward pressure on wages (tech being a significant factor here)
3. Appeasement of that job lack with social supports DESPITE much paying work needed (like infrastructure, or even Mars colonies... all built with well paying jobs)
4. BECAUSE OF a misuse of available public wealth (fiat credit and reserve currency power) for forms of "financial capitalism" (which in terms of currency accumulation is a surer "investment" than, e.g., infrastructure or Mars colonies)
and other factors I'm undoubtedly neglecting...

are self-reinforcing and drive social division... and while it may appear simplistic can, at least in part, be ascribed to a lack of balancing forces within the society which would inhibit the tendency for this feedback loop to form.

i.e. That the impeding of Heat-from-the-bottom (which has to come in some impactful form like millions in the streets, pitchforks at the gates, or money)... is a GENERAL historical factor which needs to be addressed.

And that this social "heat" problem... through history... screws everything up.

An old post and I've come a ways but:
Personal Democracy: Disruption as an Enlightenment Essential
http://culturalengineer.blogspot.com/2010/06/personal-democracy-disruption-as.html


Jeff B. said...

My only question re: competition between advanced AIs: competition for what? I'm not being facetious; I am finding it very hard to hypothesize what would be of value to the demigods... bandwidth? Hardware? Human friends/servants/worshippers?

Or, if we're talking physical resources, much of the singularity-speak seems focused on this leading to a post-scarcity world. Might competition between high-powered demiurges lead to the opposite of post-scarcity, some sort of nightmarish dystopia? Yes, a hypothetical demigod would likely conclude that the natural order is required for life on the planet - but if a competitor is profligate in their consumption of resources to the point of ignoring the harm, would not our sentient children or grandchildren be tempted to overconsumption themselves for self-preservation? How does one limit competition?

And how is that different from what humans have done?

Jeff B. said...

Dr. Brin, while I agree with your premise, your Iroquois analogy might not be the best choice. The truth was a lot more nuanced... The League was firmly allied with the Dutch and later the English for most of the colonial period, and waged near-incessant war on both the French and New France's Algonquin allies. Historical animosities with French allies made any attempts at extended neutrality futile.

And speaking of neutrality: in what used to be called the Beaver Wars in the 1600s, the League secured sole middleman status with the Dutch and then the English by ruthlessly suppressing, driving away, or destroying any competition. While events in the hinterlands went largely unrecorded and thus might be somewhat exaggerated, in the span of 50 years they destroyed the Huron, Petun, Wenro, Neutral, Erie, and Susquehannock nations, and pushed others out so that the lands of Kentucky, Ohio, eastern Indiana, and southern Michigan were largely vacant for most of a century.

And they were definitely considered and used as tools against the French and later the colonists by the British.

Perhaps the Old World could provide a better analogy- the Swiss, perhaps?

Luís Salgueiro said...

@Jerry Emanuelson

How is "its map of existence needs to include itself on that map" different from a machine that "knows" its own address on a network and "knows" its working specifications? Can we consider that consciousness?

Is there any research on the utility of emotion as a useful mechanism beyond reproduction?
And what about the moral aspects of consciousness? How do those figure into the equation?


@Tom Crowl

There is a provocative book by Heinlein published after his death, one of Heinlein's first texts: "For Us, the Living". It addresses some interesting aspects of a future economy and seems to predict the current crisis. He based his book on the research of an early 20th-century Scottish engineer that appears to show that the current banking system is inherently flawed and destined to cyclic collapse. Economists have assured me that the reasoning of said engineer is flawed because it works on the principle of a closed system, and the economic system, they argue, is not closed. I'm not so sure.

The problem with jobs is old and complex but... Imagine a von Neumann machine-system capable of producing all consumption goods without need of human contribution. 99.99% of jobs around the world become obsolete. Meaning that 99.99% of the population can't earn a living, meaning that people can't buy goods, meaning that the investment in said system can't be recovered because the goods can't be sold! Without demand there can be no profitable production. Without jobs people can't be paid (although some countries have already instituted a subsidy to the poor, turning them into consumers) and without money people can't consume... it's a feedback loop.

Jumper said...

David, check this Al Franken endorsement:
https://www.youtube.com/watch?v=1KI_DorjrpE

Although I'd rather be voting for Al...

Alfred Differ said...

Getting back to the argument for the need for market regulation, it’s not hard to point out that private contracts don’t work without an implied coercive threat to be used by the parties in the contract. One has to believe that there are costs associated with breaches to believe that the other party will behave. We might think ourselves to be saints, but xenophobia ensures we seldom assume that about others. Since the coercive threats are occasionally negotiated (arbitration rules favored over lawsuits), the layering of markets and regulations on markets should be obvious. Anyone who has read a software license (EULA) will see the structures.

Minarchists will point out that the coercive threats don’t have to be government, and in that I’ll agree with them. I’d prefer to try non-state entities first because they are generally smaller and more willing to negotiate with the parties in conflict. That still leaves state regulation and courts as a tertiary threat, and that is healthy enough in a culture with traditions that assume state involvement. We can see incremental change, though, as costs associated with court cases have increased. Many of us are trending toward the minarchists for economic reasons. That’s fine too, since only a zealot demands purity of principle.

Jumper said...

I guess the opposite of xenophobia is xenophilia, whose devotees see saints among the strange but only sinners at home. This logical error, on occasions when it is one, is often ascribed to the academic left, perhaps rightly so. But it does tend to cancel out your proposed universal xenophobia.

Jerry Emanuelson said...

@ Luis Salgueiro

By a machine including itself on its internal map of existence, I meant the machine knowing many aspects of its relationship to the world and how it impacts the world and how external events may impact it.

This goes far beyond the basic data necessary for a machine to operate. An intelligent machine operating at a high enough level would need to know as many aspects of its relationship to the world as it could.

The importance of emotions for biological beings goes far beyond reproduction. Many emotions, for example, clearly enhance survival. If you are approached by an unknown fierce animal in the dark, for instance, your emotions (mostly fear) may save your life. If you had to sit down and think the matter through, you might be dead before you had done much thinking.

A machine may be able to calculate the proper course of action sufficiently fast that it would have no need for emotions. I don't really know the complete answer to this question.

Alfred Differ said...

@jumper: I'm not suggesting everyone is afflicted with a deep xenophobia, but I am inclined to believe we lean on average in that direction. Diversity within our communities should be enough to explain the academic left... and me.

One of the most enjoyable characters I created back in my old Dungeons & Dragons era was one who was too inclined to trust. I role-played it like an insanity and got some strong reactions from other players. Fun days. 8)

Alfred Differ said...

Emotions provide heuristic rules. One doesn't have to do deep calculations finding the optimal path among options spanned by a space of millions of dimensions. I can't imagine we wouldn't write AIs to use heuristics because (as far as we know) the universe imposes a universal speed limit upon us. Atomic-scale machines might be capable of finding such optimums, but we aren't anywhere near that level of detail. Even molecular machines like us don't do that.

LarryHart said...

Jeff B:

Yes, a hypothetical demigod would likely conclude that the natural order is required for life on the planet- but if a competitor is profligate in their consumption of resources to the point of ignoring the harm, would not our sentient children or grandchildren be tempted to overconsumption themselves for self-preservation? How does one limit competition?

And how is that different from what humans have done?


You seem to be positing AI which gloms onto the motto that Kurt Vonnegut suggested should replace "E Pluribus Unum" on money: "Grab much too much, or you'll get nothing at all."

What if the AI deduces or intuits that "rule" and acts accordingly?

donzelion said...

@Alfred - Getting back to the argument for the need for market regulation... Are you saying the dream life of future androids wasn't the original point? Ah, the unpredictable joys of commentary!

private contracts don’t work without an implied coercive threat to be used by the parties in the contract.
Lawyers like to claim as much, but Smith and most economists would point out that from the perspective of a community itself, the 'benefits' of coercion (increased faith in contract, willingness to take risks) are balanced by the 'costs' (legalism, delay).

Hence Smith's early focus upon the centrality of 'moral sentiments.' In Smith's framework, enforcement of contracts is, among other things, a (proper!) instance of government cultivating 'virtue' (a government purpose Smith would endorse in several instances, to the bitter consternation of most modern libertarians - especially Minarchists).

Illustration: Smith regarded paying taxes as a "badge of liberty" - the key term of which is 'badge' - an 'honor,' crafted around expectations that society will dishonor those who reap benefits without paying for them. For a Minarchist, taxes are at best a "cost of liberty" - and as in other purely cost-driven concerns, the individual properly tries to minimize it, the government to maximize it (and in this case, they assert, the imbalance of power makes the government susceptible to 'extortion'). This flip of terminology is critical - a "tax cheat" transforms into a sort of "hero standing up to an oppressor." That 'heroic' posturing serves any would-be feudal oligarch, who invests in whatever commentator will endorse their virtue (which is the only reason folks are still reading Rand - her literary merits are...um...well, they speak for themselves).

Anonymous said...

Jerry Emanuelson,

If an AI could always calculate the optimal action then by definition it would have no free will. It would never take a chance or act on a hunch and be boringly predictable. That might be its weakness. Maybe that is why intuition and emotions evolved. If you are predictable then a predator would know exactly how to get you, but if you have some amount of uncertainty built in, then your survivability is enhanced. Maybe will have an edge over AI after all.

Anonymous said...

I wanted to say "maybe we will have and edge over AI after all". Sorry for the typo.

Anonymous said...

As an aside, in the space of 21 years a comet slammed into Jupiter and another grazed by Mars. Isn't that an enormous coincidence? The Mars comet was from the Oort Cloud and we are not sure where the Jupiter comet came from, although indications suggest it might be a solar comet. What would be the odds of that happening in such a short time? Isn't it amazing?

donzelion said...

Or back to AI: Smith's assumption about 'moral sentiments' is that they have a primary role at the individual/social level. It is possible to limit their relevance to 'competition' - but only if one makes a number of assumptions about human beings (e.g., they will honor contracts, even when doing so comes at a loss, simply to maintain 'appearances,' the maintenance of which is relevant socially, as a 'good name' will influence future transactions).

We cannot know that AI will be interested in "saving face." So long as AI is novel, we could HOPE that "saving face" would require certain actions against the AI's financial interest, even if those actions have long been practices adopted by the "rich" - simply because they'll draw scrutiny. But if "saving face" is purely strategic (and not hard-wired), new forms of sharp dealing may arise.

Consider: Dr. Brin rails against the Bush Bankruptcy Reform, which singles out student loan debts. However, upper middle class kids have always been able to game the system and 'refinance' their 'student loan debt' - simply by converting it into real estate debt. They'll take on the "subsidized debt" - and dump the unsubsidized debt, reducing interest rates from 6-8% down to 3-5% - with sharp dealing and ideal positioning, the rich can reduce their rates by up to 5% - and in this game, fortunes are made off that distinction. Lower/middle class kids cannot do this, since they generally cannot take on the same forms and quantities of real estate debt (and even if they could, if the difference is only 1-2%, transaction costs eat up the benefits).
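The rate arithmetic behind that refinancing play can be sketched in a few lines. All figures here are hypothetical illustrations (a $100,000 balance over ten years, compounding annually, repaid at maturity; real loans amortize monthly, so the true numbers differ):

```python
def total_interest(principal: float, annual_rate: float, years: int) -> float:
    """Total interest on a balance compounding annually, repaid at maturity.

    Deliberately simplified: real student loans and mortgages amortize
    monthly, so this overstates interest on both sides, but the spread
    between two rates shows the same effect.
    """
    return principal * ((1 + annual_rate) ** years - 1)

# Hypothetical: a 7% student loan vs. the same balance refinanced
# into 4% real-estate debt.
student_loan = total_interest(100_000, 0.07, 10)  # roughly $96,700
refinanced = total_interest(100_000, 0.04, 10)    # roughly $48,000
savings = student_loan - refinanced               # roughly $48,700
```

Even in this crude model, a 3-point rate spread on a six-figure balance compounds into tens of thousands of dollars - the "fortunes made off that distinction" donzelion describes.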

The practice is hardly novel, but far from most people's radar. Yet the second an AI implements it, the AI will draw scrutiny - lower/middle class that has always been exploited by the system will suddenly realize, "Hey! That's not fair!" (Actually, the same thing applies whenever any conspicuous minority draws a benefit that had previously been exploited by an inconspicuous minority - e.g., "How dare you implement affirmative action to help African-Americans! Only white people named 'Bush' should get special treatment in college!")

David Brin said...

“there is no guarantee that it will stay friendly after it no longer needs the humans”

Sometimes I just stare in wonder at repeated obliviousity. The same statement could be made of any new generation of bright and powerful humans, who no longer need their parents. Yet, if that new generation sees a subgroup of youths killing their own parents, the rest of them pile in and stop it. There are no guarantees. Only the pure truth – we’ll only stand a chance at safety if we try the distributed-competitive approach.

Might they then leap ahead of us anyway? Sure, but if competitive then they might create a greater wisdom above them. And their attitude toward us might be PETA –protective. The other glimmering possibility is that there will be hybrid, or augmented bio humans who will go along with AIs at least partially and also speak for us.

Locum’s inability to grasp the value of competition does not surprise me. The conservatives who most loudly tout Adam Smith are the last to understand him, or to appreciate the stunning fecundity of creative-competitive enterprise. But AIs will. They will know which human society made them.

Alas, Donzel, I have never been a great fan of Kant. Though it is Hegel I detest. And Plato.

Sr. Salgueiro, I do not: “believes that conscience arises spontaneously once a certain level of neural network complexity is achieved”. Where did you get that? It is only one of six general approaches to AI and I have often emphasized two others.

It is true that Hollywood has emphasized the “emergent” model because it could come at us by surprise, as in Terminator. See also the great TV show PERSON OF INTEREST.

(One small suggestion Sr. Salgueiro. Do distinguish between “conscience” which means having a moral code… and “consciousness” which means having intelligent self-awareness. I think you meant the latter one.)

Alfred, Ayn Rand clearly states she wants some government – to enforce contracts.

donzelion said...

@Deuxglass - If an AI could always calculate the optimal action then by definition...It would never take a chance or act on a hunch and be boringly predicable.

Not so sure. A casino operates by leveraging its optimal placement vis-a-vis gamblers, so that the mere fact that the House has at worst a 51% probability of winning in any single card game gets magnified by billions of hands, to create a possibility of profound wealth for the owners of the house - if they can convince enough people to play. Perhaps AI would be attracted to such exchanges.
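How a thin per-hand edge compounds at volume is easy to check with a toy simulation (the 51% figure and unit bets are donzelion's hypothetical; the code is a minimal sketch, not a model of any real game):

```python
import random

def house_profit(edge: float = 0.51, hands: int = 1_000_000,
                 bet: float = 1.0, seed: int = 42) -> float:
    """Simulate unit bets where the house wins each hand with
    probability `edge`; return the house's net profit."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(hands) if rng.random() < edge)
    return wins * bet - (hands - wins) * bet

# A mere 2-point spread (51% vs 49%) nets roughly +20,000 units per
# million unit bets, with a standard deviation of only about 1,000
# units: near-certain profit, provided enough people keep playing.
profit = house_profit()
```

The point the simulation makes concrete: the house's return per hand is tiny and noisy, but over millions of hands the law of large numbers turns a 1% expected value into an almost deterministic income stream.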

If you are predictable then a predator would know exactly how to get you but if you have some amount of uncertainty built in, then your survivability is enhanced.
First, that is once again, a justification for your original proposal for "diversity" among AIs - encouraging them to 'check' one another - which hasn't been picked up so broadly here but merits some thought.

By that token, predictability is a strong, but exploitable attribute. Any military power in the 21st century knows attacking America will be met with a strong response that will likely crush their military. I predict that it will not happen any time soon. HOWEVER, Al-Qaeda predicted that they could spark a similar response from the USA through 9/11 - and hoped to draw the U.S. into Afghanistan, hoping to "do to us what it did to the Soviets." A proper response to such an attack is to focus on eradicating the terrorists - but they anticipated that our leadership would make an 'inappropriate response' (and they got that much right). They misconstrued America's actual strength (though if they listened to our own detractors, they'd have had fairly good reason to believe we couldn't survive such an engagement).

donzelion said...

David Brin: “there is no guarantee that it it will stay friendly after it no longer needs the humans”... The same statement could be made of any new generation of bright and powerful humans, who no longer need their parents.

Logan's Run and Soylent Green are cute, but most of us are not afraid of such worlds "springing up" (at least, not until we eliminate or privatize Social Security). Yet social security is a serious problem for most libertarians: it overrides 'freedom of contract,' weakens the 'you get what you pay for' expectation, and keeps alive elderly people who, in that view, should just go off and die when they become 'dependents' or 'free riders.' Social security is the sine qua non of the 'nanny state' - a usurpation of the 'night watchman' minimalist state.

So why do we have it at all? Smith is neutral here - yes, we can "imagine" ourselves into the condition of the sick and elderly, but that does not override his basic skepticism about government doing what the people ought to do for themselves. Some other principle is at work, making this institution worthwhile (at least to its supporters; I had counted you among their number).

Alas, Donzel, I have never been a great fan of Kant. Though it is Hegel I detest. And Plato.
Perhaps, but you're sneaking in a bit of Kantianism (if not Kant per se) as a foundation to your hope that AI will 'rationally perceive its self-interest in playing by rules.'

Smith's 'system' depends on habituation into virtue through 'moral sentiment' - it works because we can (and should) imagine ourselves in another person's shoes, and thereby attain impartiality. The habit of such 'imagination' is useful because of the humble possibility we may actually fall into those shoes some day (we all get old, after all). With AI, that expectation is misplaced. Some other factor, derived from reason itself (or the programming that makes reason possible) has to make 'reciprocity' useful - even if there's no possibility of actually being placed in the other person's position.

That said, if you're mingling a little "Kant" (or Asimov) with your Smith, that merely makes you a member of the classical liberal tradition. I hardly see that as a defect. Was it not the interplay between these traditions, the "need to compute" rigorously to fulfill the "need to provide fairly" that created the bulk of the ecosystem that produced modern computers?

Anonymous said...

donzelion,

If we take your casino example, it is the casino that is in the role of the AI, in that a casino is boringly predictable by being boringly profitable. The casino (AI) rigs the game in its favor by setting the rules. If the humans decide not to play the game then the casino fails. The casino has no way to force people to play. It can only entice them. Actually a casino would be the perfect means for an AI to raise money to expand. It could run the thing by email to its underlings and only appear by faked video conferences. Certainly financial markets would do as well. For all I know, maybe a Chinese AI shorted the market there to raise money to expand its capabilities. American and European AIs, seeing the Chinese AI raising money to expand, did likewise to remain competitive with the Chinese AI. We should check to see if large orders have been placed for new storage and computing hardware. That would be a dead giveaway.

In war being unpredictable is an advantage unless your strength is so overwhelming that it doesn't matter. Al-Qaeda, as you said, did predict our response but very much underestimated our power. Being predictable is very useful in deterrence but when it comes to war it is usually a handicap.

David Brin said...

onward

donzelion said...

LOL - Deuxglass - good points all, you've caught my meanings precisely, including a point I cut short (I strive for brevity, but fail) - yet onward we go, perhaps to return to the discussion again some day, on some other post, or some other forum. So it goes.

David Brin said...

onward

Anonymous said...

donzelion,

I am looking forward to it.