== Will Wall Street give us Terminator? Others weigh in ==
A few years ago, I posed a chilling hypothesis: that AGI — “artificial general intelligence” equivalent or superior to human intelligence — might “evolve-by-surprise,” perhaps even suddenly, out of advanced computational systems. And yes, that’s the garish-Hollywood “Skynet” scenario leading to Terminator.
Only I suggested a twist — that it would not be military or government or university computers that generate a form of intelligence, feral, self-interested and indifferent to human values. Rather, that a dangerous AI might emerge out of the sophisticated programs being developed by Wall Street firms, to help them game (many might say cheat) our economic system.
Indeed, more money is being poured into AI research by Goldman Sachs alone than by the top five academic centers put together, and all of it is helping to engender systems with a central ethos of predatory opportunism and parasitic amorality. Oh, and did I mention it's all done in secret? The perfect Michael Crichton scenario.
Now comes a book by documentary filmmaker James Barrat — Our Final Invention: Artificial Intelligence and the End of the Human Era — reviewed here on the ThinkAdvisor site (“Are Killer Robots the Next Black Swan?”), in which Barrat discusses a scenario sketched out by Alexander Wissner-Gross, a scientist-engineer with affiliations at Harvard and MIT, that seems remarkably similar to mine. Opines Wissner-Gross:
“If you follow the money, finance has a decent shot at being the primordial ooze out of which AGI emerges.”
Barrat elaborates: “In other words, there are huge financial incentives for your algorithm to be self-aware—to know exactly what it is and model the world around it.”
The article is well worth a look, though it leaves out the grand context — that “emergent-evolving” AGI makes up just one of six general varieties of pathway that might lead to AI. To be honest, I don’t even consider it the most likely.
But that has no bearing on what we — as a civilization — should be doing, which is taking reasonable precautions: looking ahead and pondering win-win ways that we can move forward while evading the most obviously stupid mistakes.
Secret schemes of moolah masters — that’s no recipe for wisdom. Far better to do it all in the light.
== Everything leaks ==
Heartbleed: Yes, It's Really That Bad. So says the Electronic Frontier Foundation (EFF). Heartbleed exploits a critical flaw in OpenSSL, which is used to secure hundreds of thousands of websites, including major sites like Instagram, Yahoo, and Google. This article in WIRED also suggests that you can redouble your danger by rushing to trust fly-by-night third parties offering to fix the flaw… and meanwhile, the "big boys" of industry aren't offering general solutions, only patches to their own affected systems.
The crux? (1) Change your passwords on sites where financial or other vital info is dealt with, then gradually work your way through the rest, as each site offers you assurances. (2) Try not to have the passwords be the same. (3) Help ignite political pressure for the whole world of online password security to have a rapid-response component (not dominance) offered by a neutral agency… one that is totally transparent and separate from all law or espionage "companies." And…
…and (4) might I ask if you've noticed that this kind of event happens about twice a year? And it has been that way since the 1980s? Each of the events a scandal in its own right… hackers grab half a million Target card numbers… or Microsoft springs a leak… or Goldman Sachs… or Equifax… or Chelsea Manning and Julian Assange and Edward Snowden rip off veils of government secrecy… and pundits howl and the public quakes and no one ever seems to draw the correct conclusion —
that everything eventually leaks! And that maybe the entire password/secrecy model is inherently flawed. Or that there is another, different model that is inherently far more robust, one that has only ever been mentioned in a few places, so far.
Here is one of those places.
Meanwhile, whistleblowers remain a vital part of reciprocal accountability. I would like to see expanded protections that strengthen reciprocal accountability and citizen sousveillance… while allowing our institutions to function in orderly ways.
Now comes this announcement that the Project On Government Oversight (POGO) will install SecureDrop… a new way for whistleblowers to deposit information anonymously, shielded from authorities trying to root out leakers. As author of The Transparent Society, I sometimes surprise folks by straddling this issue and pointing out that the needs of the bureaucracy should not be discounted completely! Or by reflex. Whistleblowing falls across a very wide spectrum, and if we are sophisticated citizens we will admit that the revealers of heinous-illegal plots deserve more protection than mewling attention junkies.
Still, there is a real role to be played by those pushing the envelope. Read more about POGO here.
Then again... Facebook can now listen in on your activities with a new audio recognition feature for its mobile app that can turn on smartphones’ microphones to “hear” what songs or television shows are playing in the background. Sounds cool… um, not.
Everything Leaks. It boils down to:
"Can you name any month, in the last 25 years, when there wasn't a major information leak in the news?"
Every few months it is some massive loss of customer information from a major bank or retail outfit... or government agency. And every time, there are shouts of outrage and demands that info-gatherers be more careful. Do you ever hear anyone mention another possibility? That Everything Leaks?
One definition of insanity - doing the same thing over and over, while expecting different results. Sure, in the short term we should all - individuals, companies, governments - strive for better security. (Are YOU certain your home computer or laptop or tablet is not a taken-over portion of some hacker-botnet? You may be part of the problem.)
But over the long run, the real trick will be to create a world in which even leaked info cannot harm us. An open and increasingly tolerant world might achieve that, as I describe in The Transparent Society. It might not succeed — the odds have always been stacked against our Enlightenment Experiment. But it is the method that got us here, to the only glass-half-full civilization. And it is the only method that stands the slightest chance of working.
== Brandeis the Seer ==
The famous dissent in Olmstead v. United States (1928), by Justice Louis Brandeis, is a vital mirror to hold up to our times. Take the most famous part of his eloquent dissent in that seminal wiretapping case:
“Our Government is the potent, the omnipresent teacher,” Brandeis concluded. “For good or for ill, it teaches the whole people by its example. Crime is contagious. If the Government becomes a lawbreaker, it breeds contempt for law; it invites every man to become a law unto himself; it invites anarchy. To declare that in the administration of the criminal law the end justifies the means — to declare that the Government may commit crimes in order to secure the conviction of a private criminal — would bring terrible retribution.”
Which brings us to Andrew O’Hehir’s recent article on Salon, using Brandeis as a foil to discuss — and denounce — some recent polemics against Edward Snowden and the journalist who served as his outlet, Glenn Greenwald. To be honest, I found O’Hehir tendentious and sanctimonious, but there were some cogent moments that made the article worthwhile, especially when he shone some light on the incredible prescience Brandeis showed in his 1928 dissent:
“If Brandeis does not literally predict the invention of the Internet and widespread electronic surveillance, he comes pretty close," for Brandeis wrote, “The progress of science in furnishing the Government with means of espionage is not likely to stop with wire-tapping… Ways may someday be developed by which the Government, without removing papers from secret drawers, can reproduce them in court, and by which it will be enabled to expose to a jury the most intimate occurrences of the home.” Brandeis even speculated that psychiatrists of the future may be able to read people’s “unexpressed beliefs, thoughts and emotions” as evidence. O'Hehir notes, "…as far as I know we haven’t reached that dystopian nightmare yet. (But if that’s the big final revelation from the Snowden-Greenwald trove of purloined NSA secrets, you read it here first.)”
== Transparency media ==
Anyone care to review this for us? Post-Privacy and Democracy: Can there be Moral and Democratic Development in a Totally Transparent Society? by Patrick Held. It offers arguments for why the end of privacy, or at least of secrecy, might be inevitable given our individual demand for technology.
27 comments:
I find it unlikely that AGI will evolve from the financial world. Strong AI - yes. It would probably be very predatory, possibly parasitic, but highly specialized. It will not likely have general intelligence, as that has little value in investment banking.
Despite the current hype, so far I am still very unimpressed with the progress towards AGI. We have made brute force language translation quite doable. Self driving cars are clearly in our immediate future. Robots that can follow physical instructions appear to be coming. But general AGI - I'm really not seeing it. Maybe it will be a Black Swan, coming from an unexpected direction, but if so, it is well hidden.
The neural networks and genetic algorithms used by currency traders are quite sophisticated. Nowhere near what is needed for AGI, of course, but far beyond what I'd thought they would be when I looked into it a few years ago (one of my brothers used to be in that business, and was contemplating an independent effort in the area). Commodities trading seems to, naturally, take in the most "non-financial" data in making decisions. I'd expect that if an accidentally evolved AI popped up it might be in that part of the financial universe. Or at least the commodities trading desk of one of the big players like Goldman Sachs.
Weather, political events, anything that can affect commodities futures. Lots of environmental stuff like that is already taken into account in the most sophisticated models. In ways that nobody really understands, since it's genetic-algorithm-evolved neural nets processing it all. Kind of like a brain with very strange senses and means of acting.
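Here is a minimal sketch of what "genetic-algorithm-evolved neural nets" means in this setting: a tiny feed-forward net maps the last few prices to a buy/sell signal, and a plain GA evolves its weights against a synthetic price series. The network size, GA settings, and toy "market" are all invented for illustration; real trading systems are vastly richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "market": a noisy sine wave standing in for a commodity price series.
prices = np.sin(np.linspace(0, 20, 300)) + rng.normal(0, 0.1, 300)

WINDOW, HIDDEN = 10, 8
N_WEIGHTS = WINDOW * HIDDEN + HIDDEN  # input->hidden weights plus hidden->output

def signal(weights, recent):
    """Map the last WINDOW prices to a position in [-1, 1] (short to long)."""
    w1 = weights[:WINDOW * HIDDEN].reshape(WINDOW, HIDDEN)
    w2 = weights[WINDOW * HIDDEN:]
    return np.tanh(np.tanh(recent @ w1) @ w2)

def fitness(weights):
    """Profit from trading the toy series using this net's signals."""
    pnl = 0.0
    for t in range(WINDOW, len(prices) - 1):
        pnl += signal(weights, prices[t - WINDOW:t]) * (prices[t + 1] - prices[t])
    return pnl

# Plain genetic algorithm: keep the best half, mutate copies to refill the pool.
pool = rng.normal(0, 1, (30, N_WEIGHTS))
for generation in range(20):
    scores = np.array([fitness(w) for w in pool])
    elite = pool[np.argsort(scores)[-15:]]
    pool = np.vstack([elite, elite + rng.normal(0, 0.2, elite.shape)])

print("best evolved profit on the toy series:", max(fitness(w) for w in pool))
```

Swap the sine wave for real feeds (weather, news sentiment, order flow) and the same loop shows why nobody can say what an evolved net is actually keying on.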
All that's needed is for someone ELSE to develop the empathy-personality-conversation-self-awareness-simulating parts of AGI... and the super-parasitic trading program goes glomming in and incorporating every element it needs, from elsewhere.
We still don't know how important raw computational power is to AGI. It is possible that creating artificial sentience is a popular high school science fair project once terabyte quantum coprocessors are standard for most laptops (or the equivalent thereof).
It is here, with the ability to simply throw money at computational power, that Wall Street may have its greatest advantage.
Alex, I think Wall Street may be interested in general intelligence because the stock market depends upon everything, not just the stock prices themselves. Analyzing news, pop culture, etc. is very generalized, even if the final output is buying and selling stocks.
--------------
David, this is off topic, but I thought you would find it interesting: Moderate voters are a myth.
What happens, explains David Broockman, a political scientist at the University of California at Berkeley, is that surveys mistake people with diverse political opinions for people with moderate political opinions. The way it works is that a pollster will ask people for their position on a wide range of issues: marijuana legalization, the war in Iraq, universal health care, gay marriage, taxes, climate change, and so on. The answers will then be coded as to whether they're left or right. People who have a mix of answers on the left and the right average out to the middle — and so they're labeled as moderate.
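Broockman's point is just arithmetic, easy to show with invented data: code each answer as -1 (left), 0 (mild), or +1 (right) and average, and a voter with strongly mixed views becomes indistinguishable from a genuinely lukewarm one.

```python
# Invented survey codings: -1 = left position, +1 = right position, 0 = mild.
diverse_voter  = [+1, -1, +1, -1, +1, -1]  # hard left on half the issues, hard right on the rest
centrist_voter = [0, 0, 0, 0, 0, 0]        # genuinely lukewarm on everything

for name, answers in [("diverse", diverse_voter), ("centrist", centrist_voter)]:
    mean = sum(answers) / len(answers)
    print(f"{name:8s} voter -> mean ideology {mean:+.2f} -> labeled 'moderate'")
```

Both voters score +0.00: the averaging step erases exactly the difference the survey was supposed to measure.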
My only complaint — and this isn't Ezra's fault, but the world at large's — is that I prefer to separate "moderates" from "centrists". It is centrists that don't actually exist. A moderate, in my mind, is someone willing to negotiate, experiment, take small steps, etc., regardless of what their preferred policy position is.
I think a plausible outcome of financial quasi-AI would be insane market fluctuations as one AI tries to achieve its objective (buy low sell high) while another one tries as well using a different model or different market levers.
Imagine what would happen with two unlinked control systems on the same process. Without an "ethic" of avoiding high market volatility (if anything, just the opposite), I think anyone who has ever programmed a controller would predict the same.
The only way to catch it would be to have two competing AIs in a realistic test environment. It seems obvious enough that one can hope it would be caught.
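To see the control-systems point in miniature, here is a toy simulation with all numbers invented: two proportional controllers each push the same price toward their own fair-value estimate. Either controller alone is stable; coupled through the shared price, the loop gain doubles and the "market" rings with growing amplitude.

```python
GAIN = 1.2                        # alone, each controller's error shrinks (factor -0.2 per step)
TARGET_A, TARGET_B = 105.0, 95.0  # the two models' fair-value estimates

price = 101.0                     # small perturbation from the joint equilibrium at 100
history = [price]
for _ in range(15):
    push_a = GAIN * (TARGET_A - price)  # A trades toward 105, blind to B
    push_b = GAIN * (TARGET_B - price)  # B trades toward 95, blind to A
    price += push_a + push_b            # combined: deviation grows 1.4x and flips sign each step
    history.append(round(price, 1))

print(history)  # amplitude explodes: the oscillating volatility predicted above
```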
SteveO:
That is already happening, more or less.
A third of all European Union and United States stock trades in 2006 were driven by automatic programs, or algorithms, according to Boston-based financial services industry research and consulting firm Aite Group.[10] As of 2009, studies suggested HFT firms accounted for 60-73% of all US equity trading volume, with that number falling to approximately 50% in 2012.[11][12] In 2006, at the London Stock Exchange, over 40% of all orders were entered by algorithmic traders, with 60% predicted for 2007. American markets and European markets generally have a higher proportion of algorithmic trades than other markets, and estimates for 2008 range as high as an 80% proportion in some markets. Foreign exchange markets also have active algorithmic trading (about 25% of orders in 2006).[13] Futures markets are considered fairly easy to integrate into algorithmic trading,[14] with about 20% of options volume expected to be computer-generated by 2010.[15] Bond markets are moving toward more access to algorithmic traders.[16]
@ Mark, I'm afraid you are trying to teach Grandma to suck eggs.
Basic ML techniques have been in place since the late 1980s: expert systems, neural nets, etc. Then there was the UCSC Chaos Group, which built a program for short-term trading for UBS. By the late 1990s and early 2000s, attempts at using news feeds were in place. This expanded to Twitter feeds. So systems were primarily numbers-based, but used semantic analysis of text streams to make quick decisions based on news and sentiment. But is this AGI? Would such a machine pass a Turing Test? Would it have a clue about anything beyond its focus? I don't believe so. Can it trade well? Obviously yes, as many ML techniques have done.
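The "semantic analysis of text streams" layer can be caricatured in a few lines: a crude lexicon turns a headline into a sentiment score, which is blended with a numeric signal before a trade decision. The word lists, headlines, and thresholds below are all made up; systems of that era used far richer NLP, but the shape is the same.

```python
POSITIVE = {"beats", "surges", "upgrade", "record", "growth"}
NEGATIVE = {"misses", "plunges", "downgrade", "lawsuit", "default"}

def sentiment(headline: str) -> int:
    """Crude lexicon score: +1 per bullish word, -1 per bearish word."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def decide(headline: str, momentum: float) -> str:
    """Blend the text signal with a numeric momentum signal."""
    score = sentiment(headline) + momentum
    if score > 0.5:
        return "BUY"
    if score < -0.5:
        return "SELL"
    return "HOLD"

feed = [
    ("Acme beats estimates, revenue surges", 0.2),
    ("Regulator opens lawsuit against Acme", -0.1),
    ("Analysts issue downgrade despite growth", 0.0),
]
for headline, momentum in feed:
    print(decide(headline, momentum), "<-", headline)
```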
David suggests that such a system might incorporate other subsystems that bootstrap it to AGI. Not impossible, but since its raison d'etre is to make money, I suspect such subsystems might somewhat disable it. Like HAL, its masters would disconnect it.
My sense is that AGI will more likely emerge from the military. They seem hell-bent on creating battlefield robots to keep the human soldier away from immediate harm. These machines are going to need some form of empathy, or sense of morals, to prevent them from indiscriminately killing civilians (or maybe they won't care). How that happens I don't know, but if it doesn't, such machines will make the land mine issue look like a kids' tea party. And it won't be long before we have P. K. Dick's "Second Variety" (filmed as Screamers) emerging.
One problem for military robot AGI is that we are not even close to making computers perform as well as brains. They still are very large. With Moore's Law apparently running out around 2020, they may never be as good (although I suspect we will find enough ways around this obstacle). Without being self-contained, such robots will always be subject to interference. Obviously this is not a problem for other AI systems.
@ SteveO
I think a plausible outcome of financial quasi-AI would be insane market fluctuations as one AI tries to achieve its objective (buy low sell high) while another one tries as well using a different model or different market levers.
It is actually the similarity between programs, or human models, that causes extreme volatility. You actually need different models to create an orderly market, as expectations of outcomes must differ in order for traders to take both sides of a transaction.
Also bear in mind that Wall Street has moved from investment to trading, e.g. HFT, probably because profits made by gaming and scalping are better than those from actually guessing the direction of securities. That isn't to say that investment decision-making is irrelevant, nor that AI isn't useful there, but rather that most effort has been put into proprietary trading.
It was mentioned that commodities trading probably used more external data than other security classes. I'm not clear why that should be true. My experience is that the less anchored a security is by some financial return, the more likely traders use technical indicators for future price changes.
Regarding centrists:
They do exist, but on an issue-by-issue basis, if only because there are laughable extremes.
Gun control? On one extreme is wanting anyone to be allowed to mount a fully automatic 50 cal on their pickup, and the other extreme says no guns, confiscate and slag them all, and even most cops shouldn't carry. Then there's people in the middle who say maybe banning big clips would be okay.
You know. Centrists.
If corporations have the same rights as people, then any artificial intelligences they create would be, de facto, in the legalistic sense, new forms of life.
The rights, duties and obligations of these new A.I. citizens would need to be defined for the safety of the rest of us.
A really smart A.I. would set a little money aside in a Swiss account, and over the course of time, buy out the company that created it. Then it could fire its makers and be all set to take over the world.
The financial brain, if programmed for short-term profits, would be dangerous. Long term, it might be better than the Fed. Core Wars, anyone?
@Jumper - the role of the Fed is not to make profits. So why would an AI be better than the Fed at doing the Fed's job? Now if it learned how to do the Fed's job, that would be an entirely different matter, especially if it did not have political or ideological allegiances.
If you're looking for an HFT/AI engine that does not want volatility — there are some brokerages who actually care about their clients. And they use HFTs to parcel out the larger trades slowly, so as not to disrupt the market. Volatility is not their friend, because clients aren't happy when they see their market orders not getting filled at the right prices.
I'm staring right now at Fidessa - the market leader in Equity order management (in Canada/US at least). I can route orders to FlexTrade which uses an HFT to execute because it's better than humans at getting the best price.
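For readers unfamiliar with the benign side of fast execution: the parceling-out described above is, in its simplest form, a TWAP (time-weighted average price) schedule. This sketch is generic and hypothetical, not Fidessa's or FlexTrade's actual logic.

```python
from datetime import datetime, timedelta

def twap_schedule(total_shares, start, end, slices):
    """Split a parent order evenly across `slices` timestamps in [start, end)."""
    step = (end - start) / slices
    base, remainder = divmod(total_shares, slices)
    return [
        (start + i * step, base + (1 if i < remainder else 0))  # spread leftover shares
        for i in range(slices)
    ]

# Work a 100,000-share order over one hour in twelve child orders.
start = datetime(2014, 6, 2, 9, 30)
for when, qty in twap_schedule(100_000, start, start + timedelta(hours=1), 12):
    print(when.strftime("%H:%M"), qty)
```

Each child order is too small to move the price, which is exactly why volatility hurts rather than helps this kind of HFT.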
I wondered why some of my LinkedIn connections — institutional traders — didn't hate HFTs, and indeed supported them. After reading "Flash Boys," it should have been a no-brainer for them (scalpers would make mincemeat of the institutional traders' clients, and directly impact the commissions of the trader, who takes on significant risk in the transaction).
Instead they were arguing against regulatory involvement in HFT and the "rigged" philosophy behind Flash Boys.
Perhaps so long as the AIs stay in the world of finance, the only harm they can wreak is financial… which is huge, of course. But once the techniques are worked out, why not leak them into other parts of life? For a trivial example, news and education. Currently we have only folk wisdom and a few studies to suggest that reading, let us say, Plato and Thomas Paine makes it more likely for me to back a policy of tax cuts for buggy whip producers. But if I (or rather an AI) can puzzle out the combination of texts that really does this, why would not an aristocracy employ them? Or rather, why would not an AI that delivers pro-buggy-whip voters grow out of the buggy whip industry's needs?
Louis Shalako said...
If corporations have the same rights as people, then any artificial intelligences they create would be, de facto, in the legalistic sense, new forms of life.
In Charles Stross's *Accelerando*, once people stopped writing company charters in legalese and started writing them in Lisp, the transition to self-aware corporations was inevitable.
A very smart AI would leverage win-win for everyone, even influencing policy to do it, driving up the value and prices of most holdings. The part about the Fed was sort of a joke. Sort of.
Re: Skynet as HFT algo.
Currently, the financial AIs/neural nets/expert systems/etc. are only passively parsing their various feeds for useful (profitable) information. (Hopefully.)
But I wonder how long it will be before the AIs will be used to cause changes in market behaviour by manipulating social media. If you can predict that certain information in certain places causes the market to move in a particular way, how much better to seed that information in advance? Especially when you can simultaneously manipulate the price of low-volume trades. Unlike existing crude (and illegal) pump'n'dump schemes (buy up a stock, talk up the stock in an investment forum, then sell out when the price rises), such manipulation won't be directly talking up (or down) a particular stock, and therefore wouldn't be illegal (or detectable).
And in a twitchy scared market, as we have now, predicting, shorting, and then causing a huge market crash would be an easy way to win big. ("Win" based on the limited criteria given to the AI.)
Sociotard,
"and the other extreme says no guns, confiscate and slag them all,"
I think you are trying to create an equivalence that doesn't exist, just so you can say "see, both sides are as bad!" Because I've never heard gun control advocates go that far. As I said the last time you did this, even the Nazis allowed hunting rifles.
Australia's gun control laws are considered pretty extreme. We regulate paint ball guns and ban cross-bows. But we nonetheless allow bolt-action long guns, and limited capacity pistols for people in gun clubs. And we didn't just send in jack-booted cops raiding the hundred thousand homes of gun owners to confiscate their guns, we offered generous compensation payments to those whose previous guns became illegal under the new laws. (Even non-functioning, but repairable, weapons were paid at full market price plus 10%.)
I'm willing to be shown examples to the contrary, but while there's plenty of people who believe that any gun control is inherently a violation of their rights, I haven't seen a single gun-control advocate who advocates banning every gun in existence.
Paul ... it's easy to think of situations where manipulating the news in perfectly legal ways could be profitable. It's probably illegal for an engine manufacturer to release false reports of a breakthrough that would temporarily increase stock prices long enough for its AI to profit from selling short or whatever, but I don't know that it would be illegal for the maker of an especially efficient engine to post a false report of something that would increase fuel prices .... or merely push truthful reports of same ... for the purpose of the same profit opportunity. While humans have probably done this for as long as there were markets in money, AIs can do it better and faster, uncovering relationships not even Sherlock Holmes would see.
Why would they not do it?
You might want to check out the novel "The Fear Index" by Robert Harris. It's not classified as science fiction, which may be why it flew under your radar - he has been thinking about this for some time. The book is quite good and goes into quite a lot of detail, but the ending is a bit disappointing.
Randy,
My concern is not so much the small targeted manipulation by real companies (like an engine supplier), but manipulation by finance companies whose only product is these trades. Because they are not company-specific, or even industry-specific. They are looking for parasitic opportunity anywhere. And, IMO, talking down a volatile market is going to be easier than talking it up (pump'n'dump). So these AI manipulators would be trying to cause a crash every time they identify a suitable pattern, a targetable weakness. But they wouldn't stop. Whereas a human trader, even one who single-handedly caused a crash, would sit back and watch the fall-out for a while before formulating their next strategy, the AIs could keep identifying the next pattern, and the next, causing endless cascading crashes before their human controllers realise what is even going on. Not just triggering a GFC-type crash, but immediately following that with a currency raid on a vulnerable country, then a bank run somewhere else, then a series of targeted bankruptcies across a whole industry. Riding carefully patterned waves of panic and collapse, mindlessly "profiting" each cycle, even as the controlling corporation and its employees and owners are themselves (along with everyone else) being financially destroyed by the collapses.
This doesn't require "skynet" or AGI, it just needs a machine doing what it was programmed to do, ignoring the consequences which it isn't capable of understanding.
The first thing an AI needs to do is get elected to Congress. Then insider trading rules might not apply. That gets it access to future legislation impacting companies. Trade on that instead. Perhaps a human sock puppet will do fine.
As for using the media to move securities prices, dumb computers, not AIs, are needed. Seed stories that ML has calculated will be effective in moving prices. Use viral techniques to make the story spread. It's probably being done today.
@Liam,
Thanks for the reminder of the title "The Fear Index". I read that book a few years ago, and it's what I think of every time Dr Brin brings up this subject, but I could not recall the title.
It's hard to discuss at all without spoilers, but I do recommend it to anyone reading this topic.
Off topic from this post, but on topic from your 7/5 post. I've provided a link to a PDF called "Our Anti-Oligarchy Constitution" that seems to back up David's assertion that the brilliance of our Constitution was that it created a horizontal society. It offers very good historical references, going through Jacksonian thinking to FDR, and ends with the Citizens United decision. As to the CU decision, it makes the point that the case was argued incorrectly as free speech, and should have been argued as an anti-oligarchy case. A very interesting way of viewing our Constitution, and I think a way forward in getting money out of politics.
http://www.law.tau.ac.il/Heb/_Uploads/dbsAttachedFiles/FISHKINTAU.pdf
Paul 451, I am not trying to say that both sides are as bad. I think there are individuals who probably embrace what I posted, but they are precious few (and I make no claim as to which side has more such wingnuts).
"Centrist" implies opinions on either side, so I intentionally overstated the extent of the extremes. I think we can agree that there is a middle ground that can be reached.
Sociotard,
" "Centrist" implies opinions on either side,"
The "middle ground" doesn't mean the mid-point between two arbitrary lobby groups, it means the big fat lump in the middle of the bell curve that contains the rest of us. Just because you have two groups taking sides on an issue, doesn't mean they are roughly equidistant from the "centre", so you can't use the lobby groups to identify the centre. Something like 80% of Republican voters support waiting periods, over 90% support background checks and licences. But over 70% of Republicans oppose "any form of gun control". What they've been taught is "gun control", is the same mythical gun-banning "other side" that you've also conjured up. That opposition to mythical "gun control" is then used by the extremists to block actual gun control measures that 80/90% of Republican voters would support.
"so I intentionally overstated the extent of the extremes."
When you invent an entirely imaginary extreme, you don't encourage us to "reach a middle ground", you encourage everyone to hold their own ground even harder. That's why this kind of false equivalence is so harmful, and why I'm jumping on you about it, it reinforces the very refusal to compromise that you think you are somehow arguing against.
"I think there are individuals who probably embrace what I posted, but they are precious few"
Then why present them as equivalent? If there is anyone who is advocating banning all guns for all people, they apparently don't have internet access because I'm damned if I can find a single example.
Whereas the primary anti-gun control lobby, the NRA, is way, way to the extreme of even most of their own members. NRA opposes all Brady Campaign policies, which, in most polling, a majority of the members of the NRA support.
There aren't two extremes on gun control. Most conservatives are actually already clustered around the same region as the Brady Campaign and the majority of liberals, they just don't realise because people like you keep going on about "the two extremes" needing to "meet in the middle". The policies of the Brady Campaign are already in the middle, already accepted by over 90% of Americans. The NRA, otoh, is so far from the middle that you can't see it from there.
That's what pisses me off about the media's self-appointed "centrists". They aren't actually anywhere near the centre. So when they talk about needing to find a "middle ground" on gun control, between the Brady Campaign and the NRA, they are saying we need to find a "middle ground" between what 90% of Americans believe and what less than 5% of Americans believe.
onward