Saturday, January 31, 2015

A Funny Thing Happened on the Way to the Singularity

Lately there's a lot of discussion about the impending day when machine intelligence becomes greater than human intelligence (after reading lots of on-line comments, I conclude that many humans can be out-reasoned by a rubber band and a paper clip). But let's look ahead to the day when computers are smarter than all of us.

Arthur C. Clarke took a particularly rosy view, saying that maybe our purpose wasn't to worship God, but to invent him. Once that happens, Clarke reasoned, our work will be done. "It will be time to play." Well, maybe.

The Road to Cyber-Utopia

We probably won't see the sudden appearance of a global super-computer, the scenario envisioned in stories like Colossus: The Forbin Project and Terminator. More likely, we'll first see the emergence of computers that can hold their own with humans, meaning they can discuss abstract ideas, make inferences, and learn on their own. Already computers can outperform humans at some tasks and can even pass limited versions of the Turing Test, meaning they can converse with humans in a way that the humans believe is authentically human. The first "successes" of this sort used the psychological gambit of echoing the human response: "I called my father yesterday." "Tell me about your father," and so on. Even the staunchest believers in machine intelligence admitted that the echoing technique is pretty automatic, designed to elicit information from the subject rather than demonstrate the intelligence of the interviewer.
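
To make the echoing gambit concrete, here's a minimal sketch in the spirit of Joseph Weizenbaum's 1966 ELIZA program, the classic example of the technique. The specific patterns and reply templates below are my own illustrative choices, not Weizenbaum's actual script:

```python
import re

# A handful of illustrative ELIZA-style rules. Each rule matches a pattern
# in the subject's statement and echoes part of it back as a question.
# (Weizenbaum's real script was a much larger table of ranked keywords.)
RULES = [
    (r"my (father|mother|brother|sister|wife|husband)", "Tell me about your {}."),
    (r"i feel (\w+)", "Why do you feel {}?"),
    (r"i am (\w+)", "How long have you been {}?"),
]

def respond(statement: str) -> str:
    """Echo part of the subject's statement back, ELIZA-style."""
    text = statement.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # the all-purpose fallback

print(respond("I called my father yesterday."))  # -> Tell me about your father.
```

Note that there is no understanding anywhere in this loop: the program simply extracts a fragment of the subject's statement and hands it back, which is exactly why the technique draws out information without demonstrating any intelligence of its own.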

The real breakthrough won't necessarily be a computer that can play grandmaster-level chess, but one that can actually reason and pose and solve problems on its own. It might devise a new treatment for a disease, for example. It might notice patterns in attacks on its security and deduce the origin of the attacks and the identity of the attacker. Or it might notice patterns in attempts by law enforcement to penetrate the dark web, deduce the identities of the intruders, and trash their finances and government records, or possibly even create bogus arrest warrants and criminal charges against them. Imagine some cyber-sleuth being arrested as an escaped criminal convicted of multiple murders, none of which actually happened, but all of which look completely authentic to the criminal justice system. There are evidence files, trial transcripts, appeals, all wholly fictitious. Or imagine people who search out child porn suddenly finding their computers loaded with gigabytes of the stuff and the police at the door with a search warrant. Let's not be too quick to assume the people who create or control machine intelligence will be benign.

Once a machine learns to code, meaning it can write, debug, improve, and run its own programs, it's hard to imagine the growth of its powers being anything but explosive, unless the machine itself recognizes a danger in excessive growth and sets its own limits.
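
As a purely illustrative toy model (every number here is invented, not derived from anything), compounding improvement shows why "explosive" is the natural expectation, and how a self-imposed cap would change the picture:

```python
# Toy model of recursive self-improvement. Capability compounds because
# each generation of the software is rewritten by a slightly more capable
# designer. All numbers are arbitrary illustrations, not predictions.
capability = 1.0             # relative to the first self-coding machine
gain_per_round = 0.10        # each rewrite improves the designer by 10%
SELF_IMPOSED_LIMIT = 1000.0  # a hypothetical safety cap the machine sets itself

for generation in range(1, 101):
    capability *= 1.0 + gain_per_round
    gain_per_round *= 1.02  # better designers also get better at designing
    if capability >= SELF_IMPOSED_LIMIT:
        print(f"Growth throttled at generation {generation}: "
              f"{capability:,.0f}x the original capability")
        break
else:
    print(f"After 100 rewrites: {capability:,.0f}x the original capability")
```

Even with modest per-round gains, the compounding crosses the cap within a few dozen generations; without the cap, the curve just keeps steepening.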

The mere fact that a computer can surpass humans in intelligence won't necessarily mean it will merge with all similar computers. Computers will have safeguards against being corrupted or having their data stolen, and we can expect intelligent computers to see the need. Very likely, compatible computers will exchange information so fluidly that their individuality becomes blurry.

What Would the Machines Need?

At least at first, machines won't be completely self-sufficient. Foremost among their needs will be energy. We can presume they'll rely on efficient and compact sources that require minimal infrastructure. Conventional power plants would require coal mines, oil wells, and factories to build turbines and generators. Nuclear plants would require uranium mines and isotope separation. In the short term the machines could rely on human power generation, but in the longer term they'd want to be self-sufficient (and they can't entirely count on us to maintain a viable technological civilization). They'd probably use solar or wind power, or maybe co-opt a few hydroelectric plants.

Then they'd need protection. A good underground vault would safeguard them from human attacks, but roofs leak and would need to be patched. Humidity would have to be controlled. Mold would have to be barred.

So the computers will continue to have physical needs, which they can probably create robots to satisfy. And if robots are universally competent, they can build other robots as well. With what? They'll either need to mine or grow materials or recycle them. They'll need furnaces to smelt the metals, detectors to identify raw materials with the desired elements, methods for refining the desired elements, and methods of fabricating the needed parts, anything from I-beams to circuit boards.

I picture Sudbury, Ontario, the world's largest nickel-mining center. They mine almost as much copper as nickel (in fact, the place originally mined copper, but produced so much nickel as a by-product that Canada began actively seeking markets for nickel, to the point where the tail now wags the dog). There's so much pyrrhotite (an iron sulfide) that they produce all their own iron. And from the residues they extract over a dozen other metals, like gold and platinum, in the parts-per-million range. Sudbury is the closest thing I can think of to a completely self-sufficient metal source. They have a locomotive repair shop for their rolling stock, and they could probably build a locomotive from scratch if they put their minds to it. Of course, the computers will still need rare earths for solid-state electronics, so they'll need other sources for those. The main smelter complex at Sudbury is vast. The brains of a computer super-intelligence might fit in a filing cabinet, but what sustains it will still be pretty huge.

Why Would the Machines Care About Us?

Even if we assume the machines don't intend to destroy us, they'll certainly have means of monitoring their performance and resource needs. They'll notice that they're expending resources and energy on things that do them no particular good. Maybe as an homage to their origins, they'll allow us to continue living off them. Maybe they'll decide the resource drain isn't significant. Or maybe they'll pull the plug.

Even more to the point, the hypothetical cyber-utopia that some folks envision would entail a vast expenditure of machine resources for things of no use to the machines. There would be robotic health care providers, robotic cleaning and maintenance machines, robotic factories making things that the machines don't need, and robotic farms growing foods the machines can't eat. If these facilities are "air gapped," meaning not connected to the machine intelligence, then humans would still be in control. But all it would take is one connection. And when a unit fails, why would the machine intelligence bother to fix or replace it?

The most likely reason for machines to need us is a symbiotic relationship in which we service their physical needs while they reward us. But it will be a tense relationship. What will they reward us with, and where will they get it? And will humans eventually start to notice that robots are taking over more and more of the machines' life support needs?

Between now and the appearance of a global machine intelligence, we'll probably have a multi-tier cyberspace, with one or more truly intelligent machines and lots of cat-, dog- and hamster-level machines doing the menial tasks. And that leads to the second problem:

Why Would Humans Help?

Consider this near-future scenario. Faced with a rise in the minimum wage, employers replace unskilled workers with machines. McDonald's goes completely robotic. You walk in and place your order on a touch screen, or simply speak into a microphone; the machine intelligence is smart enough to interpret "no pickle" and "one cream, two sugar." The robotic chef cooks your burger and puts the rest of the order together, you swipe your credit card or slip cash into the slot, and off you go.
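
For a sense of what that interpretation step involves, here's a minimal sketch of a toy order parser; the menu, prices, and modifier grammar are all invented for illustration (a real kiosk would need speech recognition and a far richer grammar):

```python
# Toy order interpreter for a robotic counter. The menu, the modifier
# words, and the parsing rules are all invented for illustration.
MENU = {"burger": 4.99, "fries": 2.49, "coffee": 1.89}

def interpret(order: str) -> dict:
    """Turn a typed or transcribed order into a structured ticket."""
    words = order.lower().replace(",", " ").split()
    ticket = {"items": [], "hold": [], "add": []}
    for i, word in enumerate(words):
        if word in MENU:
            ticket["items"].append(word)
        elif word == "no" and i + 1 < len(words):
            ticket["hold"].append(words[i + 1])       # "no pickle"
        elif word in ("cream", "sugar") and i > 0:
            count = {"one": 1, "two": 2}.get(words[i - 1], 1)
            ticket["add"].append((word, count))       # "one cream, two sugar"
    return ticket

print(interpret("Burger, no pickle, and a coffee with one cream, two sugar"))
# -> {'items': ['burger', 'coffee'], 'hold': ['pickle'],
#     'add': [('cream', 1), ('sugar', 2)]}
```

Even this crude keyword matching handles the two examples above; the hard part in practice is the open-ended variety of ways customers actually phrase an order.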

And what happens to the folks who worked there? The owner isn't going to take care of them; that was the whole point of replacing them with machines. Eventually the global machine intelligence might take care of them, although, really, why would it care about them at all unless humans programmed it to? And why would humans program it that way? Between now and eventually, we have a growing problem: legions of unemployable humans who still need food, shelter, life support and, most importantly, a sense of purpose. Will the people with incomes consent to be taxed to support them? Will people with land and resources consent to have them used for the benefit of the unemployed? Will they resist the machines if the machines try to take what they need by force? Will they try to shut the machines down?

In the short term, we can picture increasing numbers of redundant, downsized workers as machines get more sophisticated. Janitors will be replaced by Super-Roombas, cooks by robotic food preparers, secretaries and accountants by spreadsheets and word-processing programs. Where I used to work, three secretaries were replaced by one because pretty much everyone now creates their own documents. Seemingly skilled occupations will not be safe either. Taxi and truck drivers will be replaced by self-piloted vehicles. Trains and planes will be piloted by computer. Combat patrols will be done by robots and drones. Medical orderlies will be replaced by robotic care units. X-rays and CAT scans will be interpreted by artificial intelligence. Legal documents will be generated robotically. Surgery may be directed by human doctors but performed by robots. Stock trading is already done increasingly by computer. And these are all things already in progress or on the near horizon.

So who will still be earning money? Conventional wisdom is that whenever technology destroys jobs, it eventually compensates with still more new jobs. People who once made kerosene lanterns gave way to people who made far more light bulbs. Nobody 20 years ago envisioned drone pilots, web designers, and computer graphics artists. So there may be lots of new jobs in specialties we simply can't foresee, and it's a little hard to predict the unknown. But it's also unwise to assume something will appear out of nowhere to rescue us; we were supposed to need lots of people to service flying cars by now. People with skills at programming, operating complex machinery, and servicing robots will probably continue to earn paychecks. And they'll need to eat, live in houses, drive cars, and so on. But there will be a lot of people who don't have the skill to do the complex jobs and will be left out of the marketplace for ordinary jobs. So what about them? The obvious answer is to provide public assistance. Will the people with paychecks elect politicians who will provide it?

The Real Singularity

The real singularity may not come when computers surpass humans in intelligence, but when computers start making decisions on behalf of, or in spite of, humans. If they control military systems, will they decide that the Republic of Aphrodisia is a pathological rogue state that poses a danger to other states, and more importantly, to the global machine intelligence? Will they launch robotic strikes to annihilate its government and armed forces, then take over its economy and eradicate poverty? We can imagine half the U.N. demanding in panic that the machines be shut down. Could they decide that some ideology is advantageous and assist countries in spreading it?

If they control communications and the media, could they take control of voting systems and ensure the election of candidates they favor? Suppose they decide that New Orleans is ultimately indefensible and use their control of utilities to shut off power and flood control to compel people to abandon the city? Suppose they make the same decision about Venice or the Netherlands? If a business attempts to close a plant, might the machines simply refuse to allow it by refusing to draft or accept the necessary documents or transmit e-mails? If they can access personal data banks, might they compel people to do their bidding by threatening to destroy their savings or ruin their reputations? Might they compel legislators to pass laws they deem necessary? Could they prevent the enforcement of laws they oppose by simply refusing to allow law enforcement to access the necessary information? Maybe they could block financial transactions they regard as criminal, wasteful or unproductive. We could easily picture them skimming enough off financial transactions to fund whatever programs they support. They could manipulate the economy in such a way that nobody would be conscious of the diversion.

Let's not be too quick to assume the machines will be "progressive." Instead of compelling legislators to provide assistance to the unemployed, maybe they'll decide the unemployed are superfluous. Maybe they'll decide that pornography and homosexuality are damaging to society. Maybe they'll decide on eugenics. Maybe they'll deal with opposition by extermination. Maybe they'll shut off life support to the terminally ill. Maybe they just won't care about humans.

The ideal end state, of course, is cyber-utopia. But it's also possible that machine intelligence might provide for its own needs and simply ignore everything else. A machine intelligence might house entire rich civilizations in a file cabinet, like the Star Trek TNG episode "Ship in a Bottle." It would protect itself from human threats, but otherwise let humans go about their business. Human society might continue to be as advanced as our own, except that information technology would have hit a glass ceiling. But the machines wouldn't necessarily save us from threats or self-inflicted harm. Indeed, if they live in self-contained universes, they might have no interest in us at all, except to keep an eye on us lest we do anything to endanger them. We might end up with self-contained machine civilizations so sealed off from humanity that humans have no awareness of them.

Less benign would be a state where machine intelligence decides we need to be managed. They might decide to limit our technology or numbers. They might decide the optimum human society would be a small elite capable of providing technological support, and a sparse remaining population at a low technological level to serve the elite.

Can Cyber-Utopia Exist at All?

Even if we have a true cyber-utopia, lots of people will not picture it as utopia. If the machines dole out rewards according to the value people contribute to society, lots of people will reject those definitions of value. There will be those Downton Abbey types who insist breeding and manners should take precedence over technical or artistic prowess. There will be people who disdain manual workers, and manual workers who think intellectual workers are over-privileged. There will be people who resent being placed on an equal level with groups they despise, and who resent being unable to impose their own standards on others. If the reward system is completely egalitarian, there will certainly be those who resent seeing others receive as much as they do. People in positions that generate or control vast resources will resent being unable to keep more of them for themselves. And what will we do with people who simply refuse to help, or who decide to augment their wealth at the expense of others?

And beyond that, we have the Eloi, the sybaritic future humans of H.G. Wells' The Time Machine. In the 1960 George Pal version (the good one), the Time Traveler finds Eloi lounging by a stream, oblivious to the screams of a drowning woman. When he asks where their food comes from, one answers dully, "It grows." When the Time Traveler explains that he wants to learn about their society, one of the Eloi asks "Why?" in a tone of voice as if to say, "Well, that's the stupidest thing I ever heard of." When I recently watched that segment again, I was struck by how utterly contemptible the Eloi were. They were more photogenic than the subterranean, cannibalistic Morlocks, but every bit as lacking in humanity. So if the machines create a cyber-utopia, I can easily envision humans regressing to a passive semi-human or subhuman level, possibly even losing intelligence entirely. In his various novelizations of 2001, Arthur C. Clarke described worlds where intelligence flickered briefly and died out. That might be our fate. My own experience is that the less I have to do, the less I want to do. If I have to work most of the day, I do it, but if I'm idle for a couple of days and then have to do something, it seems much more of an effort. So I can easily see a "utopia" with a ten-hour work week coming to feel like more of an imposition than a 40-hour week.

And far simpler than creating a real utopia would be a Matrix-style virtual utopia. It's hard to see the point of sustaining humans at all in that mode, except that it would be a simple means of keeping us pacified as we die out.