Mankind has been defeated in Go. This may turn out to be a big deal. I had been an AI skeptic before, but now I am worried that AI might be for real.

Lockhart told me that his “heart really sank” at the news of AlphaGo’s success. Go, he said, was supposed to be the “the one game computers can’t beat humans at. It’s the one.”

New Yorker - In the Age of Google DeepMind, Do the Young Go Prodigies of Asia Have a Future?

For those who aren’t computer people, it should be said that beating humans at Go, a traditional Chinese board game, has been a goal of AI researchers for decades, and it was long thought impossible without solving the problem of pattern matching. Yes, some of this fascination has been because of the orientalism inherent in the project, but the main point has been that, due to its combinatorial complexity, the game could only be cracked by programming a machine to recognize patterns in a more-or-less human-like manner.

Deep Blue was a statement about the inevitability of eventually being able to brute force your way around a difficult problem with the constant wind of Moore’s Law at your back. If Chess is the quintessential European game, Go is the quintessential Asian game. [Ed. note: I wasn’t kidding about orientalism.] Go requires a completely different strategy. Go means wrestling with a problem that is essentially impossible for computers to solve in any traditional way.

Jeff Atwood - Thanks For Ruining Another Game Forever, Computers

Before I speculate any more about What It All Means, let me explain the difference between this and earlier defeats, like those at the hands of Deep Blue and Watson. Deep Blue essentially used a brute-force algorithm to play chess.* It gamed out every possible play N moves down the road, computed a score at that point, then chose the move that was optimal under the assumption that its opponent would also choose an optimal move at each step.† This is nothing like how a human being thinks about chess, and it resulted in computer chess players making moves that look bizarre or erratic to a human observer.

* This description is technically incorrect, but close enough for lay purposes.

† Again, not really, but close enough.
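For the non-computer people, here is roughly what that game-it-all-out strategy looks like in code. This is a toy sketch of minimax search over a hand-written game tree, not anything resembling Deep Blue’s actual program (see the asterisks above):

```python
# A minimal sketch of the minimax idea behind "brute force" game search.
# For simplicity the game tree is written out as nested lists whose leaves
# are position scores; a real engine would generate legal moves instead.

def minimax(node, maximizing=True):
    """Return the best achievable score, assuming the opponent also
    picks the move that is optimal for them at every step."""
    if isinstance(node, (int, float)):      # leaf: an already-scored position
        return node
    children = [minimax(child, not maximizing) for child in node]
    return max(children) if maximizing else min(children)

# Two plies of play: we pick the branch whose worst case is best for us.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree))  # -> 3
```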

Compare this with AlphaGo. Yes, AlphaGo plays like a machine, but, according to knowledgeable commentators, not like an erratic one:

“It’s not a human move. I’ve never seen a human play this move,” he says. “So beautiful.” It’s a word he keeps repeating. Beautiful. Beautiful. Beautiful.

Wired - The Sadness and Beauty of Watching Google’s AI Play Go

The less said about Watson, the better. It was just Siri with an even bigger marketing budget. (“What is Toronto?????” was its [low-confidence] answer to a question in the category U.S. cities.)

It’s also important to understand why past Jetsonian visions haven’t panned out. Rockets in particular have always been limited by the ideal rocket equation, which ties a rocket’s maximum change in velocity to the exhaust velocity of its propellant (and hence to the energy density of its fuel) and to how much of the rocket’s mass is fuel. We knew in the 1960s that the only way to get much past the moon would be with higher-density fuels (or magical warp drives), but we pretended those discoveries were bound to happen because we were still on the steep part of an S-curve. Engineers sketched plans for nuclear-powered rockets to try to reach the next rung of the power ladder, but for various reasons, it never happened. Similarly with air travel: as Maciej Cegłowski noted, “It turned out that very few people needed to cross an ocean in three hours instead of six hours,” so it wasn’t worth the diminishing returns.
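For reference, the “ideal rocket equation” is Tsiolkovsky’s (this gloss is mine, not from the pieces quoted here):

```latex
\Delta v = v_e \ln\frac{m_0}{m_f}
```

Here Δv is the most the rocket can change its velocity, v_e is the exhaust velocity of the propellant, and m_0/m_f is the ratio of the fueled mass to the empty mass. The logarithm is the killer: piling on more fuel buys less and less extra speed, so the only way up the ladder is propellant with a higher exhaust velocity, which is what those nuclear-rocket sketches were chasing.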

For that reason, we can be confident about some things that won’t happen in our cyber-dystopian AI future. To quote Mr. Cegłowski again,

Intel could probably build a 20 GHz processor, just like Boeing can make a Mach 3 airliner. But they won’t. There’s a corollary to Moore’s law, that every time you double the number of transistors, your production costs go up. Every two years, Intel has to build a completely new factory and production line for this stuff. And the industry is turning away from super high performance, because most people don’t need it.

The singularity—the idea that computers will get so smart that the rate of progress will accelerate beyond our ability to predict—is a wrongheaded idea because it is either already trivially true or physically impossible depending on what is meant. It is trivially true that no one can predict the future of technology, and since the 1970s, computer chips have been laid out using computer-aided design, thereby accelerating their own progress. But the idea that some day an AI of IQ 150 will sit and think about it and figure out how to build an AI of IQ 160, which will build an AI of &c. is nonsense. The limiting factor in the creation of better AI is not raw intelligence but insight and experience. We can’t fab tinier chips because we don’t know how, not because we are limited by the IQs of existing engineers. The whole point of the scientific revolution was that you can only learn a limited number of things through a priori thinking. Real breakthroughs take creativity plus a posteriori experiments.

Another thing to be clear about is how bad computers have been, until recently, at processing big data. Noam Chomsky made his bones as a linguist by arguing that behaviorism couldn’t be right because the stimulus children receive is too impoverished (“the poverty of the stimulus”) to account for their linguistic abilities. Kids are able to create sentences that they have never heard before, using their innate grammar as a guide. The reverse of this is Google Translate, which has “hundreds of millions of documents to help decide on the best translation” but cannot reliably produce grammatical output. A six-year-old can produce grammatically correct sentences more easily than Google Translate can with its overabundance of stimulus. Google Translate’s algorithm, then, must seem like weak rocket fuel that peters out before it gets into orbit, never mind Pluto.

But in spite of these limitations, I do think there has been a fundamental shift in AI recently. Recent projects like Deep Dream are modeled more closely on biological neural networks (to the point that no one really knows how they work), and they produce much more recognizable results. Yes, they can be tricked into thinking a panda is a vulture with perverse input, but human cognition is also liable to optical illusions and other gestalt problems. These deep neural networks appear to be capable of creating animal-like intelligence in a way that was not previously possible.
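To make the “perverse input” point concrete, here is a toy version of the trick. This is my own illustration: a throwaway linear model stands in for a real image network, and the panda/vulture labels are just names for the two sides of its decision boundary.

```python
# Illustrative only: the flavor of adversarial input that fools a classifier.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)        # weights of a "trained" linear classifier
x = w / np.linalg.norm(w)          # an input it confidently calls a panda

def label(v):
    return "panda" if w @ v > 0 else "vulture"

# Nudge every input value slightly, in exactly the direction that lowers the
# classifier's score the most (the sign of its gradient).
epsilon = 0.02
x_adv = x - epsilon * np.sign(w)

print(label(x), "->", label(x_adv))  # panda -> vulture
```

Real adversarial examples do the same thing to actual image pixels, with perturbations small enough that a human notices nothing.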

There are still limitations to this research. Essentially, to set the initial values of a deep neural network, the computer must be primed with pre-classified data. So, before you can teach a computer to recognize a picture of a dog, you need a set of N photos sorted into “dog” and “not dog” groups. However, after the neural network has been sufficiently primed, it can recognize dogs with a precision that surpasses humans. Of course, we have had facial recognition technology for several years. The way that works is that the ratios between your eyes, nose, and mouth are more or less unique to each person and stay relatively fixed throughout your life and under various image transformations. In other words, we found a hacky shortcut to figuring this out. What’s scary about neural networks is that we don’t need to find hacky shortcuts anymore; we just shovel in enough pre-classified data, and the deep neural network does the rest.
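Here is what that “shovel in pre-classified data” recipe looks like in miniature. This is my own sketch, with made-up feature vectors standing in for dog photos and scikit-learn’s off-the-shelf neural network standing in for anything genuinely deep:

```python
# A minimal sketch of supervised training on pre-classified data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Pretend each photo has already been reduced to a 64-number feature vector,
# and that a human has pre-sorted them into "dog" (1) and "not dog" (0).
dogs     = rng.normal(loc=+1.0, size=(200, 64))
not_dogs = rng.normal(loc=-1.0, size=(200, 64))
X = np.vstack([dogs, not_dogs])
y = np.array([1] * 200 + [0] * 200)

# The network "primes" itself on the labelled examples...
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)

# ...and can then label a photo it has never seen.
new_photo = rng.normal(loc=+1.0, size=(1, 64))
print("dog" if net.predict(new_photo)[0] == 1 else "not dog")
```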

AlphaGo in particular, while dedicated to beating Go and at no risk of passing the Cartesian test, uses a deep neural network to overcome the problem of scoring in Go. In chess, you can score a position simply by adding up the value of the pieces each side still has on the board. In Go, generally only a Go master can even tell who is winning or losing, because once the loss of a position is guaranteed, a good player will let it go and focus on another area of the board. To get around this, AlphaGo just “watched” a large number of games played by humans until it could recognize a good move as easily as Deep Dream can recognize an eyeball. It’s not just another clever shortcut; it’s the same hack our own brains use.
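To see why chess was the easier case, here is the kind of hand-written scoring rule that works tolerably well for a chess position (a simple material count, my own toy version). Nothing comparably simple exists for a Go position, and that gap is exactly what AlphaGo’s network, trained on human games, fills.

```python
# Hand-written chess evaluation: material count, the classic shortcut.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(board):
    """Board given as a string of piece letters: uppercase for White,
    lowercase for Black. A positive score means White is ahead."""
    score = 0
    for piece in board:
        if piece.upper() in PIECE_VALUES:
            value = PIECE_VALUES[piece.upper()]
            score += value if piece.isupper() else -value
    return score

# White has an extra rook and an extra pawn in this toy position.
print(material_score("KQRRBNPPPP" + "kqrbnppp"))  # -> 6
```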

The implication of AlphaGo is that the dystopia we will be/are living in is the Player Piano future: one lathe operator teaches the machine how to use a lathe, and then all the lathe operators are unemployed once the machines take their jobs. The limitation is that, as with biological neural networks, no one can explain how a deep neural network “really” works, which means it cannot be enhanced beyond its input. So, goodbye progress in lathe design once all the workers are unemployed! The training data basically just “tunes” a number of hidden “knobs” to the right values, but because we don’t know why the values it selects are the correct ones, we have no way to move forward once AI takes over. AlphaGo and Deep Dream appear to indicate that deep neural networks have finally cracked animal intelligence, but creative, human intelligence remains a further, nuclear-rocket-level advance away. As all the animal-like jobs are replaced, the economy will settle into a period of high unemployment and low innovation, which will lead to social unrest followed by unpredictable outcomes.

So, so long, human race. It was cool while it lasted.

Oh-hyoung Kwon, a Korean who helps run a startup incubator in Seoul, later told me that he experienced that same sadness—not because Lee Sedol was a fellow Korean but because he was a fellow human. Kwon even went so far as to say that he is now more aware of the potential for machines to break free from the control of humans, echoing words we’ve long heard from people like Elon Musk and Sam Altman. “There was an inflection point for all human beings,” he said of AlphaGo’s win. “It made us realize that AI is really near us—and realize the dangers of it too.”

Wired - The Sadness and Beauty of Watching Google’s AI Play Go