Long before Steven Spielberg cast Haley Joel Osment as Creepy Jr. in A.I., and before guys started marrying their video game girlfriends, way back in the mid-1950s, a small group of physicists and mathematicians coined the term “Artificial Intelligence.” The group stayed small for a few more years, but the dawn of the computer age has seen the field blow up exponentially.
Ray Solomonoff was a member of that small group: an early pioneer in artificial intelligence whose work showed that to achieve artificial intelligence, we must first understand intelligence on a systematic level. Solomonoff died on December 7, 2009, at the age of 83, of a ruptured brain aneurysm.
To understand the impact of Solomonoff, you have to go back to Alan Turing.
Alan Turing committed suicide by eating an apple dipped in cyanide. A dramatic end to a truly amazing life. Turing is hailed not only by the physics and computer science communities, but also by philosophers and the gay community. Turing was a homosexual, which was a criminal offense in Britain in the early 1950s. Despite having done a tremendous amount of work for the British government during World War II, he was forced onto hormone treatment that some doctors at the time believed cured homosexual desires.
The codebreaking machines Turing designed during WWII cracked German ciphers, and their contribution had an immeasurable impact on the success of the Allies. His theoretical Turing machine, meanwhile, was really the blueprint for the modern computer. While Turing did not invent the concept of an algorithm, he did formalize it, and put into practice the idea that machines could “think.” An algorithm, very simply, is a sort of road map that a machine can follow: “If you come to a fork in the road, then go left.” Except the maps become very intricate very quickly, and they can lead the computer to the solution of a mathematical problem.
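To make the road-map idea concrete, here is a minimal sketch (in Python, and not anything Turing himself wrote): a binary search that finds a number in a sorted list by deciding, at every fork, whether to go left or right.

```python
def find(sorted_numbers, target):
    """A tiny 'road map' of instructions (binary search): at every fork,
    compare and go left or right until the answer is reached."""
    low, high = 0, len(sorted_numbers) - 1
    while low <= high:
        middle = (low + high) // 2
        if sorted_numbers[middle] == target:
            return middle                  # arrived at the answer
        elif sorted_numbers[middle] < target:
            low = middle + 1               # target is bigger: go right
        else:
            high = middle - 1              # target is smaller: go left
    return -1                              # the map led nowhere

print(find([2, 3, 5, 7, 11, 13], 7))       # prints 3, the position of 7
```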
Turing also came up with the theoretical Turing Test, which he believed could tell us when we had actually achieved true artificial intelligence. If a human being speaking to the computer cannot tell the difference between the computer and a real human, then you have artificial intelligence (Terminator Salvation, anyone?).
But I digress.
It goes without saying that you need more complex algorithms to solve more complex problems. Turing machines could be programmed with different algorithms that could either do a great deal of work, or potentially solve problems that humans had not been able to solve. However, Turing and some of his collaborators showed that some algorithms, when asked to solve certain problems, could end up running in infinite loops: the algorithm couldn’t find an answer, and would keep circling around itself trying to find one. Algorithms don’t need to explain why they arrive at an answer, so why couldn’t mathematicians simply ask an algorithm to tell them whether unproven problems were true or false? Because, as Turing showed, algorithms have limits: there is no general way to decide in advance whether a given algorithm will ever finish.
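A small taste of this, offered as an illustration rather than Turing’s actual argument: the loop below follows the famous Collatz (3n+1) rule, and to this day nobody has proven that it terminates for every starting number.

```python
def collatz_steps(n: int) -> int:
    """Follow the 3n+1 rule until we (hopefully) reach 1. Nobody has
    proven this loop finishes for every starting n, which hints at why
    'does this algorithm ever halt?' is such a hard question in general."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))   # 111 steps before it finally settles at 1
```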
To solve very difficult problems, computer scientists, physicists and mathematicians began to incorporate probability into algorithms. If you wanted an algorithm to tell you what the outcome of a dice roll would be, you couldn’t make it predict the future, but you could give it enough information to work out the probability of each outcome.
Solomonoff’s great contribution was Algorithmic Probability. Prior to the 1960s, probability was calculated simply as the number of positive outcomes versus the number of trials (the probability of getting heads when you flip a coin is the ratio of the number of times you flipped heads to the total number of times you flipped).
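That older, frequency-based recipe is easy to sketch. The toy simulation below (assuming a fair coin, modelled here as a 50/50 random draw) just counts the heads and divides by the number of flips.

```python
import random

def estimate_heads_probability(flips: int) -> float:
    """Old-school frequency estimate: count the heads, divide by the
    total number of trials."""
    heads = sum(random.random() < 0.5 for _ in range(flips))
    return heads / flips

print(estimate_heads_probability(10_000))   # hovers around 0.5
```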
Solomonoff’s theory of probability requires that we look at the world like computers: in a series of 0’s and 1’s. Everything that comes through a computer, from pictures of cats to mathematical equations, is represented in the core of that computer as a series of 0’s and 1’s. The order of the series tells the computer what to do.
This isn’t a bad way to look at the world. It’s very definite. It gives you a unique way to describe something that is happening.
So Solomonoff decided that probability in algorithms could better be described in terms of how easy it is to describe a series of 0’s and 1’s. For example, it’s rather easy to describe the series 0101010101: it is five repeats of ’01’. However, the series 011110010010111 has no obvious pattern; the easiest way to describe it is just to write it out in full. The less complex the description of a series is, the higher its probability. Furthermore, the probability can be updated based on the input data given to a computer, so it can get better at predicting a sequence of numbers the more it has learned.
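As a rough sketch of that idea (using off-the-shelf compression as a stand-in for “how hard something is to describe,” which is only a crude proxy for Solomonoff’s actual universal prior), a patterned string compresses into a shorter description than a patternless one, and the shorter description gets exponentially more weight:

```python
import random
import zlib

def description_length(bits: str) -> int:
    """Bytes needed to store a compressed encoding of the bit string,
    used here as a rough stand-in for 'how hard it is to describe'."""
    return len(zlib.compress(bits.encode()))

def crude_probability(bits: str) -> float:
    """Shorter descriptions get exponentially more weight (2 ** -length),
    echoing the spirit of algorithmic probability."""
    return 2.0 ** (-description_length(bits))

patterned = "01" * 50   # easy to describe: "repeat '01' fifty times"
patternless = "".join(random.choice("01") for _ in range(100))

print(description_length(patterned), crude_probability(patterned))
print(description_length(patternless), crude_probability(patternless))
```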
Solomonoff’s theory applies to artificial intelligence because it provides a model for learning. Once again, in order to create artificial intelligence we first need to understand what real intelligence is. As humans grow and progress, we acquire experience and turn it into knowledge. The more input data we are given, the better we become at dealing with new situations. In a sense we become better at predicting the next sequence of events: we have previous information to help us predict what the next series of events will be.
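Here is a deliberately crude caricature of that kind of learning (nothing like Solomonoff’s full theory, just frequency counting): the predictor below guesses the next symbol from what has followed the current symbol in the data seen so far, and its guesses sharpen as the history grows.

```python
from collections import Counter

def predict_next(history: str) -> dict:
    """Estimate probabilities for the next symbol from what has followed
    the most recent symbol so far (a crude learning rule, not Solomonoff's
    universal prior)."""
    context = history[-1]
    followers = Counter(history[i + 1]
                        for i in range(len(history) - 1)
                        if history[i] == context)
    total = sum(followers.values())
    return {symbol: count / total for symbol, count in followers.items()}

# With more data the prediction sharpens: after seeing '0101010',
# the model is confident that a '1' follows a '0'.
print(predict_next("0101010"))   # {'1': 1.0}
```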
Unfortunately, Solomonoff’s theory of Algorithmic Probability is incomputable: no program can calculate it exactly, and only approximations of it can be used in computers. But even these approximations are better than the old methods.
Some interesting thoughts on his passing from Foresight.org and The New York Times.