Watson is to Jeopardy! what Deep Blue was to chess, but the differences between those two games make Watson the more impressive machine. On Valentine’s Day, Watson competed against two well-known Jeopardy! champions, Ken Jennings and Brad Rutter, with the aim of besting both of them. The easiest way to familiarize yourself with Watson is to watch that episode, which is currently available on YouTube.
Some additional information is available from the IBM website:
In an historic event, in February 2011 IBM’s Watson computer will compete on Jeopardy! against the TV quiz show’s two biggest all-time champions. Watson is a computer running software called Deep QA, developed by IBM Research. While the grand challenge driving the project is to win on Jeopardy!, the broader goal of Watson was to create a new generation of technology that can find answers in unstructured data more effectively than standard search technology.
Watson does a remarkable job of understanding a tricky question and finding the best answer. IBM’s scientists have been quick to say that Watson does not actually think. “The goal is not to model the human brain,” said David Ferrucci, who spent 15 years working at IBM Research on natural language problems and finding answers amid unstructured information. “The goal is to build a computer that can be more effective in understanding and interacting in natural language, but not necessarily the same way humans do it.”
Notice the emphasis on human thinking versus computer thinking: Watson isn’t meant to replicate the human thinking process. In fact, all of its power is dedicated to understanding and interpreting human language, and then to providing answers based on the nature of that language. That may sound like an easy task at first, but keep reading:
The questions on this show are full of subtlety, puns and wordplay—the sorts of things that delight humans but choke computers. “What is The Black Death of a Salesman?” is the correct response to the Jeopardy! clue, “Colorful fourteenth century plague that became a hit play by Arthur Miller.” The only way to get to that answer is to put together pieces of information from various sources, because the exact answer is not likely to be written anywhere.
Watson is made up of 2,880 processor cores, each running DeepQA software specially designed for the task of receiving and answering questions quickly and accurately. According to IBM, it holds the equivalent of one million notebooks’ worth of information (laptop notebooks, that is) and has been fed data from Wikipedia, Project Gutenberg, encyclopedias, and text from commercial sources, among many others. More than 100 algorithms work together to analyze each clue and return plausible responses.
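The DeepQA pipeline itself is proprietary, but its published outline (generate candidate answers, score each against evidence, rank by confidence) can be sketched in miniature. Everything below, from the toy knowledge base to the word-overlap scorer, is an illustrative assumption of mine, not IBM’s actual data or algorithms:

```python
# Minimal sketch of a generate-then-rank QA loop, in the spirit of
# DeepQA's published outline. The knowledge base and scoring function
# are toy stand-ins, not IBM's actual code.

KNOWLEDGE = {
    "The Black Death": "colorful fourteenth century plague that killed millions",
    "Death of a Salesman": "hit play by Arthur Miller about Willy Loman",
    "The Crucible": "play by Arthur Miller about the Salem witch trials",
}

def score(clue: str, passage: str) -> float:
    """Crude evidence score: fraction of clue words found in the passage."""
    clue_words = set(clue.lower().split())
    passage_words = set(passage.lower().split())
    return len(clue_words & passage_words) / len(clue_words)

def rank_candidates(clue: str) -> list[tuple[str, float]]:
    """Score every candidate answer against the clue, best first."""
    ranked = [(title, score(clue, text)) for title, text in KNOWLEDGE.items()]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

clue = "colorful fourteenth century plague that became a hit play by Arthur Miller"
for title, confidence in rank_candidates(clue):
    print(f"{title}: {confidence:.2f}")
```

Tellingly, on the Arthur Miller clue this toy ranker scores “The Black Death” and “Death of a Salesman” as an exact tie: the correct Jeopardy! response fuses the two, which is precisely the “put together pieces of information from various sources” problem the clue illustrates.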
Yet even with all this computing power, Watson doesn’t come close to thinking the way humans do. IBM admits as much, and a close look at how Watson works makes it evident.
When I recognize the lyrics to a song by the Beatles, for instance, I do not cross-reference huge stacks of data and return a list of possible answers ranked according to the chance that they’re right. In fact, when I think about the lyrics to “Eleanor Rigby,” they have a strange, almost ghostly quality in my memory. The lyrics are just there, with no processing necessary to recall them. Even with lyrics I only vaguely remember, the sensation is much the same: I think about the words I already know, then reconstruct the rest a little at a time. But at each moment of the reconstruction, the missing words appear as if by magic. I am not consciously generating a series of possible matches to a song, but remembering them according to some other logic, one I don’t think I can fully explain.
Last week, I posted an article about RoboEarth, a sophisticated networking utility that will help robots “learn” information that other robots already “know.” I was curious how a robot could be said to learn anything at all, and that puzzle reminded me of Meno’s Paradox, which you can read about on Wikipedia or in the Stanford Encyclopedia of Philosophy. The short version can be summed up something like this:
- How can you look for something if you have no knowledge of it whatsoever, not even a report from someone else that it might exist?
- Conversely, how can you search for something you already know? That is, how do you search for something you’ve already found?
- If knowledge must be acquired, then both horns of the dilemma make the search impossible.
- But if knowing something is actually equivalent to recollecting or remembering it, then searching for knowledge becomes possible, though only insofar as we already possess the knowledge.
The theory looks a little absurd at first, especially because Socrates posits it by telling a mythological story. Thinking about Watson, however, makes it look a little less crazy. When I remember lyrics, I recollect them from previous experience; I do not compute the probability of their being correct using algorithms. Remembering, at the very least, is not a matter of computing possibilities.
The theory of recollection can be taken much further than that, but I don’t want to venture too far in that direction, especially since I can already see Kant’s theories of space and time looming on the horizon like hungry intellectual predators. It is enough for me to point out that Watson’s memory, composed of terabytes and terabytes of information, does not function in the same way as mine does, at least so far as I can tell by examining myself in the process of remembering.
All of this leads me to an article I read on the BBC on Saturday morning, which reported the estimated amount of data stored by the human species as a whole:
The study, published in the journal Science, calculates the amount of data stored in the world by 2007 as 295 exabytes.
That is the equivalent of 1.2 billion average hard drives.
The researchers calculated the figure by estimating the amount of data held on 60 technologies, from PCs and DVDs to paper adverts and books.
“If we were to take all that information and store it in books, we could cover the entire area of the US or China in 13 layers of books,” Dr Martin Hilbert of the University of Southern California told the BBC’s Science in Action.
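The BBC’s two figures imply an average drive size, which is easy to back out. The arithmetic below is my own sanity check, using the decimal definitions of the units (1 exabyte = 10^18 bytes); the “average hard drive” of 2007 is inferred, not stated in the article:

```python
# Back-of-the-envelope check of the BBC figures quoted above.
# Decimal units: 1 exabyte = 10**18 bytes, 1 gigabyte = 10**9 bytes.

total_bytes = 295 * 10**18   # 295 exabytes of stored data by 2007
drive_count = 1.2 * 10**9    # 1.2 billion hard drives

avg_drive_gb = total_bytes / drive_count / 10**9
print(f"Implied average drive: {avg_drive_gb:.0f} GB")  # roughly 246 GB
```

A roughly 250 GB average drive is at least plausible for 2007-era hardware, so the two quoted numbers hang together.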
Two hundred ninety-five exabytes is an impressive number, and I take it to include all the information we have published about very sophisticated topics, like artificial intelligence, astrophysics, aeronautics, optics, and string theory. Imagine a computer like Watson having access to that kind of data: it would remain a very sophisticated search engine, but a search engine that could answer any student’s question, so long as the answer was actually out there somewhere.
Still, even with all that data, the Ultra-Watson would pale in comparison to the data-storing and question-answering capabilities of the human individual. Dr. Martin Hilbert, from the University of Southern California, told the BBC that “The Human DNA in one single body can store around 300 times more information than we store in all our technological devices.”
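Taking Dr. Hilbert’s multiplier at face value, the implied figure for a single body’s DNA is easy to compute. The 300× factor is his estimate, not a precise measurement, and the zettabyte conversion is decimal:

```python
# If one body's DNA holds ~300x humanity's 2007 technological storage
# (Hilbert's estimate), the implied capacity in zettabytes:
tech_storage_eb = 295   # exabytes, from the Science study quoted above
dna_multiplier = 300    # Hilbert's "around 300 times" figure

dna_capacity_zb = tech_storage_eb * dna_multiplier / 1000  # 1 ZB = 1000 EB
print(f"Implied DNA capacity: {dna_capacity_zb:.1f} ZB")   # 88.5 ZB
```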
Whether that means the human brain is capable of storing more information than all the collected data on the planet isn’t clear, but I doubt it. What that estimate tells me is that the human individual’s thinking process is governed by a massive amount of data, some of which must be stored in our DNA. I’d love it if a geneticist could explain precisely how that might work, but I think I can make a pretty good guess. However it works, the human brain is capable of innovating and answering questions in a wholly original way.
Watson depends on human ingenuity for its answers: it reports back based on information it has already been supplied. It cannot, however, create answers from incomplete data. Brian Greene, for instance, could not ask Watson about string theory and expect an answer not already on the books. I could, however, ask Brian Greene to speculate in a way that Watson cannot.
But does the difference between human and digital recollection help explain the difference between human and digital thinking?
I think it gets us part of the way there.
Materialists like to maintain that humans are essentially very sophisticated computers, and on the surface that argument is pretty attractive: what we know about DNA, electricity, the brain, and chemistry all fosters that interpretation. On the other hand, our very best technology is only capable of producing a computer that roughly understands the nuances of human language. Processing that kind of information is impressive, but it still falls short of thinking, reasoning, or even improvising. Perhaps more sophisticated software and hardware will help engineers develop a more thoroughly human Watson, but such improvements will not change the essence of Watson’s computing, which depends upon algorithms to function.
So far as I can tell, I do not use such algorithms in my own thinking, nor do I process data in a similar way when writing poetry or performing music. That does not mean such processes aren’t somehow at the heart of my brain’s activity, but I have absolutely no sense of those processes and no reason to believe that I compute when I think about art, music, or even mathematical equations. In fact, when I solve algebra problems, the equations do all the computing; I merely fill in the data according to laws I’ve memorized.
Instead of computing, I think.
My brain does not execute commands; it operates in an open-ended fashion, one that permits me to be wrong not because I have computed incorrectly or dropped a decimal place, but because I have forgotten information or misinterpreted input.
And that gets us back to Meno’s Paradox. I am sometimes tempted to buy Socrates’s mythological explanation wholesale. After all, I believe humans have souls and that our non-material aspects somehow command our material parts. Watson’s material cannot forget that the city of St. Louis, Missouri, was founded on February 15, 1764, unless the material itself fails (i.e., the hard drive crashes or is somehow compromised), but I can easily forget such information. In that case, knowing or learning it is a matter of remembering, not computing.
What is remembering, then? That is, is the material of my brain doing the remembering, or is there another agent at work, something like a consciousness or soul or spirit, call it what you will?
What is certain is that there must be a difference between computing, processing, or executing, an ability possessed by sophisticated machines, and the power of thinking or remembering, which humans possess. Even if our genes, which contain monumental amounts of data, inform the way we process information, they do so on a level that no computer has yet matched. Given that much data, Watson would still fail to learn, because it would still be processing data in a way that humans don’t.
Think about it. When’s the last time you made those kinds of computations while riding a bike in traffic? Doing that in Boston would be suicide.