25 February 2011

Computers and Language: Part 1

Last week, peppered amidst the news of uprisings in the Middle East and Northern Africa, were reports of a more sinister nature. Well, according to the conspiracy theorists among us. For the rest of us, the computer Watson and his romp on Jeopardy was simple, light fun and a small precursor to what the future may hold for us.

This video is of the first half of the first episode airing the matchup of Watson against the two best contestants humanity could offer, Ken Jennings and Brad Rutter. Feel free to watch the whole nine-minute ordeal, but the section that interests me is at the beginning, when IBM has a small documercial (documentary commercial, a word I made up for this sentence) about Watson and the team who designed him.

With Watson and IBM so prominent in the news, articles are cropping up left and right concerning machine intelligence and the day, looming ever closer, when we can refer to a machine as a thinking entity.

So far, we are safe from a machine uprising. In the first of two games, Watson went into Final Jeopardy with a commanding lead over the two human opponents. Under the category of ‘U.S. Cities,’ the clue given somehow managed to trip up Watson and leave even non-Jeopardy players scratching their heads.

‘Its largest airport is named for a World War II hero; its second largest for a World War II battle.’

The correct answer: Chicago.

Watson’s answer: Toronto.

Obviously, and most North Americans would know this, Toronto is not a U.S. city, residing as it does in the grand country known as Canada.

Jennings and Rutter both answered correctly, so why did Watson get the answer so very wrong? Steve Hamm at IBM, through the company's Smarter Planet blog, had an answer for us:

First, the category names on Jeopardy! are tricky. The answers often do not exactly fit the category. Watson, in his training phase,  learned that categories only weakly suggest the kind of answer that is expected, and, therefore, the machine downgrades their significance.  The way the language was parsed provided an advantage for the humans and a disadvantage for Watson, as well. “What US city” wasn’t in the question. If it had been, Watson would have given US cities much more weight as it searched for the answer. Adding to the confusion for Watson, there are cities named Toronto in the United States and the Toronto in Canada has an American League baseball team. It probably picked up those facts from the written material it has digested. Also, the machine didn’t find much evidence to connect either city’s airport to World War II. (Chicago was a very close second on Watson’s list of possible answers.) So this is just one of those situations that’s a snap for a reasonably knowledgeable human but a true brain teaser for the machine.


The problem Watson ran into was partly due to its programming and training--not putting as much weight on the category as was necessary--and partly due to the wording of the question.
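To see how a weak category signal can tip the scales, here is a toy sketch of candidate scoring. This is not Watson's actual pipeline; the weights and evidence numbers are invented purely for illustration.

```python
# Toy illustration (not Watson's real algorithm) of how down-weighting the
# category can flip the ranking of candidate answers.

CATEGORY_WEIGHT = 0.2   # category treated as only a weak hint
CLUE_WEIGHT = 0.8       # the clue text itself dominates

# Made-up evidence scores in [0, 1]:
#   'category' = how well the candidate fits "U.S. Cities"
#   'clue'     = how well its airports seem to match the World War II references
candidates = {
    "Chicago": {"category": 1.0, "clue": 0.60},
    "Toronto": {"category": 0.3, "clue": 0.80},  # U.S. towns named Toronto, AL baseball team, etc.
}

def score(evidence):
    return CATEGORY_WEIGHT * evidence["category"] + CLUE_WEIGHT * evidence["clue"]

for city in sorted(candidates, key=lambda c: score(candidates[c]), reverse=True):
    print(f"{city}: {score(candidates[city]):.2f}")
# With these invented numbers, Toronto (0.70) edges out Chicago (0.68),
# mirroring how a weak category signal can let the wrong answer win.
```

With the category weighted heavily instead, Chicago wins easily--which is roughly the adjustment a human makes without thinking.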

Jeopardy, as difficult as it can be due to the puns and wordplay often involved in the clues, is a static medium. There is always a category, always a clue, and always a response--phrased as a question--that answers the clue and fits within the category. It's not the fluid, dynamic web of language we call a conversation.
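To make that contrast concrete, here is a small sketch of my own (nothing from Watson or IBM): a Jeopardy exchange always fits the same rigid slots, while a conversation does not.

```python
from dataclasses import dataclass

@dataclass
class JeopardyClue:
    category: str   # always present
    clue: str       # always present
    response: str   # always a question that fits both of the above

final = JeopardyClue(
    category="U.S. Cities",
    clue="Its largest airport is named for a World War II hero; "
         "its second largest for a World War II battle.",
    response="What is Chicago?",
)

# A conversation has no fixed slots -- just free-form turns.
conversation = [
    "Did you catch Jeopardy last night?",
    "The Toronto thing? Even my kid knew it was Chicago.",
]
```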

As Dr. Katharine Frase states in the documercial, normal humans communicate more in a style she calls ‘open-question answering.’ There is a small bit of chaos in our interactions with one another.

This idea of chaotic conversation leads down more roads, and to another blog post next week.

For the record, the final totals after the two games left Watson and IBM with $77,147, Jennings with $24,000 and Rutter at the bottom with $21,600. IBM says it will donate the money to charity.
