28 April 2011

Computers and Language: Part 2

I had to do a bit of dusting on this blog; it's been unused so long. My Lenten fast from the internet didn't lend itself to writing, which i didn't take into account when i promised a sequel to this entry. Well, now that the cobwebs are gone and the countertops have been wiped clean, let me get to the point.

Every year since 1991, Dr. Hugh Loebner has offered up a prize of $100,000 to whoever can create a computer program that can pass the Turing Test and pass itself off as a human. The test was proposed by Alan Turing in a 1950 paper that questioned how we understand the idea of machine intelligence.

The basic premise of the test pits humans and machines against one another, but not in the way we've seen on film. Neither Schwarzenegger nor Will Smith has any part in this test. The way Turing proposed to test machines was to give a judge five minutes to converse with both a computer and a human, then determine which was which. If computers ever got more than 30% of the vote, Turing said, we could consider them thinking machines.

In 2009, journalist Brian Christian took part in the test, put on by Loebner. The previous year had been a dismal one for the Confederates (the humans attempting to prove their humanness): one more vote for a computer would have put the machines over the 30% mark. So Christian's goal was to be the best humanity could offer against the computer-intelligence onslaught.

Two awards are given at the end of the contest: Most Human Computer and Most Human Human. After facing 12 judges in five-minute chat rounds, Christian was handed the certificate naming him the Most Human Human of 2009. Where 2008 was a close call, 2009 was hardly a race: not a single computer was mistaken for a human by any judge.

As i said in Part 1, there is a small bit of chaos in our interactions with one another. In natural conversation, our minds aren't mapping out the next five things we'll say like a verbal chess match, but that's how we design computers. IBM's Jeopardy! master, Watson, would be a horrible conversationalist because it is programmed only to respond in a specific and regimented fashion.

On the flip side, some of the conversations from Turing Tests in years past are quite chaotic.

Judge: it looks like i’ve got to go, Catherine. its been nice talking with you
Judge: do you need water or something before the 5 minute rounds?
Computer: OK, yes on balance … Time to get off this one I think and onto something more down to earth!
Judge: like what?
Computer: Almost certainly! let’s move on
Computer: I suppose it depends on where you’re coming from, but as the song goes in My Fair Lady, “Why can’t a woman be more like a man?”

The problem becomes: what measure of chaos do we use on a daily basis? When we converse with one another, our personalities mix together like a margarita, creating a sometimes tasty, sometimes disgusting conversational flow. Topics shift like the tide, ebbing and flowing with ease, revealing previously unknown beaches, covering up that which was recently exposed...

Actually, i've lost where this is going. So, let's regroup.

Oxford philosopher John Lucas said that if we fail and allow our machines to pass the Turing Test by appearing more human than we do, it will be “not because machines are so intelligent, but because humans, many of them at least, are so wooden.”

The real question Turing's test raises is not how we can program computers to be more like us, but what it means for us once the computers have become more human. How can we keep pushing against ourselves, against humanity, to become the best versions of ourselves?

To bring that down to something manageable, let's go back to Brian Christian's article: “A look at the transcripts of Turing Tests past is, frankly, a sobering tour of the various ways in which we demur, dodge the question, lighten the mood, change the subject, distract, burn time: what shouldn’t pass for real conversation at the Turing Test probably shouldn’t be allowed to pass for real conversation in everyday life either.”