The Turing Test, as proposed by British mathematician Alan Turing in 1950, holds that if a computer can fool enough humans into thinking it is itself human, it can be considered to have human-level intelligence.
Then, as the dystopian among us would have you believe, they take over.
I had four thoughts on the subject:
1) Bullshit. That's just a chatbot.

As soon as the claim was made, it was challenged. Experts called into question everything from the low number of judges it convinced (10 out of 30) to the fact that the test was undertaken with stipulations on what kind of human this was supposed to be -- specifically, a 13-year-old Ukrainian boy who spoke English as a second language.
I mean, come on, then do the test in Ukrainian.
But the most obvious detraction is that Eugene is just a chatbot.
Human conversation is not a tennis match. It has stops and starts, it has people talking over one another, it builds on the ideas from the other participant. This chatbot, like all chatbots before it, immediately fell into a generation-old chatbot routine:
Those are the four rules on which Eugene acted. If you know the four rules, you can debunk him in about ten seconds. Give it a try.
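To make the "generation-old chatbot routine" concrete, here is a minimal sketch of an ELIZA-style deflection bot. The rules below are hypothetical illustrations of the general tactic -- deflect, dodge, change the subject -- not Eugene's actual four rules, which aren't reproduced here:

```python
import re

# Hypothetical ELIZA-style deflection tactics -- illustrative only,
# NOT the actual rules Eugene Goostman used.
RULES = [
    # (pattern, canned deflection)
    (re.compile(r"\byou\b.*\?", re.IGNORECASE),
     "Why do you ask about me? Let's talk about you instead."),
    (re.compile(r"\b(why|how)\b", re.IGNORECASE),
     "That is a difficult question. What do you think?"),
    (re.compile(r"\bI (feel|think|am)\b", re.IGNORECASE),
     "Interesting! Tell me more about that."),
]
# When nothing matches, change the subject entirely.
FALLBACK = "By the way, did I mention I have a pet guinea pig?"

def reply(message: str) -> str:
    """Return the first matching canned deflection, or change the subject."""
    for pattern, deflection in RULES:
        if pattern.search(message):
            return deflection
    return FALLBACK
```

The debunking tactic the text describes follows directly: ask any substantive follow-up ("What did you just say two messages ago?") and a rule-matcher like this has no memory of the conversation to draw on, so it deflects again and the routine becomes obvious.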
I'm not talking as an expert in automated content creation here. I'm talking as someone mildly versed in the art of conversation. In other words, I've actually talked to strangers before.
2) Automated Insights already did this in a much more controlled study

Eugene tricked 33% of the judges, at least one of whom was credentialed as a judge by being a television star.
A few months ago, content we created at Automated Insights convinced 37% of university students studying media and communications that it was written by a human. The results were published in a 2014 issue of Journalism Practice.
We didn't win anything. And as the designer of those articles and the developer of the algorithms and the artificial intelligence, I would have enjoyed whatever cash and/or adulation within the scientific community that was coming to me. I don't know, do they give scientists cool cars? Isn't that how Doc Brown got the DeLorean in the first place?
3) A chat version of the Turing Test is no longer useful in measuring intelligence

With a generation now versed in online conversational etiquette, judging intelligence via Turing over an online chat is no longer relevant.
Online chats between humans themselves have developed their own tics and routines, all of which I absolutely hate. They are sterile, awkward, boring, slow, and, for lack of a better word, stupid. It's no wonder you could fool a few people -- especially anyone with experience of online chat -- into thinking a chatbot was the equivalent of a human participating in an online chat.
You want to impress me? Do the Turing Test with voice, a communication tool infinitely more complex than online chat. And I won't even get into visual cues, which up the ante exponentially.
4) Machines that can think are science fiction, not science

Even then, even if the machine passes Turing's test, you have a machine that can mimic a human, not one that possesses the equivalent higher intelligence of a human, let alone a machine that can think, let alone a machine that has consciousness, which is really what you're looking for when you're looking for the dystopian Skynet humachine.
I know that's kind of a bold call, but for every moral, technical, physical, and chemical argument that can be made, there is an equally convincing counter-argument. It's still mostly philosophy at this point, but where science stands now and for the foreseeable future is:
People != Machines
Machines != People
So you kind of have to run with that.
But even after all the debunkery went around the Internet, news outlets like CNN were still posting stuff like this:
"For the first time, a computer program passed the Turing Test for artificial intelligence... And that outcome means we need to start grappling with whether machines with artificial intelligence should be considered persons, as far as the law is concerned."
This. This is exactly why people mistrust machines. It has nothing to do with the machines themselves, it has everything to do with the people who can't wait to call machines people. And more often than not, those people aren't really up on how machines work.
Computers will never, ever be human. They may someday hit higher intelligence, but they'll never achieve consciousness or humanity. The sooner we accept this, the sooner we'll stop creating bleak futures where the machines take over, the sooner we'll stop getting caught up in ridiculous arguments about machine personhood, and the more likely we will be to advance the technology to its fullest, thereby getting the most out of it.