Although I'm not prepared to move up my prediction of a computer passing the Turing test by 2029, the progress that has been achieved in systems like Watson should give anyone substantial confidence that the advent of Turing-level AI is close at hand. If one were to create a version of Watson that was optimized for the Turing test, it would probably come pretty close.
here's a toast to Alan Turing born in harsher, darker times who thought outside the container and loved outside the lines and so the code-breaker was broken and we're sorry yes now the s-word has been spoken the official conscience woken - very carefully scripted but at least it's not encrypted - and the story does suggest a part 2 to the Turing Test: 1. can machines behave like humans? 2. can we?
Turing presented his new offering in the form of a thought experiment, based on a popular Victorian parlor game. A man and a woman hide, and a judge is asked to determine which is which by relying only on the texts of notes passed back and forth. Turing replaced the woman with a computer. Can the judge tell which is the man? If not, is the computer conscious? Intelligent? Does it deserve equal rights? It's impossible for us to know what role the torture Turing was enduring at the time played in his formulation of the test. But it is undeniable that one of the key figures in the defeat of fascism was destroyed, by our side, after the war, because he was gay. No wonder his imagination pondered the rights of strange creatures.
Alan Turing is so important to me and to the world, and his story is so important to be told, so it was a big thing to take up, and I was a little petrified. Like, who am I to write the Alan Turing story? He's one of the great geniuses of the 20th century - who was horribly persecuted for being gay - and I'm a kid from Chicago.
Like Alan Turing, Zuse was educated in a system that focused on a child's emotional and philosophical life as well as his intellectual life, and at the end of school, like Turing, Zuse found himself to be something of an outsider; to the disappointment of his very conventional parents, he no longer believed in God or religion. (Jane Smiley (2010). The Man Who Invented the Computer)
Turing attended Wittgenstein's lectures on the philosophy of mathematics in Cambridge in 1939 and disagreed strongly with a line of argument that Wittgenstein was pursuing which wanted to allow contradictions to exist in mathematical systems. Wittgenstein argues that he can see why people don't like contradictions outside of mathematics but cannot see what harm they do inside mathematics. Turing is exasperated and points out that such contradictions inside mathematics will lead to disasters outside mathematics: bridges will fall down. Only if there are no applications will the consequences of contradictions be innocuous. Turing eventually gave up attending these lectures. His despair is understandable. The inclusion of just one contradiction (like 0 = 1) in an axiomatic system allows any statement about the objects in the system to be proved true (and also proved false). When Bertrand Russell pointed this out in a lecture, he was once challenged by a heckler demanding that he show how the questioner could be proved to be the Pope if 2 + 2 = 5. Russell replied immediately that 'if twice 2 is 5, then 4 is 5, subtract 3; then 1 = 2. But you and the Pope are 2; therefore you and the Pope are 1'! A contradictory statement is the ultimate Trojan horse.
John D. Barrow
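Russell's retort is a classic instance of the principle of explosion (ex falso quodlibet): once a single contradiction is admitted, any statement can be derived. A sketch of his steps in standard notation (the set-cardinality reading of "you and the Pope are 2" is one plausible reconstruction of the joke):

```latex
\begin{align*}
2 \times 2 = 5 &\implies 4 = 5 && \text{(since } 2 \times 2 = 4\text{)}\\
&\implies 1 = 2 && \text{(subtract 3 from both sides)}\\
&\implies |\{\text{you},\ \text{the Pope}\}| = 1 && \text{(a 2-element set is a 1-element set)}\\
&\implies \text{you are the Pope.}
\end{align*}
```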
"In attempting to construct such (artificially intelligent) machines we should not be irreverently usurping His (God's) power of creating souls, any more than we are in the procreation of children," Turing had advised. "Rather we are, in either case, instruments of His will providing mansions for the souls that He creates."
With the increasingly important role of intelligent machines in all phases of our lives--military, medical, economic and financial, political--it is odd to keep reading articles with titles such as Whatever Happened to Artificial Intelligence? This is a phenomenon that Turing had predicted: that machine intelligence would become so pervasive, so comfortable, and so well integrated into our information-based economy that people would fail even to notice it.
In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human. It is fitting that such an important landmark has been reached at the Royal Society in London, the home of British Science and the scene of many great advances in human understanding over the centuries. This milestone will go down in history as one of the most exciting.
So this is where all the vapid talk about the 'soul' of the universe is actually headed. Once the hard-won principles of reason and science have been discredited, the world will not pass into the hands of credulous herbivores who keep crystals by their sides and swoon over the poems of Khalil Gibran. The 'vacuum' will be invaded instead by determined fundamentalists of every stripe who already know the truth by means of revelation and who actually seek real and serious power in the here and now. One thinks of the painstaking, cloud-dispelling labor of British scientists from Isaac Newton to Joseph Priestley to Charles Darwin to Ernest Rutherford to Alan Turing and Francis Crick, much of it built upon the shoulders of Galileo and Copernicus, only to see it casually slandered by a moral and intellectual weakling from the usurping House of Hanover. An awful embarrassment awaits the British if they do not declare for a republic based on verifiable laws and principles, both political and scientific.
Information, defined intuitively and informally, might be something like 'uncertainty's antidote.' This turns out also to be the formal definition: the amount of information comes from the amount by which something reduces uncertainty... The higher the [information] entropy, the more information there is. It turns out to be a value capable of measuring a startling array of things: from the flip of a coin to a telephone call, to a Joyce novel, to a first date, to last words, to a Turing test... Entropy suggests that we gain the most insight on a question when we take it to the friend, colleague, or mentor of whose reaction and response we're least certain. And it suggests, perhaps, reversing the equation, that if we want to gain the most insight into a person, we should ask the question of whose answer we're least certain... Pleasantries are low entropy, biased so far that they stop being an earnest inquiry and become ritual. Ritual has its virtues, of course, and I don't quibble with them in the slightest. But if we really want to start fathoming someone, we need to get them speaking in sentences we can't finish.
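The entropy the quote invokes is Shannon's: the expected surprisal of a probability distribution. A minimal sketch (the `entropy` helper and the example distributions are illustrative, not from the quoted text) shows why a fair coin carries more information than a near-certain pleasantry:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: -sum(p * log2(p)) over outcomes with p > 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally uncertain over two outcomes: exactly 1 bit.
print(entropy([0.5, 0.5]))   # 1.0

# A ritual pleasantry whose answer is 99% predictable carries little information.
print(entropy([0.99, 0.01]))  # ≈ 0.08 bits
```

The high-entropy question, in the quote's terms, is the one whose answer we can least finish ourselves.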
But the Turing test cuts both ways. You can't tell if a machine has gotten smarter or if you've just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you've let your sense of personhood degrade in order to make the illusion work for you? People degrade themselves in order to make machines seem smart all the time. Before the crash, bankers believed in supposedly intelligent algorithms that could calculate credit risks before making bad loans. We ask teachers to teach to standardized tests so a student will look good to an algorithm. We have repeatedly demonstrated our species' bottomless ability to lower our standards to make information technology look good. Every instance of intelligence in a machine is ambiguous. The same ambiguity that motivated dubious academic AI projects in the past has been repackaged as mass culture today. Did that search engine really know what you want, or are you playing along, lowering your standards to make it seem clever? While it's to be expected that the human perspective will be changed by encounters with profound new technologies, the exercise of treating machine intelligence as real requires people to reduce their mooring to reality.
Had I catalogued the downsides of parenthood, "son might turn out to be a killer" would never have turned up on the list. Rather, it might have looked something like this: 1. Hassle. 2. Less time just the two of us. (Try no time just the two of us.) 3. Other people. (PTA meetings. Ballet teachers. The kid's insufferable friends and their insufferable parents.) 4. Turning into a cow. (I was slight, and preferred to stay that way. My sister-in-law had developed bulging varicose veins in her legs during pregnancy that never retreated, and the prospect of calves branched in blue tree roots mortified me more than I could say. So I didn't say. I am vain, or once was, and one of my vanities was to feign that I was not.) 5. Unnatural altruism: being forced to make decisions in accordance with what was best for someone else. (I'm a pig.) 6. Curtailment of my traveling. (Note curtailment. Not conclusion.) 7. Dementing boredom. (I found small children brutally dull. I did, even at the outset, admit this to myself.) 8. Worthless social life. (I had never had a decent conversation with a friend's five-year-old in the room.) 9. Social demotion. (I was a respected entrepreneur. Once I had a toddler in tow, every man I knew-every woman, too, which is depressing-would take me less seriously.) 10. Paying the piper. (Parenthood repays a debt. But who wants to pay a debt she can escape? Apparently, the childless get away with something sneaky. Besides, what good is repaying a debt to the wrong party? Only the most warped mother would feel rewarded for her trouble by the fact that at last her daughter's life is hideous, too.)