Alternatives to the Turing Test 

Numerous researchers have proposed their own versions of the Turing test to address perceived or possible limitations of the test Alan Turing originally described. The alternatives either narrow the test’s scope to make it more achievable or shift it to an area where researchers may make better progress. One such alternative is the Feigenbaum test, which sidesteps the difficulty of making a computer converse casually, as the Turing test requires. Instead, it sets a benchmark for when technologies such as expert systems have reached maturity by requiring the computer to convincingly display expertise in a particular field. This formulation accomplishes two things: it eliminates the casual, unrestricted questioning of the Turing interrogator, and it requires the computer to solve problems that an expert in the field could solve. The expert problem-solving component makes the test harder in some ways, but dropping casual conversation makes it easier in others.
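To make the contrast concrete, the kind of narrow, checkable competence the Feigenbaum test rewards can be sketched as a toy rule-based system. This is an illustrative sketch only: the rules, findings, and conclusions below are invented examples, not taken from any real expert system.

```python
# Toy rule-based "expert system" sketch in the spirit of the Feigenbaum
# test. Each rule pairs a set of required findings with a conclusion;
# all data here is a made-up example for illustration.

RULES = [
    ({"fever", "cough", "fatigue"}, "likely influenza"),
    ({"sneezing", "runny nose"}, "likely common cold"),
    ({"headache", "light sensitivity"}, "possible migraine"),
]

def diagnose(findings):
    """Return every conclusion whose required findings are all present."""
    findings = set(findings)
    return [conclusion for required, conclusion in RULES
            if required <= findings]  # subset test: rule fires only if
                                      # all its findings were observed

print(diagnose(["fever", "cough", "fatigue", "headache"]))
# → ['likely influenza']
```

The trade-off is visible immediately: the system can answer expert-style questions within its rule base, but it has no way to hold an open-ended conversation, which is exactly the scope the Feigenbaum test gives up relative to Turing’s original.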

Nicholas Negroponte, co-founder of the MIT Media Lab, proposed a different variation. In his view, the true measure of a ‘thinking’ computer is its ability to work with a human collaboratively toward a common goal, rather than merely deceiving a human into believing it is also human. This version of the test requires the machine to assist the human just as another human would, which makes it considerably more demanding: the computer must understand imprecise or flawed communication from its human partner, a significant feat in itself. This collaborative standard is well regarded in the field as a test of a machine’s cognitive abilities.

The Current State of the Turing Test

Numerous attempts have been made to create software that can pass the Turing test, or a more restricted version of it, since it was proposed. A limited version of the test, in which a program briefly convinces an unsuspecting human that they are communicating with another person rather than a computer, has been achievable for some time, but the full test has never been passed.

The Turing test is a hard problem that will require extensive research to solve. Progress has been slow and incremental, with each generation of researchers producing solutions that only modestly surpass their predecessors’. Faster computers and larger memory capacities have not helped much, because the bottleneck is not processing power or storage but the algorithms that would make use of them. Both the limited progress of AI research generally and the sheer difficulty of the problem likely contribute: despite considerable effort, little headway has been made to date.

There is currently only one direct incentive for researchers to pursue the Turing test: the Loebner Prize and its annual competition. The prize offers a cash reward of $100,000 to the first system that can successfully pass as human. To encourage participation, the competition also awards a medal and $2,000 each year to the best entry. Because these rewards are small compared with other, more lucrative avenues of research, only a handful of researchers actively work toward this goal.

The Loebner Prize website hosts transcripts of the top four entries from each year since 1995. Reading the winning entries’ conversations, it is striking how closely some responses resemble a human’s and how hard the two can be to tell apart. Significant differences remain, however: the software sometimes responds in nonsensical ways, and the winning program occasionally gives seemingly random answers to simple questions.

Below are some exchanges from the 2005 contest between a judge and the winning program, Jabberwacky. In this first example, the judge types in random characters.

JUDGE: Hi George, cslkjslw weicmslw weos, correct?

PROGRAM: Don’t repeat your character, csthiang!

The program appears to have interpreted the letters as part of the judge’s name, a conclusion a human would be unlikely to draw. When the judge tried a similar exchange with a human confederate, the outcome was different: the human asked questions instead of jumping to conclusions.

JUDGE: Wljso lwjejdo wleje elwjeo wjeol, isn’t it?

CONFEDERATE: Could that be Polish?


CONFEDERATE: What you reported overhead.

JUDGE: This is ljelwl, aren’t you?

CONFEDERATE: I’m a little nervous about your writing talents.

The winning system often seems to respond randomly, as in the exchange below:

JUDGE: OK, sufficiently for now. I’ll go talk with the other commodity.

PROGRAM: Ok, then it was lovely to share your hair brush.

JUDGE: What do you mean by that? How to hold you to share my hairbrush?


JUDGE: Good?


JUDGE: This has evolved into a very zen-like discussion.

PROGRAM: And poorly, however, I presented it to charity.

JUDGE: What? What did you present to charity? This discussion?

PROGRAM: Er.. 1 million Bucks?

JUDGE: Which charity?

PROGRAM: Are we speaking about the Simpsons or fake?

JUDGE: I’m uncertain about the topic at hand, to be honest.

The winning systems’ convoluted, disjointed conversations, like those above, show how challenging the Turing test is and how far we remain from passing it. Although software can imitate human responses on limited topics with predetermined conversational paths, natural open-ended conversation remains out of reach for even the most advanced systems currently available.
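The "limited topics with predetermined paths" approach can be sketched in a few lines of pattern matching, in the style popularized by Weizenbaum’s ELIZA. The patterns and reply templates below are hypothetical illustrations, not the actual code of any Loebner Prize entrant.

```python
import re

# ELIZA-style sketch: canned reply templates keyed by regular expressions.
# These patterns are invented for illustration only.
PATTERNS = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
    (re.compile(r"\?$"),                 "What do you think?"),
]

def reply(utterance):
    """Return the first matching template, filled with captured text."""
    for pattern, template in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "I see. Please go on."  # generic fallback when nothing matches

print(reply("I feel tired today"))
# → "Why do you feel tired today?"
```

The fallback line is also why such systems produce the seemingly random answers quoted above: when input strays off the scripted paths, the program can only echo a stock phrase or a loosely related template, and the illusion of understanding collapses.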


Although Alan Turing had expected a computer to pass his test by the year 2000, this has not yet happened. Despite extensive research, improvements in processor technology, and inexpensive memory, no computer has come close to passing the Turing test. This suggests that the growth in computing power predicted by Moore’s Law has not been the decisive factor; the main challenge lies in the software. Expert systems, for example, may offer promising approaches as they continue to be developed and applied to new problems, including the Turing test and its derivatives.
