What is artificial intelligence? It turns out there is a bit of a debate over what exactly defines “intelligence” and how we can know whether something is artificially intelligent. Some people think we have already achieved AI, citing Google’s algorithms or Apple’s Siri as examples. Critics counter that these are merely data accumulators, not actual intelligence. The debate, it seems, is over where exactly we draw the line between genuine intelligence and the ability to search through facts.

Many people turn to the Turing Test as the gold standard for defining artificial intelligence. However, Alan Turing’s test for simulated intelligence has not worked out quite as hoped. According to Turing’s test, which he called the Imitation Game, a program is considered intelligent if a human judge, after having a text-based conversation with both it and an actual human being, cannot determine which of the two is the machine.
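
Turing’s setup is simple enough to sketch. The toy Python below is purely for illustration, the function names are mine and not from any real test harness: a judge’s questions are put to two hidden respondents, one standing in for the human and one for the program, shuffled so the labels reveal nothing; in the real Imitation Game a human judge would read the conversations and guess which respondent is the machine.

```python
import random

def bot_reply(message: str) -> str:
    # A stand-in "program": evades the question, much as Eugene Goostman did.
    return random.choice(["I don't know.", "Why do you ask?", "Let's change the subject."])

def human_reply(message: str) -> str:
    # A stand-in for the human participant; in a live game this would be a real person typing.
    return "Let me think about that for a moment..."

def imitation_game(questions):
    # The judge converses with two hidden respondents, A and B, and must guess which is the machine.
    respondents = [bot_reply, human_reply]
    random.shuffle(respondents)  # hide which label belongs to the program
    return {label: [(q, respond(q)) for q in questions]
            for label, respond in zip(("A", "B"), respondents)}

if __name__ == "__main__":
    for label, exchange in imitation_game(["What is your favorite memory?",
                                           "What is 7 times 8?"]).items():
        print(label, exchange)
```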

While Turing’s reasoning was sound, the problem may be with the way people are applying the so-called Turing Test. Turing’s reasoning was that humans are known to be intelligent, so if a computer program is indistinguishable from a human, then we can define it as intelligent. It makes sense to make this kind of comparison: one side is a “known” (humans are intelligent beings) and the other is an “unknown” (is this program intelligent?). However, people seem to want to game the system by trying to beat the test rather than understanding the impetus behind the comparison.

Last year, a computer program going by the name Eugene Goostman tricked one-third of its judges into thinking it was a teenage boy, mainly by evading questions, saying “I don’t know,” or changing the subject. Another study pointed out that the best way to trick someone into believing a computer is human is to have it make mistakes. This brings up the awkward notion that making a computer act dumber is the best way to get it to pass a test of whether it is intelligent.

Simon Parkin, writing for MIT Technology Review, mentions one critic who says that many of the computer programs deemed artificially intelligent, such as Watson, are really only good at one task or a few similar tasks. If asked to switch to another task, say from playing chess to playing checkers, such a program likely will not be able to do it. This assumes the computer has not been programmed to play checkers but would have to learn it. Critics say that being able to look up probable chess moves, or following a program, is not intelligence. They offer an alternative test in which the computer is required to be creative, such as by writing a poem or a song.

Other people seem to consider the accumulation of data, coupled with a good algorithm, equivalent to artificial intelligence. Babak Hodjat, writing for Wired, lists several industries employing AI. Many of these take advantage of accumulating large amounts of data, as Google and Amazon do. From this perspective, software that helps scientists find correlations, or a GPS that helps users find their destination, is an example of AI.

Hodjat mentions in his introduction that people will often consider a system intelligent until it makes a mistake, but this assumes perfection is inherent in intelligence. Perhaps this is part of the problem with defining AI. People want to see a machine or program that is human-like, but they also want to see it work perfectly, as though AI is supposed to mirror a perfected version of ourselves.

Or, perhaps defining AI is a little more self-referential than people think. Perhaps, in true postmodern style, there is no over-arching definition of artificial intelligence. A machine is considered intelligent if it seems intelligent to you. This would be a kind of modified Turing Test for (post)modernism where values and definitions take on a subjective nature. From this perspective, AI will be defined by whatever value you give it.

We’ve already seen this kind of subjective, relational value placed on robots. Consider AIBO, Sony’s robotic dog. The first generation of AIBO came out in Japan in 1999. By 2006, however, Sony needed to restructure, and the AIBO line was nixed. The company did keep AIBO “clinics” where people could bring their robotic dog when it was “sick,” but it ended that program in 2014. Many owners now hold funerals for their AIBO when it dies. These people ascribe value to their AIBO as though it were a real dog, and some even believe it has a soul.

Spike Jonze’s Her explores whether a computer program can provide the kind of companionship that a human being can, in a sense making it emotionally indistinguishable from a real human relationship. Perhaps something is artificially intelligent when an individual feels as though it is as good as (or better than) a human and relates to it the way one would typically relate to another human being. Rather than the computer program tricking a judge through conversation, the program “tricks” one’s emotions into feeling the same way one would with a human.