The Stanford Encyclopedia of Philosophy does a fine job summarizing the Turing Test and the rigorous debate it’s enjoyed in 70-some years of life. The writing makes pretenses towards formalism that I quite like - it’s not really up to the rigor of a mathematical paper, nor is it so excessively informal as to be frivolous. I appreciate this entry as a fine way to navigate the thought landscape surrounding this concept. Indeed, I’ve had some thoughts much in line with arguments 2.3 and 2.4, to say nothing of 2.6 (Lady Lovelace’s Objection), which I consider something of the final word on the matter. This kind of summary is more than just a few minutes of entertaining reading, because the problem of consciousness has rapidly become an ever more vital matter of public debate. Questions like “what are the rights and responsibilities of an artificially intelligent system?” already loom large as applied to social media ranking algorithms, self-driving cars, warehouse robots, and any number of other systems that fall into this rather broad category. How do we think about these systems? Are they conscious entities, and therefore deserving of the same rights and responsibilities as human beings? For my part I think the answer is no, and I hope to add an objection to the rather fine list compiled in this encyclopedia entry: The Argument from Tautology.

In short, my view is that AI systems are not conscious, nor can they ever be, because the definition of consciousness is itself vague and open-ended - and we will never find a satisfactory objective definition. The first half of this assertion is plainly true among non-experts in today’s world: few if any could come up with an objective definition that could readily test the consciousness of non-human animals, of humans in unfortunate medical conditions (e.g., in a coma), of hypothetical aliens, or, of course, of the various artificial intelligence systems we know of or could reasonably imagine.

The pessimism of the second half of the assertion may be a bit bold. After all, scientists cook up all sorts of fantastic findings every day - can we really say that such a definition of consciousness is impossible? It’s certainly possible that some day a biologist, or neurochemist, or philosopher, or the like, will publish a paper which somehow objectively measures and determines consciousness. Fine news that would be, except it would present two major problems. First, such a measurement would itself be subject to falsification. Even after such a landmark accomplishment, we may yet come to dismiss it, a year or a decade or a century afterwards. For example, some creature somewhere could turn up who fails the objective measurement of consciousness, and yet manages to do something we would in every other circumstance consider the behavior of a conscious being - picks out a seasonally and socially appropriate outfit for the day, let’s say. The second objection is a little murkier: it argues that it’s simply impossible to trust such a measurement device, because it can only measure the world in terms that humans understand.

We could, for instance, well imagine a device which measures the consciousness of a dog, a peanut butter sandwich, and a Roomba. Perhaps such a device uses biochemical and thermal sensors, plus some fancy number-crunching, to determine that the dog is conscious, the sandwich is not, and the Roomba… is also conscious. The problem is that the device relies on mechanisms which are legible only to conscious humans (sensors, number-crunching), and perhaps omits mechanisms that would provide valuable insights which are simply unavailable to humans. Consequently, the device may well fail to detect consciousness in certain creatures because those creatures display consciousness in ways entirely unimaginable to human sensibilities. It was only relatively recently that we learned about the complex culture surrounding the song of the male humpback whale, for example; any hypothetical consciousness-measurement device built before that discovery would have failed to take such a phenomenon into account. To put it another way, in the language of the Stanford Encyclopedia, failing to detect consciousness in a peanut butter sandwich may well be the result of simple human chauvinism.
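To make that worry a little more concrete, here is a minimal sketch of what such a device amounts to. Nothing here comes from the encyclopedia entry or from any real system; the feature names, thresholds, and readings are invented for illustration. The point is only that whatever the device “crunches,” the features were chosen by humans, so its verdicts can only reflect human-legible signals.

```python
# Toy sketch of the hypothetical consciousness detector in the thought
# experiment above. All features and thresholds are invented; the point is
# that the device can only score what its human designers thought to measure.
from dataclasses import dataclass


@dataclass
class Entity:
    name: str
    body_temperature_c: float   # thermal sensor reading (human-chosen feature)
    metabolic_activity: float   # biochemical sensor reading, 0..1 (human-chosen)
    responds_to_stimuli: bool   # behavioral probe a human thought to include


def looks_conscious(e: Entity) -> bool:
    """'Fancy number-crunching' reduced to its essence: a rule over
    human-selected features. Anything the designers did not think to
    measure (whale-song culture, say) simply cannot register."""
    score = 0
    score += e.metabolic_activity > 0.3
    score += 30.0 <= e.body_temperature_c <= 45.0
    score += e.responds_to_stimuli
    return score >= 2


for entity in [
    Entity("dog", 38.5, 0.9, True),
    Entity("peanut butter sandwich", 21.0, 0.05, False),
    Entity("Roomba", 35.0, 0.0, True),  # warm motors, obstacle avoidance
]:
    verdict = "conscious" if looks_conscious(entity) else "not conscious"
    print(f"{entity.name}: {verdict}")
```

However you tune the thresholds, the verdict is downstream of a human decision about what counts as evidence - which is exactly the chauvinism worry.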

If my rather pessimistic view holds up, then we can project this argument from tautology into public policy. AI systems deserve exactly the rights and responsibilities which we think are most beneficial to society as a whole, at any given point in time. We need not include them in the group of beings endowed by their creator with inalienable rights - as we try to do for humans.

That may seem rather harsh, but it’s worth remembering what we are talking about: bits of software and hardware stitched together by people and organizations as part of some endeavor, for a certain period of time. The famous Facebook news feed sorting algorithm, for all its cleverness, is still the product of a particular group of people working towards a particular set of goals over the past 15-some years. Likewise with drones and cars and everything else. If we can simply define by fiat the consciousness ascribed to these things, then it becomes glaringly obvious that they are not so different from a car factory or a very good recipe for chocolate chip cookies. They are a means to an end, deployed on behalf of some group of people. As with any other instrument of production, it is that group of people which benefits from the instrument - and that same group which shoulders accountability when the thing goes haywire and causes damage.

Update: It turns out that this idea is hardly new. When I originally wrote this post, I figured that it was probably unoriginal. But only recently did I come across an interesting quote from Erwin Schrödinger, in his 1944 book “What Is Life?”:

Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental.

I recognize that Turing’s 1950 paper was about intelligence, rather than consciousness. The two are different concepts, but as we commonly think of them they are highly interrelated. After all, what is intelligence without consciousness? If we imagine a machine which does something intelligent, but which isn’t conscious, then we generally think of the intelligence the machine displays as a derivative of its creator’s intelligence. I think most of the discussion of intelligent AI is really about AI which is both intelligent and conscious in some way.

So this “argument from tautology” amounts to a derivation of Schrödinger’s idea, perhaps fused a little with The Argument from Consciousness. But unlike the original argument from consciousness, it has nothing to do with whether or not machines can somehow experience pleasure, pain, or other feelings in just the way we do. Rather, my argument goes somewhat further and says: because consciousness is fundamental, as Schrödinger says, we can simply define away the notion that machines can have consciousness. As a result we can object, as Lady Lovelace does, that nothing the machine does is truly original; it is in fact a derivative of its human creator’s intelligence.

Image courtesy of Tamara Menzi