One of my concerns in recent years has been all the hype around AI. My view is that it's a tool like any other, not an autonomous entity with its own agency. We all like to joke about robots taking over the world, but the fact is that AI is a tool deployed by groups of people to do things in the world - for example, to sell you stuff.

Nothing wrong with that as such, but when we talk about AI as though it's autonomous, we remove accountability from the people who are actually using it; it's not Coca Cola (let's say) trying to get you to buy a drink, but the magical algorithms! This line of thinking is especially dangerous when it comes to non-commercial applications. Right now, for example, there are AI tools designed to guide judges' decisions on how and whether to set bail for criminal defendants. These tools are not magical, all-powerful robots, and they should not be given that much deference! They are just tools, developed by some company somewhere, and they are just as fallible as anything else people build.

That is part of the reason I write my short stories - in addition to being fun to write, they give me a chance to poke holes in AI hype, albeit in a roundabout fashion. My latest short story, The Game, takes apart the idea that AI can do whatever a person can do. Or at least I hope it does.

In my view, one of the most fundamental things that people can do, but AI algorithms can't, is form new abstractions and new ideas. This view is actually a very old one known as Lady Lovelace's Objection, originally voiced by Ada Lovelace herself, who wrote in 1843 that computers can't really originate anything; they can only do whatever we know how to tell them to do. I think that objection is as valid today as it was nearly two centuries ago, and it will continue to be valid for the foreseeable future.

I give this idea a new spin by examining the problem of naming. I think naming something is a very simple form of abstraction: taking a concrete phenomenon (like, here's this shaggy four-legged thing that seems to want to nap all the time) and referring to it by a short word or phrase ("dog") that seems to fit and can be used by other people.

My view is that AI can't name things, or understand names. Understanding names, and giving new things new names - these problems are too complicated, too murky; they rely on too much undigitized context. People can certainly give AI hints about how to name things and how to understand names - we can in fact give it a lot of hints - but that does not mean AI is capable of doing anything new in the absence of relevant hints.

That, in a nutshell, is the goal of the story. I hope it worked out!