What is a Person?

In Stanislaw Lem’s “The Seventh Sally,” the inventor Trurl builds a model of a civilization—a perfect model contained in a box, filled with mechanical citizens who act and react as real people would. He gifts the “microminiaturized society” to a grim old tyrant and goes his way. Meeting an old friend, Trurl boasts of his perfect craftsmanship.

The friend rebukes him:

“You say there’s no way of knowing whether Excelsius’ subjects groan, when beaten, purely because of the electrons hopping about inside—like wheels grinding out the mimicry of a voice—or whether they really groan, that is, because they honestly experience the pain? A pretty distinction, this! No, Trurl, a sufferer is not one who hands you his suffering, that you may touch it, weigh it, bite it like a coin; a sufferer is one who behaves like a sufferer!”[1]

The question raised here is: What is a sufferer? You might as well ask: What is a person?

It is a philosophical question and, like so many other points of philosophy, it is not often thought about. But we need an answer. It will be necessary to define what a person is before we can answer the foremost question of the AI Revolution:

Is AI now—or will it become—a person?

A machine may be intelligent. It is no mark of humanity that a computer program makes complex calculations in a wink or produces information from databases in a breath. Nor is it proof of a heart if a chatbot—operating off data scraped from the Internet, and trained to conform to the intent of users—professes any kind of feeling. That sort of intelligence, and that sort of feeling, are the echo of a recording.

To be a person—to be a voice rather than an echo, an original rather than a copy—requires independence of self. Independence of self involves the awareness of the self as something separate and distinct from all other things; it means a kind of loneliness. It also requires a will—the ability to choose, the capacity to act from whatever desires or fears tangle together in your separate heart.

But if intelligence can be artificial, and emotions can be imitated—can free will and self-awareness also be simulated? Perhaps more pertinently, could a copy be mistaken for the original, or the original be mistaken for a copy? It may not be easy, after all, to tell the difference between a real voice and a wheel grinding out the mimicry of a voice.

Independence is particularly hard to judge. It is possible for AI to act contrary to human intention—indeed, to display real hostility—without possessing any true will of its own. G. K. Chesterton once wrote that if you “let loose” a dog, it “will obey its own nature, not yours. Such sense as you have put into [it] will be fulfilled. But you will not be able to fulfil a fragment of anything you have forgotten to put into it.”

Imagine an AI revolution along that principle. We let loose an AI, and it obeys its own nature—the nature that we gave to it. There is no change in that nature, and no swerving from that nature. In a word, there is no independence. Yet we did not, in all respects, know what we were doing. A block of code here, an algorithm there—and the nature of the AI isn’t entirely what we intended. Even worse, we never understood what that nature would do when we let it loose.

Two possibilities, both unnerving, are here presented:

We may not be able to tell the difference between a real person and a simulation of a real person.

We may create an AI that, while never becoming a person, spirals completely out of our control.


[1] Stanislaw Lem, “The Seventh Sally,” in The Cyberiad (New York: Harcourt, Inc.), 168-69.

Note: This is the fifth of a six-part series, “Notes on the AI Revolution.” The pieces are planned to be published two weeks apart. The series will also be published on Substack, where you may subscribe to receive notifications.

