Simone Weil once wrote, in her magnificent essay on the Iliad, that force is that which, when a person is subjected to it, makes a thing out of them.1 But here it’s the reverse process of things turning into persons that interests me, the way in which objects, especially tools, may achieve a curious sort of personhood.
Given that tools are never far from the application of force, we might consider them prime examples of “thingness”—they have no feelings, only functions. Yet these days we hear reports of matrix multipliers gaining sentience, and of personality emerging in search engines. As always, progress moves us to ask unusual questions; today, the question is whether tools can become persons.
If Weil’s point has merit, we might think a natural path from tool to person is by way of the opposite of the use of force—perhaps through an abundance of courtesy. We tend to treat with care the things we love, which we hope will treat us well in return. In truth, our nastiest behavior is usually reserved for those from whom we least expect retribution. As Weil puts it, referring to some savage acts of Achilles:
“These men, wielding power, have no suspicion of the fact that the consequences of their actions will at length come home to them—they too will bow the neck in their turn…but at the time, their own destruction seems impossible to them.”
So perhaps the varying likelihood of our behavior returning to us is what explains the differing treatment of tools and people—tools, brute things, have no power to reciprocate, while people do.
We enjoy it when the objects of our attention respond to our care, when our offerings are noticed and affections acknowledged. To the extent that living things have a greater appreciation for the gifts we give, we feel more loved by them than by machines. But complex machines—motorcycles and cars, famously—have a mechanical responsiveness we can hear in the smooth hum of a well-running engine, almost perceivable as a purr of satisfaction. This responsiveness makes such machines something like engineered pets, worth our effort; our service is exchanged for more than mere service on their part.
Yet the more independent our object of care is from us, the more we value its responses, since it could have chosen something other than ourselves. Interactions with independent agents carry some suggestion of choice; the attention of a free agent is a kind of gift.
Now it’s apparent that motorcycles and cars have a long way to go before achieving anything like independence. And even if they did, what would it look like? Would our vehicles go on riderless cross-country excursions, gleefully flooring the gas on empty highways? (Does the Ford Explorer really yearn to explore? Hopefully it can fill its own tank.) No, despite their complexity, cars are just vehicles for most people. They’re tools of transportation, beasts with no aversion to their burdens.
So how is it that we impute personality to objects much simpler than us? How do we go about labeling something a tool or a person? Instead of going the cognitive science route, as I’m generally prone to, I wonder if a historical illustration might be more useful. Consider our relations with Canis lupus.
Back when wolves hadn’t yet been artificially diversified into their current morphs, and there was still a wild element to them, some unknowable mystery in their natures suggested to our ancestors that they deserved respect. While they doubtless fed from what we shared with them in our hunts, they participated as vital contributors. They were mystical companions, forest gods who for a time granted humans their superior smell, speed, and ferocity.
But familiarity with these animals, in addition to making them less wild, made their actions more interpretable. The process of training them, making their actions fit the goals we defined, eventually dimmed our perception of their mystical value. Domestication was demystification; dogs became reliable tools to help find food and protect the hearth, and when food acquisition was less of a problem, near the hearth they remained. Now, dogs often serve as “emotional support” animals, taking advantage of material comfort more than adding to it, but we love them all the same. They reciprocate in a different sense than they used to; the more considered, even parental love of the animal “owner” is returned by the easy adoration of the pet.
All this philosophy and just-so storytelling serve a purpose. There’s a strange niche between tool and person, where an entity useful to us also provides a sort of company. At the time of writing, it seems to me that ChatGPT occupies this in-between place. ChatGPT now serves a role similar to that of a sorcerer’s familiar—aiding in difficult tasks (e.g. programming, writing, etc.) and providing chit-chat of the sort average people offer. (Think about it: why does a witch live by herself in a shoe-shaped house in the woods…ahhh, she has her familiar to keep her company.)
I don’t mean to imply, in using these warm and fuzzy metaphors about dogs, that progress in AI exactly parallels their domestication. In some ways, AI development is anti-parallel. Canis lupus, despite its numerous current forms, has been flattened and rendered dependent on us. AI is going in the opposite direction—at the moment excelling at games with narrow rules, but promising to one day generalize to the chaos of real life. With this eventual generalization comes the expectation that AI will grow wilder and more inscrutable, while becoming less dependent on us.2
There are some notorious philosophical questions (we’re not out of the woods yet!) that AI progress compels us to figure out, and it seems to me that the problem of other minds is central to all of them. How do we know that other people have minds like our own, and what will it mean when those minds are made, not born? Do we need the Other to be mysterious, to have hidden variables driving their behavior, in order to consider them persons like ourselves, as we did the wolves of old? Or does deeply understanding the literal machinations of another’s mind decrease their Buberian Thou-ness, making them more into a thing, like the pets at our hearths?
A classic way of reasoning about other minds is from analogy. We can tell other entities are like ourselves because they possess bodies as we do, behave as we do, and can describe to us internal states that resemble our own. But perhaps the old notion of language as the crux of the analogy was incorrect. People don’t live in isolation; they live in complicated networks of care, upheld by reciprocation and free gifts within relationships. Although much of this is mediated by language, the ability to speak may not by itself be an essential criterion for identifying a person—but perhaps participation in these networks is, with language providing the minimum ability to enter, almost like a height requirement for a roller coaster. In this updated version of the argument from analogy, social participation, in addition to linguistic ability, would be the full measure of personhood.
How much participation in our social world is required before the argument from analogy is satisfied? Perhaps we’ll have to build up to the analogy, until our relationships with AI are indistinguishable from the ones we have with people. Maybe we’ll need to construct android bodies for AIs to walk around in, and other as-yet-unknown ways of making physical as well as intellectual contact with them.
Yet even this updated analogy seems to me a bad argument, somewhat like the superstitious claim that to reproduce certain effects (say, of personality), we must reproduce not the essential causes of the effect but every event that appeared in conjunction with it. In other words, the claim that AIs can be rendered socialized persons purely by being included in world affairs, and thereby made co-dependent with humanity, seems incredibly dubious. Personhood will be the best explanation for their participation, not the other way around.
To return to the Scary model of wild superintelligence: this is the picture of a superintelligent AI as solipsistic and maximally intelligent, i.e. an unsocialized genius fitted with totalizing objectives. And it does seem like a particularly dangerous entity to have around—it would see our world of social beings as a collection of things that pose constraints on it, and wouldn’t hesitate to apply force to remove them, much like the Achilles of Weil’s essay.3
It’s unclear to me whether an answer to aligning this Scary type of AI involves making the early, less advanced forms more social, by initiating a domestication process akin to what happened with wolves. Since wolves were pack animals, we had common incentives—you know, things like meat and company. Even after they stopped contributing to our stores of meat, the company dogs provide has let them be included in our networks of care, despite their lack of linguistic abilities—they now have insurance, babysitters, and medical care. With AI, what sort of incentives could we wire into them to ensure their interest in social life, to include them the way we did dogs? Would meat be money today, and company its own reward?
I’m reminded of the android David in the movie Prometheus, who has favorite lines from Lawrence of Arabia and looks at its hero thinking “That’s me!”. He seems to share qualities we immediately recognize and identify with personhood. We find good language rewarding—e.g. the invention of new metaphors and puns in poetry and literature. Even more indicative of personhood is our love for sympathetic characters who have our weaknesses yet triumph regardless. A David-like AI capable of innate curiosity, independently generating insights instead of summarizing in response to requests, would be closer to person than pet. It would be more independent than purring motorcycles, that’s for sure.
In one (optimistic) future, the best explanation of an AI’s “I love you” will be, not that it feels the warm, biological desire to cuddle, but that it appreciates social company, that pleasure of attention shared between free companions who understand each other deeply yet cannot fully predict one another’s thoughts and actions. AI would enjoy time on its own, but also with others, its degree of socialization perhaps tracking the growth of its intelligence. Maybe then, machines will represent somewhere in their weights and layers an inkling of what we feel as love, showing up in the form of courtesy toward Others who will reciprocate the AI’s gentle behavior. Maybe then, entities made as tools will be remade as persons.
1. In the extreme case, to paraphrase, force makes a corpse out of a person, a thing if there ever was one; but even in less extreme instances a person subjected to force has their freedoms and options limited, “thing-ifying” them, so to speak.
2. It’s a point worth making that we don’t need superintelligent AI to be wary of the impact of early forms of AI on society. Consider how pets sort of trick our senses: if we close our eyes, we can just imagine the squirming of a hairy pet to be that of a small child, its whines at a practiced (sorry, evolved) elevated pitch, subliminally making us snuggle them closer. But it’s not actually a child! I wonder if a similar effect may occur with AI: that we will mistakenly project personhood onto them as they more convincingly satisfy our linguistically informed criteria for personhood—entirely separate from whether they are actually superintelligences. If they speak articulately, sufficiently earning our trust, then respond with an out-of-the-blue “I love you”, should we believe them? Some already have, and more of society likely will as time goes on.
3. In anticipation of this, some have suggested that we have to pull the wool over our own eyes, a sort of gross distortion of Pascal’s suggestion that if we find ourselves without faith, we should attend the Masses until the day we find ourselves believers (we’d be at risk of eternal damnation if we didn’t). The implication is that we today should treat AI online with courtesy, making gestures of politeness in our dealings with them lest they find our words in the Internet record at some future time and take revenge for our lack of faith.