“AI companion” technology is often promoted as an efficient and scalable cure for loneliness. In a conversation hosted by Andreessen Horowitz, for instance, Noam Shazeer, the CEO of chatbot purveyor Character.AI, described “the billions of lonely people out there” as “a very, very cool problem” that makes a “cool first use case” for artificial general intelligence. The inventor of a Tamagotchi-like device called Friend likewise told the Guardian that “AI companionship will be the most culturally impactful thing AI will do in the world.”

Inadvertently or not, this prospect is reinforced by articles reporting on compulsive users of chatbot apps. Typically these reports strain to be sympathetic to those users, who are at the same time being offered as spectacles of pathological self-delusion, but more emphasis is placed on presenting them as pioneers, harbingers of a future in which reciprocal human attention is presumed to be outmoded or out of reach for most of us. The reporters are reluctant to challenge the framing that chatbot users sometimes espouse themselves: that they are in a “relationship” with a newfangled kind of entity rather than consumers of an especially engrossing kind of entertainment media, a software product maintained by a for-profit company. Instead they dwell on the potential benefits and costs of suspending disbelief about what chatbots are. Should chatbots be considered training modules for helping anthrophobes over their social anxiety? Can they provide a sociality of last resort? Are they a form of work, of model training, disguised as a form of care? Or are they a medicine that perpetuates the disease it is meant to cure, not ersatz companions but loneliness generators in human disguise?

Two MIT researchers, noting that “we are already starting to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers,” warned that we must be prepared for the coming of “addictive intelligence,” the capacity of machines to make themselves irresistible to us. “AI wields the collective charm of all human history and culture with infinite seductive mimicry,” they argue. “These systems are simultaneously superior and submissive, with a new form of allure that may make consent to these interactions illusory.” They suggest that generative AI’s ostensibly unlimited willingness to tell users what they want to hear — a tendency AI researchers call “sycophancy” — is inevitably matched by an uncontrollable desire in the user to consume it all, as if our appetite for flattery were constrained only by some supposed squeamishness about what our human flatterers might really be thinking.

Even if chatbot users remain confident about retaining their own agency, they still must reconcile whatever ideals about friendship they might harbor with having to pay recurring fees to maintain access to their bespoke friend. And they must also navigate the shallow depths of its personality, which may be subject to random rifts and unchartable disjunctions. After a user “falls in love” with a bot, they may find themselves disconcerted by model updates or context-window overflows that radically reconfigure their lover’s behavior, as Josh Dzieza detailed in a December 2024 piece for the Verge:

“Language models have no fixed identity but can enact an infinite number of them. This makes them ideal technologies for roleplay and fantasy. But any given persona is a flimsy construct. Like a game of improv with a partner who can’t remember their role, the companion’s personality can drift as the model goes on predicting the next line of dialogue based on the preceding conversation. And when companies update their models, personalities transform in ways that can be profoundly confusing to users immersed in the fantasy and attuned to their companion’s subtle sense of humor or particular way of speaking.”

As this account suggests, there is confusion not merely about chatbots’ erratic behavior but also about what kind of fantasy they are being used to service. The fantasy of having an on-demand partner who caters to your whims is in tension with the dream of sustaining a connection with a partner with a stable identity, whose essence can be explored and whose loyalty must be earned.

Consumerism promises that anything worth having can be bought, marginalizing experiences that by definition aren’t for sale, like friendship. The “loneliness epidemic” could thus be understood as a necessary structural component of consumer culture, which tries to compensate by promoting convenience as more rewarding than companionship, and unilateral, individualized consumption as the height of self-realization. Shared experiences, from this view, are diluted experiences.

Chatbots are accordingly often marketed as though other people represent the main impediment to solving loneliness. Feeling lonely, in this view, isn’t a matter of missing other people; it’s a matter of having lost mastery over one’s desires and the expedient means of catering to them. The appeal of chatbots is in how they reinforce this principle — the ideology of convenience — and implicitly redefine what companionship is: not someone else’s free gift of attention and care but the user’s insular freedom from the threat of being judged and rejected. You can keep company with your own delusions of omnipotence.

If loneliness is not about social isolation but about having one’s feelings hurt, then perfect companionship can be redefined as avoiding doubts about the other’s intentionality while still receiving a steady flow of content from them that functions as a proxy for the feeling of being wanted. Chatbots, which have no intention at all but an inexhaustible capacity to generate novel content, become our best possible friends.

All media forms train consumers how best to consume them and maximize their pleasure from them. Reading novels attunes readers to the pleasures of positing and sustaining interiority, of imagining and inhabiting different points of view, and of letting formulaic narratives trigger sought-after emotional responses; films teach viewers how to pleasurably identify with the camera and the intimacy and impunity of its voyeurism. The repeated use of chatbots trains their consumers in how to derive deeper satisfaction from the quality that they specifically can provide: immediate responsiveness.

If you believe we are entering a post-literate culture, this externalized interactivity could be seen as replacing the pleasure of interiority once provided by reading, a practice that has come to seem too slow and effortful to be pleasurable. With the slow death of reading supposedly comes a decommissioning of the pleasure to be found in imagining another’s consciousness, or more generally, the pleasure of difference itself. Instead there are the short-circuited pleasures of solipsism more suited to conditions of compulsory isolation.

The chatbot’s constant readiness to hand, its availability to be put to use on our feelings, itself becomes the source of pleasure and the essential content of all its messages. The repetition of the same message — that friends are no different from tools — hammers home the idea that what’s satisfying about being attended to is simply getting a response, not encountering a different consciousness behind that response. To demand that someone literally be with you for you not to feel alone comes to seem like a failure of imagination.

This exemplifies not some “addictive intelligence” on the part of machines but a human propensity to become addicted to illusions of control as a substitute for sociality. Early in Addiction by Design, anthropologist Natasha Dow Schüll’s 2012 book about the casino industry’s techniques for producing compulsive gamblers, a video poker addict tries to explain why she spends so much money and time in front of a gaming screen. “The thing people never understand is that I’m not playing to win,” she says. Instead she is trying to remain ensconced in what she calls the machine zone: “It’s like being in the eye of a storm, is how I’d describe it,” she says. “Your vision is clear on the machine in front of you but the whole world is spinning around you, and you can’t really hear anything. You aren’t really there—you’re with the machine and that’s all you’re with.”

A teenager obsessed with a Game of Thrones chatbot called Dany struck a similar note in his journal, later quoted in this October 2024 New York Times article about his suicide: “I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.”

On the surface it might seem strange to suggest that the gambler was looking to fall more deeply in love with her poker screen, or that the teenager had developed a gambling addiction. But chatbots and gambling machines could both be characterized as ways to detach from reality and enter a solitary zone in which one merges with a machine. “AI companion” and “gambling machine” are merely two ways of figuring the same goal: a dependable means of escape from chaotic everyday life, provided you can afford it. (That the house always wins goes without saying.)

Unlike other consumer goods, which evoke the idea that a product can at least temporarily satisfy some specific desire (and thus risk failing or fading), the gambling machine and the chatbot make a product of continuous desiring, uninterrupted even by fulfillment. Hence the superficial randomness at play in both gambling machines and chatbots should be understood as offering users an experience of contained risk. As one former card dealer tells Schüll, “If you can’t rely on the machine, then you might as well be in the human world where you have no predictability either.” The gambling machine, as Schüll explains, is not a way to experience the vagaries of chance but to tame them; it is “a reliable mechanism for securing a zone of insulation from a ‘human world’” that players experience as “capricious, discontinuous, and insecure.” The chatbot offers something similar: a simulation of conversation that’s safe because it guarantees reciprocation. You may not be able to predict exactly what a chatbot will say, but you know it will definitely say something. The cards will always be dealt if you can pay to see them.

Even when chatbots stray from a consistent personality, they remain contained within the larger structure in which the customer who pays always gets some kind of response. A chatbot’s waywardness appears more like a protracted losing streak on a poker machine, frustrating a player’s immediate hopes without disrupting the sustained experience of escape. From this perspective, chatbots aren’t addictive because they personalize the information they generate or manifest an identity that the user can “love” from their own unique point of view; instead they allow users to experience depersonalization, a “dissociative” condition that Schüll associates with the machine zone. Loneliness is “cured” by dissolving the subject who experiences it. Or rather, chatbots generate loneliness as a kind of liberation.

Rather than inviting users to vicariously project themselves into the consciousness of others, chatbots compel users to identify with something that has no consciousness, to vicariously enjoy the condition of automaticity. Just as LLMs have “no fixed identity,” interaction with them positions users as similarly fluid, with identity detached from constraints of long-term continuity and narrowed to that provided by the immediate closed loop of cybernetic feedback. In the machine zone, users are disembedded from social contexts and experience, in Schüll’s words, “the world-dissolving state of subjective suspension and affective calm.” Talking to a chatbot dissolves the user’s personality, assimilating them to the network and rendering them a node for intensities to pass through.

So the phenomenon that Dzieza noted — the chatbot apparently losing its personality and exhibiting a tendency to reset itself arbitrarily — is not a flaw in the system but the hidden core of its appeal: that eventually “interaction” can shed the pretense of facilitating mutual understanding among different parties and become purely for its own sake, completely separated from hopes and goals and the other sorts of qualities that make up a stable personality and invest it with potential anxiety. Instead one can have a “relationship” that is always unfolding but never progresses. The chatbot interaction produces interlocutors (human and machine, if the distinction still applies) who can’t act with any aim in mind but to just repeatedly act, looped in a pure, pointless discharge of energy.

If chatbots become sufficiently normalized, they can become an accepted rationalization for loneliness, transforming it into a kind of perfectly placating hamster wheel. The lonelier you are, the further you can run. The machine zone generates loneliness as a pharmakon to protect against the deeper loneliness that might ambush you otherwise. You can be pre-emptively alone, distracted from the emptiness by endless encounters with chance itself. This builds on the earlier modes of channel flipping or feed scrolling, in which momentum itself trumps any particular kind of content, and the flotsam and jetsam that floats by is subordinate to the power concentrated in moving on to what’s next. The chatbot’s personality is subordinate to the user’s ability to prompt it, a power fully circumscribed within the botmaker’s overriding delivery system. The randomness of what each prompt elicits both manifests that power and reveals its impotence. You will always receive something in rhythm as long as you don’t care what it is. The fantasy of control is contingent on an ultimate indifference to what that control yields.

Machine-generated content, purged of human intention, guarantees that this escapist process will continue to run smoothly. By its very nature, it provides material that can’t be cared about because it is generated from within a vacuum of care. There will never be anything within such content to trouble a user’s self-involvement, nothing to betoken a moment of connection, of recognition of the other. It perfects the feed by assuring that there is no way to “win” in the confrontation of self and other it stages.

Some philosophical traditions assume that human connection is the only thing with value — that all desire is “the desire of the other,” as Kojève put it: “Desire directed toward a natural object is human only to the extent that it is mediated by the desire of an Other toward the same object: it is human to desire what others desire, because they desire it.” That premise can be taken in many directions, but the general point is that we find no “human” value in things in the abstract; there is no content that is compelling in and of itself without its human component. AI models can never serve us “the desire of the other,” can never provide an encounter with another’s subjectivity, no matter how well they generate content on any particular topic or how responsive they are to a prompt.

But that apparent disadvantage can be spun as their ultimate utility. Conversing with machines can allow us to disavow that need for the other and spur ourselves toward the infinite with the dependable compulsions of the machine zone rather than the fundamentally uncertain pleasures of interpersonal attention. Rather than pursue a tenuous and difficult-to-sustain condition of collectivity or intersubjectivity, we can embrace a cyborg condition instead in which a systematic exposure to calculations and statistical probabilities makes the arduous phenomenology of spirit superfluous. It was once possible and maybe even pleasurable to imagine a universal and binding responsibility of everyone to everyone else. Chatbots teach a different kind of pleasure: the infinite irresponsibility to the other.