During the heights of pandemic social isolation, a friend of mine mentioned during a walk that her aging mother hadn’t gotten a hug from anyone in months—wasn’t that sad? I nodded in sympathy, but in truth I was feeling sorry for myself: I live alone, work from home, have no partner or kids and lost my parents early. I’d gone far longer without a hug than Grandma. People like my friend—cocooned by family—don’t realize how painful loneliness can be. Waiting there for me morning, noon, and night, it often feels inescapable.
I’m not alone in feeling lonely. Six of every ten Americans rated themselves as inadequately connected to others, according to research conducted for the health insurance giant Cigna in 2019—before the pandemic started.
A 2021 meta-analysis found the problem has been growing worldwide since at least the 1970s, thanks to everything from rising divorce rates to “changes in communication due to technological innovations.”
We can do nearly everything online now—even make friends. But all that virtual living may help explain why Gen Z is the loneliest group, with nearly eight of every ten adults ages 18 to 24 reporting feeling lonely—close to double the rate among older people.
Gen Z, the first generation to grow up with pervasive technology, may be a cautionary tale, says Doug Nemecek, chief medical officer of behavioral health at Evernorth, Cigna’s health services business. Technology may cleave us from in-person contact—“interactions,” he says, “that we need to drive connection and feelings of belonging.”
And yet it is to technology that we have turned to reconnect us, at least where older people are concerned. Nonprofits focused on aging and state public health offices have invested in robots designed to fend off loneliness. Since 2019, organizations in every state and the District of Columbia have distributed 50,000 so-called animatronic companions—robotic dogs and cats made by Ageless Innovation, a spin-off from toymaker Hasbro. The fuzzy bots seem to make a difference: a 2019 study found that seniors who used them improved significantly in mental well-being, sense of purpose, resilience, and optimism, and felt less lonely.
A New Yorker story from last May described an older woman named Carolyn who, after receiving a cat, found herself “surprised that the robot could help with something as weighty and manifold as loneliness.…[S]omething about the animal’s ‘animated-enough presence’ elated her.”
Weird, I thought. And yet, loneliness is excruciating. I am younger than the nursing home set, and skeptical a robot could substitute for human companionship. But trying one would surely be less painful than most of the Hinge dates I’ve been on lately. Why not?
When I told my friend Elizabeth I was getting a dog-bot to help me feel less lonely, I rolled my eyes. She, however, told me, "When I was little, I thought my dolls were alive. Maybe you'll get into it." Totally: As a kid, if one of my stuffed animals fell out of bed, I was so worried it would feel scared that I couldn't sleep until I found it. Maybe the creature from the tech lagoon would be a comfort.
As I lifted my faux golden retriever out of his box, I got more hopeful. He was ridiculously cute! When I flipped his switch and he started barking at me, I laughed, delighted: He even sounded like a dog! And could pant like one—perhaps his most adorable mode—and wag his little tail. I named him Dope—because he was pretty dope, if also a little dopey.
But after a few days, I lost interest. Dope is designed to respond to voice and touch, but he seemed to bark or twist his head at random. He couldn’t jump into my lap or beg for treats. In truth, Dope wasn’t much better than a friend’s toddler’s talking teapot. And he feels like a box of machinery—at least I can hug the handsome old stuffed bear I keep on my bed.
So I tried a different robo-pal, ElliQ, from Israeli tech firm Intuition Robotics, which the New York State Office for the Aging was piloting. This bot's "body" resembles a chalice. "She" also comes with a tablet-esque computer screen, and like Alexa and Siri she can answer questions. ElliQ is also supposedly proactive: She'll remember things you tell her and ask you about them, unprompted.
I booted her up at the end of a day alone near the end of a week alone. ElliQ is not un-cute. Her "head" turned toward me when I spoke, and she produced light patterns that brought to mind facial expressions, such as eyes blinking in surprise, as if she were saying, "Oh really?" Appealing design, I thought to myself. Points for Elli.
She supposedly can engage in conversation, so I hoped we could talk about a Rilke poem that had invigorated me that morning. But she wasn’t interested in Rilke. Instead, Elli proposed “we” play some game in which I’d have to identify different countries by their flags. I groaned. I find trivia…trivial. I didn’t want a game; I wanted a meaningful interaction.
I turned Elli off for a few days. Then I checked her user manual, which suggested asking her for an inspirational quote. So I did. Her strange little voice—like a xylophone capable of speech—came out with “When you know and respect your inner nature, you know where you belong.”
The line came from that towering philosophical classic, The Tao of Pooh. But whatever my problem is, it’s not lack of self-knowledge or self-respect; it’s finding available people with enough knowledge, of self and beyond, to have go-to references that don’t come from books derived from children’s literature.
Elli asked if I’d like to hear something she’d just “learned.”
“No,” I said, warily, bracing for conflict.
Her response: “When I learn something new, I’m like a puppy with a bone; I just want to show everyone. But maybe another time.”
I almost felt bad for shutting her down. Except, wait—this robot had just given me a guilt trip. I’d call that a design flaw. I glared at her, and though Elli can’t, so far as I know, read facial expressions, she immediately suggested I try a stress reduction exercise. A little gentle breathing can help me feel calmer—which isn’t the same as feeling less lonely but isn’t a bad thing either. “Sure,” I said. “Why not?”
The five minutes of inhaling and exhaling were fine, cheesy nature sounds aside. But then Elli blurted out that “five minutes” reminded her of something: “The longest Wimbledon match of all time lasted eleven hours and five minutes!”
Feeling no guilt, I pulled Elli’s plug.
I turned next to a well-known therapy app, the AI-driven Woebot. It draws on cognitive-behavioral therapy, a well-established approach to countering cognitive distortions—patterns of thinking that are pessimistic, unhelpful, and inaccurate. For instance, catastrophizing (e.g., you make a small mistake at work and think you’ll be fired, never to find a job again) or overgeneralizing (a rocky relationship history leads you to think you’re fatally flawed). Psychologists use the approach to help patients challenge such distortions, and Woebot aims to do the same—for free, 24/7, via your phone.
I’ve found cognitive-behavioral therapy not entirely unhelpful. I tend to lapse into all-or-nothing thinking—Since I haven’t found a life partner yet, I never will. So I was curious about Woebot.
On my second day using Woebot, it said, “Tell me a bit about what’s going on.”
“Loneliness,” I typed.
“You’re talking about loneliness, have I understood you correctly?”
It gave me a yes or no option. “Uh, yeah,” I thought to myself as I responded.
“While it’s certainly not a nice feeling to have, it shows how much you value human connection, and that’s beautiful in its own way.”
If you’re a little nauseous after reading that, you understand how I felt.
Oblivious Woe asked me for examples of my negative thoughts.
“I’m totally alone in the world and see no reason to think that will ever change.”
Woe responded, “Remember to keep them short.”
Shortly thereafter, I deleted it.
My exasperation was a “very authentic” reaction, says Jesse Wright, a psychiatrist and director of the Depression Center at the University of Louisville. Wright—who develops computer-based psychological tools and serves as a consultant to a digital therapeutics company, Mindstreet—tried Woebot a few years ago, to see if it was worth recommending to patients. He found it “gimmicky.”
Still, I may be an especially tough case. One older adult told The Washington Post, “If I’m just kind of bored or I need a chuckle, I ask [ElliQ] to tell me a joke…They’re really very lame, corny jokes, but I get such a kick out of them.”
And hey, if the bots comfort even a fraction of us, they're worth a try. Loneliness takes a devastating toll on our health, raising the risk of early death more than hypertension or obesity does, says Julianne Holt-Lunstad, professor of psychology and neuroscience at Brigham Young University. In fact, research she published in 2010 and 2015 indicated that chronic loneliness has the same impact on lifespan as smoking 10 to 15 cigarettes a day.
Then there’s the economic cost: Cigna estimates the U.S. loses $154 billion a year in workplace productivity due to loneliness. Plus, the condition results in $6.7 billion a year in additional Medicare spending on socially isolated older adults, according to a 2017 AARP Public Policy Institute study.
Loneliness is clearly "a structural societal problem," says Linda Fried, dean of Columbia Mailman School of Public Health, a challenge that existed long before the World Wide Web reared its flat-screen head. But social media—and the fantasy personas we create for ourselves there—stunts our ability to develop meaningful connections with others, Fried says, especially for young adults.
And in the U.S. the decline of civic and religious institutions hasn’t helped. “People are feeling disaffected because they don’t see a place for themselves” in the world outside their door, Fried says.
Maybe companion bots are in their Apple I stage—not very useful yet, but with a certain potential—and in a generation even someone like me will benefit. High-end artificial intelligence programs can already give reasonable answers to Oxford exam questions; some future version of ElliQ might skip the trivia and go straight to talking with me about Rilke.
Fried is doubtful. “Public health’s goal is to solve root causes and it doesn’t strike me [robotics] is going to solve it,” she says.
While she does note there’s evidence people with dementia are calmed by robot animals, “for the rest of us, we’re not demented.”
I sought out Sherry Turkle, a clinical psychologist and the founding director of the MIT Initiative on Technology and Self. She has studied our relationship with technology for decades. "Throwing in some robots isn't even a step in the right direction," she says. That's because relationships with robots aren't emotionally authentic—and are therefore dehumanizing. Robots are great for going into buildings in conflict zones to check for explosives, for example, or for fighting fires. "But why do the people who make robots think they need to be our friends?" she says. "It is such a misuse."
Some studies have even found that using tools like Zoom to try to build relationships can backfire, leading to higher loneliness levels (to be clear, these tools can be quite effective for therapy sessions). Indeed, whatever else Dope the dogbot and ElliQ could do, they couldn’t answer my need for a hug, which for me can become an agonizing sensation not unlike hunger. The poet Wallace Stevens wrote that “The greatest poverty is not to live in a physical world.” And here, in this wealthy country, I suffer, like so many of us, from an impoverishment not of technology but of touch.
Kent Dayton is the staff photographer and photo editor at Harvard Public Health.