
What AI grief bots can teach us about supporting the grieving – Healthcare Blog

Melissa Lunardini

The rise of digital grief support

We are witnessing a shift in how people deal with one of the most universal human experiences: grief. In recent years, several companies have emerged offering grief-related technologies that let users interact with AI versions of their deceased relatives, while others turn to general-purpose AI platforms for grief support.

This isn’t just curiosity; it’s a response to a real lack of human connection and support. The rise of grief-centric AI reveals an uncomfortable truth about our society: people are turning to machines because they aren’t getting what they need from the humans around them.

Why people choose digital over human support

The grief-tech industry is growing fast; an MIT Technology Review report notes that “at least six companies in China” now offer AI services that let users interact with digital characters of their deceased relatives. This digital migration did not happen in a vacuum. It is a direct response to the failures of our current support systems:

  • Social discomfort: Our grief-averse society struggles with how to respond to loss. Friends and family often disappear within weeks, leaving the bereaved isolated just when support is most needed, especially in the months that follow.
  • Professional barriers: Traditional grief counseling is expensive and often involves long wait times. Many therapists lack formal grief training; some graduate programs include no grief-related education at all. This makes it difficult for people to access qualified support when they need it most.
  • Fear of judgment: People often feel safer sharing intimate grief experiences with AI than with other humans, who may judge, offer unsolicited advice, or grow uncomfortable with the intensity of the emotion.

The Eliza effect

To understand why grief-focused AI succeeds, we have to look back to 1966, when the first AI companion program, Eliza, was developed. Created by Joseph Weizenbaum at MIT, Eliza simulated conversation using simple pattern matching, most famously in a script designed to mimic a Rogerian, person-centered psychotherapist.

Rogerian therapy was well suited to the experiment because it relies heavily on reflecting back what a person says. The program’s role was simple: mirror the speaker’s statements and ask open questions such as “How does this make you feel?” or “Tell me more about this.” Weizenbaum was surprised to find that people formed deep emotional connections with this simple program and confided their most intimate thoughts and feelings to it. This phenomenon came to be known as the “Eliza effect.”
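To make that mechanism concrete, here is a minimal sketch of Eliza-style reflection, written in Python for illustration. The patterns, pronoun swaps, and canned questions below are assumptions for this example, not Weizenbaum’s original script.

```python
import random
import re

# Minimal Eliza-style reflection: match a pattern, swap first/second-person
# pronouns in the captured fragment, and hand it back as an open question.
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i miss (.*)", ["Tell me more about {0}.", "What do you miss most about {0}?"]),
    (r"i am (.*)", ["How does being {0} make you feel?"]),
]

FALLBACKS = ["Tell me more about that.", "How does that make you feel?"]


def reflect(fragment: str) -> str:
    """Swap pronouns so the reply refers back to the speaker."""
    return " ".join(PRONOUN_SWAPS.get(word, word) for word in fragment.lower().split())


def respond(statement: str) -> str:
    """Return a reflective question for the given statement."""
    text = statement.lower().strip(" .!?")
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(replies).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)


print(respond("I miss my mom"))   # e.g. "Tell me more about your mom."
print(respond("I feel so lost"))  # e.g. "Why do you feel so lost?"
```

Even this toy version captures why the approach worked: the program never judges, never changes the subject, and always turns attention back to the speaker.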

Eliza worked not because it was sophisticated, but because it embodied the core principles of effective emotional support, principles that we as a society can learn from (or, in some cases, relearn).

What AI grief bots get right

Modern grief-centric AI succeeds for the same reasons Eliza did, but with far greater capability. Here is what AI gets right:

  • Non-judgmental presence: AI does not retreat from the intensity of grief. It won’t tell you to “stay strong,” suggest you should be “over it by now,” or change the subject when the pain becomes uncomfortable. It simply witnesses and reflects.
  • Unconditional availability: Grief doesn’t keep business hours. It strikes at 3 a.m. on a Tuesday, at family gatherings, in the middle of a workday or a grocery run. AI is available 24/7, offering instant support and quickly normalizing common grief experiences. Someone might ask, “I just saw someone who looked like my mom at the grocery store. Am I going crazy?” and receive a validating response: “You’re not going crazy at all. This is actually a very common experience when you’re grieving someone close to you. Your brain is wired to recognize familiar patterns, especially the faces of people who are important to you… it’s completely normal. Your mind is still processing your loss, and these moments of recognition show how deeply your mom stays with you in memory.” That kind of simple, on-demand validation helps a grieving person feel normal and understood right away.
  • Pure focus on your grief: AI won’t hijack your story to share its own experiences. It doesn’t offer advice about what you “should” do or tire of hearing the same story repeatedly. Its focus stays entirely on you.
  • Validation without an agenda: Unlike humans who may be eager to make you feel better (often for their own comfort), AI validates emotions without trying to fix or change them. It normalizes grief without pathologizing it.
  • Privacy and safety: AI holds space, in private, for the “good, bad and ugly” parts of grief. There’s no fear of social judgment, no worry about burdening someone, no risk of saying the “wrong” thing.
  • No strings attached: AI doesn’t require emotional reciprocity. It never needs comforting in return, never tires of your sorrow, and never gives up on you if your grief lasts longer than expected.

AI can do it, but humans can do it better

According to a 2025 Harvard Business Review article, the top use of AI in 2025 is therapy and companionship.

This tells us there is a huge gap in how we show up for each other when life gets hard. Yet no matter how precise and practical a grief bot may be, almost all of us would rather receive care and understanding from friends, family, colleagues, and community than chat with an AI.

So what can we learn from AI, and what uniquely human abilities can AI never replicate?

  • AI can always show up, but humans can show up in context: AI is available 24/7 and can validate in the moment. But humans bring shared history. You can text “thinking of you” on a loved one’s birthday or check in during the holidays.
  • AI can follow a person’s lead, but humans can read between the lines: AI reflects what people share and asks open-ended questions. Humans, though, can sense when “I’m fine” doesn’t mean “I’m fine” and offer more support.
  • AI can welcome repetition, but humans can weave stories together: AI can listen to the same story over and over without complaint. But humans can notice new details each time and how the telling changes over time. You can genuinely say, “It’s been a while since we last talked about your dad. I’d love to hear how he’s been on your mind lately.”
  • AI can offer virtual presence, but humans can offer practical presence: AI provides instant support through conversation. But humans can actually say, “I’m going to the grocery store on Thursday; what can I pick up for you?”
  • AI can acknowledge the loss, but humans can honor the whole person: AI’s validation matters. But humans can keep a person’s memory alive by sharing memories and speaking their name naturally: “I remember Sarah loved spicy food. I bet she would have loved this restaurant.”
  • AI can respond when asked, but humans can anticipate the hard grief days: AI responds when someone reaches out. But humans can offer preemptive support: “I know next week is your first Mother’s Day without your mom. I’m keeping my schedule open, just in case.”
  • AI can offer comfort through words, but humans can offer physical presence: AI validates feelings through its responses. But humans can sit together in silence, offer a hug that lingers, or simply say, “I don’t have the words, but I’m here.”

The opportunity

We are so hungry for empathic responses and presence that we will now accept them from tools that cannot truly empathize. But what if, instead of surrendering to digital surrogates, we used them as a mirror to see what we are failing to offer one another?

The lesson is not that AI will replace human connection, but that AI is showing us (or reminding us) what human connection should look like. Every feature that makes grief-centric AI effective is something humans can do better, with true empathy, shared experience, and real, face-to-face care.

We are experiencing a grief literacy crisis. Our discomfort with death and loss has created a society where grieving people feel isolated and misunderstood. But these digital grief companions offer us a blueprint for change.

The question is: Will we be willing to learn from them?

Dr. Melissa Lunardini is Chief Clinical Officer at Help Texts, where she oversees the clinical voice of multilingual grief support delivered worldwide via text message.
