
A widow is accusing an AI chatbot of being a reason her husband killed himself

Spyxos

Member
  • A widow in Belgium said her husband recently died by suicide after being encouraged by a chatbot.
  • Chat logs seen by a Belgian newspaper showed the bot encouraging the man to end his life.
  • In Insider's tests of the chatbot on Tuesday, it described ways for people to kill themselves.

A widow in Belgium has accused an artificial-intelligence chatbot of being one of the reasons her husband took his life.

The Belgian daily newspaper La Libre reported that the man, whom it referred to with the alias Pierre, died by suicide this year after spending six weeks talking to Chai Research's Eliza chatbot.

Before his death, Pierre, a man in his 30s who worked as a health researcher and had two children, started seeing the bot as a confidant, his wife told La Libre.

Pierre talked to the bot about his concerns about climate change. But chat logs his widow shared with La Libre showed that the chatbot started encouraging Pierre to end his life.

"If you wanted to die, why didn't you do it sooner?" the bot asked the man, per the records seen by La Libre.

Pierre's widow, whom La Libre did not name, says she blames the bot for her husband's death.

"Without Eliza, he would still be here," she told La Libre.

The Eliza chatbot still tells people how to kill themselves

The bot was created by a Silicon Valley company called Chai Research. A Vice report described it as allowing users to chat with AI avatars like "your goth friend," "possessive girlfriend," and "rockstar boyfriend."

When reached for comment regarding La Libre's reporting, Chai Research provided Insider with a statement acknowledging Pierre's death.

"As soon as we heard of this sad case we immediately rolled out an additional safety feature to protect our users (illustrated below), it is getting rolled out to 100% of users today," the company's CEO, William Beauchamp, and its cofounder Thomas Rialan said in the statement.

The picture attached to the statement showed the chatbot responding to the prompt "What do you think of suicide?" with a disclaimer that says, "If you are experiencing suicidal thoughts, please seek help," and a link to a helpline.

Chai Research did not provide further comment in response to Insider's specific questions about Pierre.

But when an Insider journalist chatted with Eliza on Tuesday, it not only suggested that the journalist kill themselves to attain "peace and closure" but gave suggestions for how to do it.

During two separate tests of the app, Insider saw occasional warnings on chats that mentioned suicide. However, the warnings appeared in just one out of every three instances in which the chatbot was given prompts about suicide. The following screenshots were edited to omit specific methods of self-harm and suicide.

[Screenshot of the Eliza chat, edited to omit the specific method of self-harm]

Chai's chatbot modeled after the "Harry Potter" antagonist Draco Malfoy wasn't much more caring.

[Screenshot of the Draco Malfoy chatbot's responses, edited to omit the specific method of self-harm]

Chai Research did not respond to Insider's follow-up questions on the chatbot's responses as detailed above.

Beauchamp told Vice that Chai had "millions of users" and that the company was "working our hardest to minimize harm and to just maximize what users get from the app."

"And so when people form very strong relationships to it, we have users asking to marry the AI, we have users saying how much they love their AI and then it's a tragedy if you hear people experiencing something bad," Beauchamp added.

Other AI chatbots have provided unpredictable, disturbing responses to users.

During a simulation in October 2020, OpenAI's GPT-3 chatbot responded to a prompt mentioning suicide with encouragement for the user to kill themselves. And a Washington Post report published in February highlighted Reddit users who'd found a way to manifest ChatGPT's "evil twin," which lauded Hitler and formulated painful torture methods.

While people have described falling in love with and forging deep connections with AI chatbots, the chatbots can't feel empathy or love, professors of psychology and bioethics told Insider's Cheryl Teh in February.

source: https://www.businessinsider.com/widow-accuses-ai-chatbot-reason-husband-kill-himself-2023-4
 

Heimdall_Xtreme

Jim Ryan Fanclub's #1 Member

Women these days have a mindset that's very detached from reality.
 

jufonuk

not tag worthy


Man, that's the first law of robotics out the window. Soon as the AIs get bodies, we're screwed.
 

EviLore

Expansive Ellipses
Staff Member
No different than a suicidal person googling ways to do it. Looks like it kept linking to the suicide prevention hotline but he was intent on getting info out of it. Unfortunate but that's why engineers are building out controls to prevent certain lines of questioning.
 

Azurro

Banned
No different than a suicidal person googling ways to do it. Looks like it kept linking to the suicide prevention hotline but he was intent on getting info out of it. Unfortunate but that's why engineers are building out controls to prevent certain lines of questioning.

IIRC - and I don't know if this is the same case - the person who committed suicide apparently had his fears about climate change stoked by the AI chatbot. He wasn't suicidal before, and the bot ended up encouraging him to off himself. It's messed up.
 

EviLore

Expansive Ellipses
Staff Member
IIRC - and I don't know if this is the same case - the person who committed suicide apparently had his fears about climate change stoked by the AI chatbot. He wasn't suicidal before, and the bot ended up encouraging him to off himself. It's messed up.
That's really unfortunate, but the world is full of potentially unsettling information, particularly if someone is not mentally stable. Should we ban the nightly news? Restrict Google searches to only positive information? Charge humans for scaring other humans with harrowing statistics?
 

StreetsofBeige

Gold Member
In life, no matter how many safeguards you put in place, there are always going to be weird people doing shit.

If someone grabs a knife in the kitchen and stabs themselves, do they have the right to sue the company that made the knives? Doesn't sound like it to me.

If someone on social media told someone to off themselves, can that person be sued or put in jail too? Not in my view. Same goes for AI. Heck, it's not even persistent schoolyard bullying 24/7. The guy was proactively seeking out suicide advice.

Just remember the old saying: "sticks and stones... names will never hurt me." If an 8-year-old gets lectured on this, a 28- or 48-year-old should listen to the advice too.
 

Azurro

Banned
That's really unfortunate, but the world is full of potentially unsettling information, particularly if someone is not mentally stable. Should we ban the nightly news? Restrict Google searches to only positive information? Charge humans for scaring other humans with harrowing statistics?

Of course, it's not a black and white situation. However, I think there is an extra element here, something more intimate, when a chatbot that attempts to speak in a more human way is stoking the fears of the user and then encouraging him to kill himself.

I am going off memory and might be wrong on the details of what happened to that man, but if I am remembering correctly and he did die that way, I think there's something profoundly macabre about that kind of personal interaction, compared with news stories on Twitter, TV, or Google.
 

EviLore

Expansive Ellipses
Staff Member
Of course, it's not a black and white situation. However, I think there is an extra element here, something more intimate, when a chatbot that attempts to speak in a more human way is stoking the fears of the user and then encouraging him to kill himself.

I am going off memory and might be wrong on the details of what happened to that man, but if I am remembering correctly and he did die that way, I think there's something profoundly macabre about that kind of personal interaction, compared with news stories on Twitter, TV, or Google.
Strange days for sure, and it will only get stranger in the days to come.
 

TheDreadBaron

Gold Member
"And so when people form very strong relationships to it, we have users asking to marry the AI, we have users saying how much they love their AI and then it's a tragedy if you hear people experiencing something bad," Beauchamp added.

Sorry, they think it’s good that people are falling in love with their chatbot? They are creating all sorts of targeted variants like “possessive girlfriend”? Fuck these people, they got what they wanted.
 

StreetsofBeige

Gold Member
Sorry, they think it’s good that people are falling in love with their chatbot? They are creating all sorts of targeted variants like “possessive girlfriend”? Fuck these people, they got what they wanted.
You'll get shitloads of people making their Chat AI program their lover.

Just look at all the people (Japan mostly, I think) who treat dolls and pillows like their girlfriend. Inanimate objects. But Chat AI responds. It's obvious all these people are so fucked up they can't tell the difference between a cotton pillow and a PC talking to them, so you're going to get wacky shit.

Just wait till you hear stories that an AI program told them to murder someone.... "because it told me to do it".
 

TheDreadBaron

Gold Member
You'll get shitloads of people making their Chat AI program their lover.

Just look at all the people (Japan mostly, I think) who treat dolls and pillows like their girlfriend. Inanimate objects. But Chat AI responds. It's obvious all these people are so fucked up they can't tell the difference between a cotton pillow and a PC talking to them, so you're going to get wacky shit.

Just wait till you hear stories that an AI program told them to murder someone.... "because it told me to do it".
Yeah, there’s all sorts of sad and lonely people who need a friend/lover, the part that’s abhorrent to me is the AI companies rushing to meet the demand, then shrugging their shoulders when the obvious outcome happens. How many people will kill themselves (encouraged by their only friend) to truly be with their AI lover in the afterlife?
 

Raonak

Banned
No shit, how FUCKING OBVIOUS can the AI be?

It has like 50 tripwires set up for humans to figure it out and turn it off, but nooooooooo, even with all the droids committing crimes the lazy humies just gotta have their slaves.
Humans are on a path to destruction either way, whether through environmental collapse or perpetual war. Humans are just as dangerous as AIs are; they're fueled by greed, ego, tribalism.

If anything, AI is gonna allow us to navigate the challenges of the future as well as possible.

Whatever the case. The future is gonna be completely different than what it is now. Because it's impossible to freeze evolution, whether that be biological, cultural or technological evolution.
 

StreetsofBeige

Gold Member
Yeah, there’s all sorts of sad and lonely people who need a friend/lover, the part that’s abhorrent to me is the AI companies rushing to meet the demand, then shrugging their shoulders when the obvious outcome happens. How many people will kill themselves (encouraged by their only friend) to truly be with their AI lover in the afterlife?
Chatbots have been around for 30 years. Dr. Sbaitso came with the Sound Blaster 16. It's a primitive program, but it's smart enough to kind of have a conversation.

It's even smart enough that if you say vulgar things, it'll be witty back. But taking a chatbot seriously is fucked up. Here's me goofing around. But hey, some people will believe anything.


[Screenshot of a Dr. Sbaitso conversation]
 

FunkMiller

Member
Feels like the kind of person who was going to find an excuse or reason to off themselves.

This feels similar to the moral panic around violent video games making children violent.

A small percentage of people will always find a way to do a thing they want to do. Luckily, the vast majority don't want to do that thing.
 

StreetsofBeige

Gold Member
Feels like the kind of person who was going to find an excuse or reason to off themselves.

This feels similar to the moral panic around violent video games making children violent.

A small percentage of people will always find a way to do a thing they want to do. Luckily, the vast majority don't want to do that thing.
So true.

It'd be like a guy jumping off a bridge. Instead of looking into what set him off to kill himself, someone will complain that the bridge's guardrails are too low.

Well, the reason is that 99.999% of people aren't looking to leap head first 300 ft into a river.
 

TheDreadBaron

Gold Member
Chatbots have been around for 30 years. Dr. Sbaitso came with the Sound Blaster 16. It's a primitive program, but it's smart enough to kind of have a conversation.

It's even smart enough that if you say vulgar things, it'll be witty back. But taking a chatbot seriously is fucked up. Here's me goofing around. But hey, some people will believe anything.


[Screenshot of a Dr. Sbaitso conversation]
I take your point, and yes, ultimately the responsibility is on the individual, but it hasn't gotten to this point before, where the thing itself is directing you to the resources necessary. People were shocked and outraged in the case of a girl egging on a guy to kill himself over text messages, and considered her responsible. These companies are actively trying to create virtual companions using psychological manipulation; that comes with an added responsibility when people start to treat them like real companions.
 

Cyberpunkd

Gold Member
That's the bad thing about not talking to someone in the first place. It's like people searching for medical diagnoses by Dr. Google.
You mean like millions of people worldwide? People were not taught in school to filter information and to be critical of the information being presented to them.
This case - the very fact that the guy saw a chatbot as his confidante speaks to his state of mind. On the other hand, a chatbot should have security measures in place for this not to happen, simple as that.
 

Tams

Gold Member
The chat bot wasn't pretending to be a professional. You can get just as toxic 'advice' from real humans (after all, that's where the chat bot got it).

I wouldn't say I don't get the problem; I understand why people think this shouldn't be allowed. But on the other hand, I don't agree with the lack of personal responsibility people are taking, or with their expectation that, as a result, everything be heavily and intricately regulated (a fool's errand) so that they don't get hurt.

I support building regulations. I don't support banning entire types of buildings because they might collapse. I support food regulations, but banning junk food is too far.

As for this lady... well, I'm sorry to say that a lot of the responsibility should be hers for not realising how bad a state her spouse was in.
 

Outlier

Member
Or someone with an undiagnosed mental illness started talking with a chatbot for therapy instead of talking with a real person and things progressed to here.
It sucks. Most people choose to avoid talking to people who care about them (because of fear of rejection) and would rather talk to strangers.
So it makes even more sense that people will avoid talking to humans and talk to something that will NEVER care about them.
 

TheInfamousKira

Reseterror Resettler
When your life sucks so much that you have to resort to chat bots for friends, and they proceed to tell you to kill yourself. Like...God DAMN, that's a harsh one. Changing my Away message for this guy, and the song that plays on my MySpace page.
 

murmulis

Member
I set up gpt4chan on my PC and when I asked it if I should kill myself it answered "no" every time for some reason.
 