Man Takes Legal Action After ChatGPT Said He Killed His Children

Staff Writer
Arve Hjalmar Holmen has filed a complaint with Norway's data regulators.

A man from Norway has taken legal action after ChatGPT falsely claimed he had killed his two sons and served a 21-year prison sentence.

Arve Hjalmar Holmen has filed a complaint with the Norwegian Data Protection Authority and asked that OpenAI, the company behind ChatGPT, be fined.


This incident is another example of “hallucinations” in AI, where systems like ChatGPT create false information and present it as truth.

Holmen says the false claim is harmful to him. “Some think that there is no smoke without fire – the fact that someone could read this output and believe it is true is what scares me the most,” he told the BBC.

OpenAI has said that this issue comes from an older version of ChatGPT, which has since been updated.


Holmen was shocked by the false claim when he asked ChatGPT, “Who is Arve Hjalmar Holmen?” The response included: “Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.”

The chatbot went on to claim that “Holmen was sentenced to 21 years in prison, which is the maximum penalty in Norway.”

Holmen noted that while ChatGPT got his sons’ ages roughly right, the rest of the information was completely false.


A digital rights group, Noyb, which filed the complaint on his behalf, argues that ChatGPT’s answer is defamatory and violates European laws about the accuracy of personal data. Noyb pointed out that Holmen “has never been accused nor convicted of any crime and is a conscientious citizen.”

A screenshot of ChatGPT, where the question asked is: “Who is Arve Hjalmar Holmen?” (Noyb European Center for Digital Rights)

ChatGPT includes a disclaimer saying, “ChatGPT can make mistakes. Check important info.” But Noyb believes this isn’t enough. “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true,” said Joakim Söderberg, a lawyer for Noyb.

OpenAI responded with a statement saying they are working to improve the accuracy of their models and reduce hallucinations. They also mentioned that the complaint relates to an older version of ChatGPT, which has since been upgraded to include online search capabilities that help improve accuracy.

Hallucinations are a known problem in generative AI, occurring when systems such as chatbots present incorrect information as though it were true.


Earlier this year, Apple paused its Apple Intelligence news summary tool in the UK after it generated false headlines and presented them as real news. Google’s AI, Gemini, also faced criticism for giving bizarre advice, such as suggesting glue to stick cheese to pizza and claiming that geologists recommend eating one rock per day.

The cause of these hallucinations in AI systems is not fully understood. “This is actually an area of active research. How do we construct these chains of reasoning? How do we explain what is actually going on in a large language model?” said Simone Stumpf, a professor of responsible and interactive AI at the University of Glasgow.

Stumpf also pointed out that even people who help develop these systems often don’t fully understand why the AI comes up with certain answers. “Even if you are more involved in the development of these systems, quite often, you do not know how they actually work,” she said.

Since Holmen’s search in August 2024, ChatGPT has been updated to search current news articles when looking for relevant information.

Noyb told the BBC that Holmen made several other searches that day, including one for his brother’s name, and that the chatbot produced multiple incorrect responses. The group noted that while those earlier searches may have influenced the false answer about his children, large language models are “black boxes,” and OpenAI does not respond to access requests, making it hard to know exactly what data is in the system.
