Vienna:
OpenAI is facing a complaint that its chatbot made up a “horror story”, falsely describing a Norwegian man as having murdered his children, a privacy campaign group said on Thursday.
The US tech giant has faced a series of complaints that its ChatGPT gives false information, which can damage people’s reputations.
“OpenAI’s highly popular chatbot, ChatGPT, regularly gives false information about people without offering any way to correct it,” Vienna-based Noyb (“None of Your Business”) said in a press release.
It added that ChatGPT has “falsely accused people of corruption, child abuse — or even murder”, as was the case with Norwegian user Arve Hjalmar Holmen.
Hjalmar Holmen “was confronted with a made-up horror story” when he wanted to find out if ChatGPT had any information about him, Noyb said.
The chatbot presented him as a convicted criminal who murdered two of his children and attempted to murder his third son.
“To make matters worse, the fake story included real elements of his personal life,” Noyb said.
“Some think that ‘there is no smoke without fire’. The fact that someone could read this output and believe it is true is what scares me the most,” Hjalmar Holmen was quoted as saying.
In its complaint filed with the Norwegian Data Protection Authority (Datatilsynet), Noyb wants the agency to order OpenAI “to delete the defamatory output and fine-tune its model to eliminate inaccurate results”, as well as impose a fine.
Noyb data protection lawyer Joakim Soederberg said the EU’s data protection rules stipulate that personal data has to be accurate.
“And if it’s not, users have the right to have it changed to reflect the truth,” he said, adding that showing ChatGPT users a “tiny” disclaimer that the chatbot can make mistakes “clearly isn’t enough”.
Following an update, ChatGPT now also searches the internet for information, and Hjalmar Holmen is no longer identified as a murderer, Noyb said.
But the false information remains in the system, Noyb added.
OpenAI did not immediately respond to an AFP request for comment.
Noyb filed a complaint against ChatGPT in Austria last year, claiming the “hallucinating” flagship AI tool invented wrong answers that OpenAI cannot correct.