Hey Devs, happy Friday 🦥💚
As always, it's been an eventful week in the tech industry. We've got a lot to talk about! And today I want to discuss...the ChatGPT lawsuits 💼
In case you missed it, OpenAI has been hit with its first defamation lawsuit for...hallucinating. Or, as the plaintiff might put it, generating false and defamatory information about them.
The complaint states that a journalist, Fred Riehl, asked ChatGPT to summarize a real federal court case by linking to an online PDF. ChatGPT responded by creating a false summary of the case that was detailed and convincing but wrong in several respects. The summary contained some factually correct information, but it also included false allegations against the plaintiff, radio host Mark Walters: it claimed Walters was believed to have misappropriated funds "in excess of $5,000,000" from a gun rights non-profit called the Second Amendment Foundation. Walters has never been accused of any such thing.
In the US, lawsuits are a common method for pushing legislative changes and reform.
So, what do you think will happen here? Share your thoughts in the comments and let's discuss!
Want to submit a question for discussion, or even ask for advice? Visit Sloan's Inbox! You can choose to remain anonymous.
Top comments (6)
I hope at some point people will realise how AI works; with that knowledge, they'd understand that AI always has an answer, even when there's no basis for it. And that's what happened here: ChatGPT blurted out the most fitting-sounding nonsense it could come up with, because it isn't capable of saying "I have no idea" unless it's explicitly programmed to do so.
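For anyone curious what "explicitly programmed to do so" could look like in practice, here's a minimal sketch assuming the official OpenAI Python SDK. The model name and prompt wording are just illustrative; a system prompt like this nudges the model toward admitting uncertainty, but it doesn't guarantee it.

```python
# Minimal sketch: nudging the model to admit uncertainty via a system prompt.
# Assumes the official OpenAI Python SDK; model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "If you are not certain an answer is grounded in the provided "
                "material, reply exactly: 'I have no idea.' Do not guess."
            ),
        },
        {"role": "user", "content": "Summarize the case in the linked PDF."},
    ],
    temperature=0,  # lower temperature reduces (but does not eliminate) made-up detail
)

print(response.choices[0].message.content)
```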
I can understand Walters being upset about the false information. But the false info was never published: the reporter did his due diligence, researched further, and didn't publish it.
If ChatGPT gives the same false information to everyone who asks about Walters, he may have a case, since it could damage his image. ChatGPT needs to be clearer that its results can be fabricated.
I see what you mean, but I wouldn't go that far. Intelligence is a far more sophisticated concept than what we call AI, even with all the recent advancements, I think. While AI is capable of storing memories and making assumptions based on them, it still lacks the analytical processing and emotional guidance of human intelligence.
Though please correct me if I misunderstood your comment😅
ChatGPT has a pretty clear disclaimer that it may produce incorrect information. It's like suing OpenAI over some incorrect code written by ChatGPT. There would be no point in blaming them even if it had damaged the reporter's reputation.
So it seems like someone is just playing games here.
Then I agree wholeheartedly. Apologies, I misunderstood you originally :)