Online only, so far

Who gets sued when a chatbot defames somebody?

YOU HAVE WRITTEN a story that contains some claims that turn out to be inaccurate. A letter from libel lawyers inevitably turns up. You sprint to apologise and set the record straight, hopefully minimising the damage and avoiding an expensive financial claim.

A chatbot writes a story that contains an inaccurate claim - a claim that is "likely to cause serious harm to the reputation" of a person or company, in the words of the UK Defamation Act 2013. When queried on these errors, the chatbot invents more falsehoods that compound the defamation.

Can the aggrieved party do anything if this is published? Yes. Do keep reading.

Can you do anything against the chatbot? Is there anybody for lawyers to pursue? You can't take a software program to court, nor put a bunch of electrons in the dock.

At least not yet.

Lawyers are, however, ever-inventive in trying to stand up their belief that there must be someone to sue. They are looking hard, for example, at who gets to be sued when a self-driving car fails to exercise due care and attention.

Plenty has been written in recent months about artificial intelligence (AI)-based chatbots such as ChatGPT, but most of this has been about the looming threat they present to certain creative writing jobs. There are concerns too about plagiarism in the education sector and copyright theft of the words and images used to "train" them. And, of course, they have been found to make... mistakes.

Is it so serious when they do? Some of the mistakes are risible; besides, the internet is full of errors, fake news and April Fools. But what if the consequences are serious?

Dark

It's darkly amusing when ChatGPT tells you that you are dead and tries to prove it by inventing a non-working web link to a non-existent Guardian obituary. It's not so funny when it tells your colleagues that you are a sex offender and cites as evidence articles (again, non-existent) in three major newspapers.

Precisely why ChatGPT "hallucinates" or confabulates in this way, not just delivering random information that isn't true but doubling down when challenged and generating "proof" in the form of authentic-looking web links that actually lead nowhere, remains a mystery for AI developers - or philosophers - to solve.

Given the job losses currently being experienced in IT industries - yes, including AI - an answer may remain out of our reach for a while yet. Those still working at the AI coalface complain that their employers are putting them under immense pressure to bring products online, even in an unfinished state. Being first to market will always trump all other considerations. Human collateral damage? Well, that's what the disclaimer is for.

In the UK, defamation law is supposed to be clear on one thing: if someone believes they have been defamed in a publication, they can try to sue that publication. Unless they are particularly small-minded, or are seeking to intimidate investigative journalists, there is nothing to be gained by targeting a writer directly.

ChatGPT is not a person that you can intimidate, nor is it - at least, not yet that we know of - writing articles for media publications that you can sue.

And remember that someone who is defamed can sue those who distribute and (if old-school) print a publication. UK defamation law doesn't much care about who or what did the typing: the 2013 Act barely mentions authors, but deals with "publication", which includes clicking "send".

Who or what does what?

The way chatbots work is that an operator asks the system a question and receives a convincing natural-language answer. So even if ChatGPT is telling bare-faced porkies to an operator - such as falsely stating that a colleague has been convicted of sexual assault - a lawyer would have difficulty convincing a judge that the resulting harm, when the lie is read by only one person, would be serious enough to merit court time.

But if you were to publish ChatGPT's lie you would most certainly be liable to be sued. Again, the question of who or what wrote the libel would be philosophically interesting but legally irrelevant.

Those in positions of authority are beginning to sit up and take notice. Despite the private sector rush into AI adoption - primarily so that it can cut staffing - public sector and governmental organisations are taking a more cautious path.

At the end of March, the southern French city of Montpellier was among the first to ban the use of ChatGPT by town hall employees across all its sites. The mayor's head of digital, Manu Reynaud, said everybody else seemed to be deliberately overlooking the technology's obvious potential for disaster, likening ChatGPT to the meteor in the film Don't Look Up.

Days later, the Italian government's privacy enforcement body imposed a "temporary limitation" on ChatGPT collecting and processing the personal data of its citizens, citing violations of the General Data Protection Regulation (GDPR) and Italy's own data protection code. The chatbot's developer, OpenAI, faces significant fines if it fails to address these issues immediately.

Canada's Office of the Privacy Commissioner is also investigating the system following complaints that it was "collecting, using and disclosing" personal information without consent. Other countries may yet be spooked into action, at least regarding the data.

And so to court

An Australian mayor, Brian Hood, finally stuck his neck out last week by playing the libel card. He said ChatGPT had falsely claimed that he'd previously been in prison for bribery. His lawyers are now trying to bring a defamation action under Australian law against OpenAI - a company in which Microsoft has invested at least $13 billion.

Hood was, the BBC reports, in fact a whistleblower about bribery at a company called Securency. It reports that journalists were "able to confirm Mr Hood's claims by asking the publicly available version of ChatGPT on OpenAI's website about the role he had in the Securency scandal" - so we appear to be looking at a case concerning the direct, unpublished output of the chatbot.

Commenting on this action in The Times, a US defamation lawyer reckoned that suing OpenAI would be a challenge, not least because legal systems are not up to speed with the technology. One devious twist in the interpretation of libel law could mean that a researcher who types in the initial question may be ruled equally responsible with whoever publishes the chatbot's defamatory response.

Update, 2 July 2023: Hood has not proceeded with court action.

Take care out there

Evidently, if you envisage using ChatGPT to research facts for a story, you would be well advised to triple-check everything it tells you against independent sources. The risk is not far-fetched: AI chatbot interactions are likely to replace conventional search engine results sooner than you might think. And trusting information presented by a persuasive and articulate AI chatbot would be no more reliable than copying and pasting from Wikipedia in the early 2000s - possibly less so.

The problem will then be that false information quickly seeps out of chatbots and into the internet at large.

The next thing you know, a crowd carrying torches and pitchforks is breaking an innocent neighbour's windows and painting "DIE PAEDO" on his garage door. Who is responsible? Who should his lawyers target to seek redress?

Let it not be you.