If governments can’t handle it, let’s deal with AI ourselves

Will artificial intelligence (AI) steal my work? Will AI steal my job? Will there still be a role for humans in AI-generated media content?

Photo taken at the October Branch meeting showing the panel speakers and a number of those attending in person plus a computer display on the wall showing those attending via Zoom.

Our panel of speakers at the front of October's hybrid Branch meeting (note the Zoom attendees on the computer display at the back)

The London Freelance Branch October meeting invited a panel of highly experienced journalists, each of whom has been analysing the rise of generative AI chatbots, to share their outlook on how AI is going to impact the journalism trade.

The panel included Dominic Ponsford, editor-in-chief of Press Gazette; Keith Fray, manager in the data and visualisation team at the Financial Times; and LFB's own Mike Holderness, science writer and editor, trade unionist and renowned "copyright guru" within the NUJ.

Dominic Ponsford

Photo of Dominic Ponsford, editor-in-chief of Press Gazette.

"A year ago, as far as Press Gazette is concerned, Chat GPT wasn't on my radar at all. This year, it feels it's all we've written about."

There's nothing new in using technology to automate the compiling of certain kinds of news journalism, according to Press Gazette editor-in-chief Dominic Ponsford. Reuters has been using it for financial stories for quite a few years, he said. It's really just a way of covering certain regular topics in a standard fashion, letting you create a bunch of stories which follow a very similar template. "It's certainly not really a threat to what humans do. It's just a way of doing things differently."

Generative AI chatbots, and ChatGPT in particular, are a different matter. "Generative AI, as I understand it, is a little bit like predictive text on your phone. It hoovers up everything you and I have written - probably without our permission - and a whole lot more besides, basically everything that's ever been published in the history of the internet, puts it into a kind of black box and then uses that to create a sort of predictive text machine on steroids.

"Initially when it came out, a lot of people looked at it and said 'goodness me, this is going to put paid to journalists!' It's freakishly good at stringing words and sentences together in a human-sounding way. But as we've gone on this year, we've begun to understand a bit more that Chat GPT doesn't deal in fact and wouldn't understand the idea of evidence.

"It's a sort of conjuring trick. It does what it's told and predicts what it thinks you want to hear. It's for that reason that almost no one, as far as I know, has tried to let it loose in the wild to do actual reporting. You'd have to be completely mad to do that."

That's not to say that some publishers haven't been thinking about it, Dominic conceded. But whenever people have asked chatbots to write content that looks like news stories or news features, it just makes things up. So he did not believe that kind of journalism was under threat, as yet.

The big problem, he reckoned, was in the way newer versions of ChatGPT absorb current, daily news into their large language model (LLM) to help answer questions put to them.

"This is a major threat to us all, for a number of reasons. The first one is that it's stolen all our stuff. It didn't ask permission: it's just taking it. So there's a massive rights issue there.

"It's exactly the same as when publishers did it when the internet first came out. They took stuff that was printed and republished it online without any discussion. Journalists need to look at their contracts and ask: 'Have I ever given permission for my stuff to feed this AI training model?' And if not, ask: 'Where's my cheque?'

"My other concern is that it's part of a trend. If you follow a football match on a Saturday afternoon on Google, for example, it will tell you who's had possession and aggregates comments from different places. You never need to click off to a publisher. Google's little AI bots pull it all together.

"The concern is that search engines are going to use AI to basically steal everyone's stuff and answer questions in a sort of conversational manner, which people like, but then without providing any links back. So the value exchange is completely shocked to hell, because they've taken the audience while that audience has never stepped foot in anything that looks like a publication. That's a massive threat.

"And I think the question is: is it is any of this even legal? You may remember the first iteration of the internet when people were stealing music without paying the creators. It took a few years for the industry and for the licensing authorities to catch up and shut it all down. And they said: 'Look, no, this isn't appropriate, we need to have a way where people pay.'

"So I think there's a question mark over whether the AI companies are basically just pirates who have sailed in to help themselves and ask us for permission later, which is the way the tech companies always work, and then you get to see who's got the best lawyers."

There is a possibility, Dominic warned, that the biggest publishers will negotiate a hefty fee from these tech companies for use of their material, while all the independent publishers will be ignored. Recognising the danger, he noted that roughly half of publishers have blocked ChatGPT from crawling their sites and taking their content. And everyone is wondering what to do next.
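For context, the blocking Dominic mentions is usually done in a site's robots.txt file. OpenAI documents its web crawler under the user agent GPTBot, so a publisher can ask it to stay out of an entire site with two lines (honouring the request is, of course, voluntary on the crawler's side):

    # robots.txt - ask OpenAI's GPTBot crawler to stay away from the whole site
    User-agent: GPTBot
    Disallow: /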

AI has a positive side too, he stressed, offering electronic assistance to journalists for certain tasks. Some publishers, he said, are using live chatbots to suggest headlines.

"It's almost like another voice at the table because it can string words together. It can create summaries of stories, which can be useful. And I've heard of journalists using it in the ideation stage. For example if you're thinking about pictures or other ideas, you could run some questions into Chat GPT and it might come up with some interesting suggestions - which you, as a human, would sanity check. So I think there's a lot of very good time-saving things it can do on the technical side.

"I don't think it can do journalism - certainly not yet."

Mike Holderness

Photo of Mike Holderness, science writer and editor of The Freelance.

"We would be able to tell when an AI was actually on a par with human intelligence when it went on strike for longer tea breaks – or demanded more respect at work."

Harking back to discussions he had in the 1990s with technologists and philosophers about the viability of an artificial intelligence, science writer and editor Mike Holderness said he had agreed in principle with people such as Douglas Hofstadter and Daniel Dennett that it should be possible to model the human brain in a computer system.

"But what we actually have now is nothing like artificial intelligence," he announced. "What we have is a marketing label for mechanical guessing; or 'machine learning' – but that is overstating the case of what the machines do.

"If you give one of these systems a text prompt then - based on everything it sucked up from the internet, and 'learned' - it will predict the next likely phrase in the answer; and then the next. Image generating systems are fundamentally logically similar but harder to describe.

"And as Dominic has noted, there are serious questions about using such systems anywhere in news production.

"I prefer to say they confabulate. If you've ever had the misfortune to know somebody who has dementia, you will know that they're quite likely to give you a detailed account of their trip to the shops to get the flowers on the table next to their bed, when you know damn well they haven't been out of the building for months. That's 'confabulation' as a technical term in medicine.

"People who are enthusiastic about AI prefer to call it hallucination because that sounds more like something that a human intelligence would do.

"Worse than that, when you ask a machine learning system to show its working, it's quite likely to make up references – for example, to academic papers that are credited to people who work in the relevant area and are supposed to have been published in relevant journals but do not in fact exist."

Mike warned that this propensity for making stuff up could become a serious problem when artificial intelligence takes over data searches completely. "I'm showing my age here, but once upon a time there was an internet search engine called AltaVista. You could ask it to search for all the web pages that contained the word 'fish' AND the word 'chips' but NOT the word 'peas'. And it would come back with a strictly logically correct answer to your question.

"Since then, Google's famous algorithm and its evolution have led to search engines that answer what "think" we want rather than what we asked for.

"We're already seeing artificial intelligence injected into the system. Fact-checking is becoming increasingly difficult. As a sub-editor, I suspect I may end up having to persuade my clients to invest in actual paper textbooks to refer to."

Mike acknowledged that he has been using one kind of machine learning system frequently for a long time – machine translation – which he described as "amazingly good" even though "German syntax still it seriously confuses". He also observed that machine translation tends to bowdlerise your text – that is, it likes to take out rude words. So he always assumes the translation is likely to be wrong, then double-checks it with an actual dictionary that's written and edited by humans.

Even so, as Mike admitted: "When I use machine translation I am of course taking work away from the human translator. I would not have been able economically to do much of the reporting I do if I had to pay for translation from the ground up. So the whole question about putting people out of a job is really quite complicated and gives human economists headaches."

Turning to the matter of copyright that Dominic had raised earlier, Mike confirmed that "these systems have hoovered up basically all the text and all the images they can find. And the approach they've used is described by [PayPal co-founder] Peter Thiel as 'Don't ask permission, ask forgiveness'. So they focus on making their profits in the interim between launching their technology and regulation catching up with them.

"They claim that they're working within exceptions to copyright rules. I'm sure they are not but proving that in court will be slow and expensive."

Mike said he was on tenterhooks to find out whether the picture agency Getty would be pursuing its case against Stability AI – the company behind the Stable Diffusion image generator – now that Getty is launching its own machine illustration system. The agency had claimed, in response to questions from Bloomberg, that the new system was not being trained on news images, to guard against so-called 'deep-fake' images used online to spread political disinformation.

"Protecting this will be a major challenge," he noted. "And will there be a case against Getty for having trained it on all the illustrative and stock photographs that it has, without explicit permission? Somebody would have to raise an awful lot of money to bring that case."

Finally, Mike turned to the question of who owns the output of a machine learning system.

"When the UK Government launched a consultation last year, it stressed this question. And the lawyer members of the British Copyright Council, on which I sit, were aware that this risked sending them down a rabbit hole of interesting legal theory.

"My conclusion is that no-one owns this output. UK law is unusual in making special provision for copyright in what it calls 'computer-generated works'. But this applies only to works 'generated by computer in circumstances such that there is no human author of the work'. If they are exempt from any obligation to name the author, this takes us to the question of how to specify that if machine learning is are used in journalism, it must be labeled as such.

"Certainly, the case that all uses of human works must be credited - that the so-called moral right of identification is honoured - is stronger than ever before," he summarised, admitting that plenty more work would be needed to press this case.

Keith Fray

Photo of Keith Fray from the Financial Times data visualisation team.

"We are dipping a toe into this. We don't know where this technology is going to go."

A manager in the FT Group's data and visual journalism team – as well as being deputy FOC of the FT Group NUJ chapel – Keith Fray explained that some of his colleagues had been testing the potential of AI. There were lessons to be learnt from the experience, he assured the meeting, while noting that everything was constantly in flux.

"This technology is expanding so rapidly that if anybody says they've got a good idea where it's going to be in a couple of years time, they're wrong.

"I think what scares people about this is that there's a lack of oversight; a lack of control. It's in the hands of private corporations, and what they are interested in is themselves. In an age of fake news, where trust in media is at an all-time low, this sort of technology can cause further doubt. It's almost like a sort of live experiment on media workers. Yes, we're right to be concerned."

Keith outlined the context of the NUJ chapel's dialogue with FT management over the use of AI. "We're a pretty good chapel. There are about 440 journalists in London on UK contracts, of which about 290 or so are NUJ members – roughly two-thirds. We're recognised by management, so they do talk to us.

"Like all media organisations, the FT is very interested in AI. Absolutely, they have to be involved because their competitors will be. But I don't think we're seeing the same thing as at Bild back in the summer – owned by Axel Springer, a sort of German equivalent of The Sun – that announced 200 job losses largely as a result of a move towards AI. Hopefully, we won't see anything like that in the UK press.

"The FT sees itself a premium product. The editor has written a note to readers to assure them that all the journalism that they pay (quite a lot) for is still going to be written by humans. The company policy, ethical policy, transparency, that sort of thing – it's all on the website.

"Because of our NUJ membership, because we are a force in the organisation, management actually have to engage with us. We have regular monthly consultations with management, and there's been a standing item [about AI] for most of this year. Any developments on the AI side, they have to address our concerns.

"I would say that AI's impact on things is double-edged; it's nuanced. It's not as if union members are unequivocally on the side of either innovation or conservatism. There are some areas, I think, where the new technology could help. But its impact is also contested.

"If I could leave you with one thought, it's that the strength of the union is very, very important for the fight.

Keith turned to the specifics of his colleagues' work with AI. "We had a little experiment in headline writing, with management at pains to stress that it was not intended to be brought in; we just wanted to see what the technology would do.

"In the end, it turned out to be a bit of fun. It was like: 'Could you spot the one written by the human?' Usually you could, but even if you could note some improvements, it wasn't that different from, say, attending a course on search engine optimisation, how to write headlines, or something like that.

"It's also been useful in the data side and analysis. Bulk analysis of text, data and images is useful; finding, honing stories, we've developed some things on that, although again, we're just dipping the toe so far. It's been very good on the comment moderation side, on the digital product, for readers' comments.

"Obviously there's a swear filter on there, but we were testing for racist, sexist, abusive language. Some of the people who work in that sort of thing said that the tests we were doing actually made their life a lot better. They don't have to wade through quite so much content. Some readers can be fairly unpleasant, so if we can weed out some of this stuff, it's good.

"Crucially, it's overseen by us. The people who are actually involved in this work are union members. As it happens, we have a relatively high level of membership with the data science skills to do it."

Keith agreed with the previous speakers that journalists should agitate – as should unions in general, not just the NUJ – on specific issues such as appropriate remuneration for intellectual property, or ensuring that AI is not used to put staff under surveillance for appraisals.

"There's all sorts of sort of stuff that we could do there. We're seemingly out of control of this; it's in the hands of private corporations; governmental don't seem to be up to the mark here. Actually, it's only the unions that can regulate AI.

"And with a great sense of timing, we actually published an opinion column by one of our columnists in the States, Rana Foroohar, with the headline: Workers could be the ones to regulate AI [note: behind paywall].

"She refers to the Writers Guild of America strike, noting that part of the deal, along with higher wages, minimums and residuals and stuff, the writers got something arguably even more important: new rules around how the entertainment industry can and can't use AI. The reason the deal is important is that these new rules aren't being imposed from the top-down, but rather from the bottom-up.

"And she widens this out to say that the people who are in a position to know how to regulate this industry are the people at the bottom. She mentions stuff that happened during the Tennessee Valley Authority in the 1930s, in the Roosevelt era; that the spread of electricity through the states, the rural South, was guided by the electricians' union.

"That bottom-up thing is very important. It informs how we should ask the union movement – not just the NUJ – should be responding to this sort of thing. Really, it's a wider issue: technology should be there to free us from drudgery and to make our lives better. All too often it's not.

"We don't want to be spending long hours on new tasks or doing the same amount of work with fewer people. We want safe, human-centred technology aimed at a better working life balance, and the only people who are going to be agitating for that are the unions.

Questions...

The panel discussion was opened up to the floor, with questions coming from those attending in-person and those joining the meeting via Zoom.

Q: Are news publishers coming together collectively to get on the front foot and push the government for protection for the industry?

PONSFORD: "I know that there's a few news publishers in the US who are looking at legal action against Chat GPT. But the PPA, the magazine publishers, are fairly relaxed and don't have much to say about it. The News Media Association, which represents the national industry, basically hasn't made up its mind yet. There's different views among publishers about it. I suspect some of them probably see some big productivity gains. So that's maybe why the publishers themselves have not come to considered view on it."

HOLDERNESS: "There is a problem with the newspaper lobby not being about to agree with each other. When we were lobbying over the so-called Digital Single Market Directive in Brussels, we found that we were dealing with two competing newspaper organisations in Europe, who were not unified in their approach. So we can't rely on the publishers in any sense. But on the other hand, we're looking at cases that take a very large sum of money to bring. I'm very interested in the US cases [Sarah Silverman, Getty]. Watch out for those closely."

Q: Surely AI is driven by technologists rather than profit-seeking corporations? Aren't we just trying to protect old working practices when we should be protecting those of the tech-savvy next generation?

PONSFORD: "I think, to a certain extent, there is a view among publishers that it's too early to call and that maybe let's see what the AI brings in. Technology has brought a lot of benefits as well as negatives [to this industry]. If you look at the sort of diverse publishing scene we have now, a lot of good things have come out of it."

FRAY: "That's a good point. I think it highlights that our argument is not with technology as such, it's with how it's likely to be used. People on one level may be asking how they can use the technology to do things as cheaply as possible; how they can use it to lose a regional newsroom, for example. On the union side, we're thinking this could make life a lot simpler; we could do our job better. And that's what I mean about AI being contested: it's not the technology itself, it's what we make of it."

HOLDERNESS: "This gets built in huge data centres and vast amounts of energy goes into training the systems. So the motivation of the people who build it, or the way they talk about it, may be one thing. But none of it happens without capital.

Q: Will inaccurate AI output itself become problematic in the longer term?

HOLDERNESS: "One of the more interesting problems with the whole technology is when it starts training itself on its own output, and heads off away from any notion of reason or fact checking. This is a big problem for the people building systems. In the world of radiation measurement, one of the most prized objects you can find is a bit of steel from a ship that sank before July 1945, because it hasn't got the radioactive contamination from nuclear explosions. Similarly, bits of information that sank before, let's say, 19 May 2020, will become increasingly valuable because we know that they're not 'trained', they're not AI outputs."

PONSFORD: "That final point on verification may be the most worrying thing, especially with photography. How are we going to be able to verify what's real? What isn't real? That's probably the biggest danger for us journalists – distinguishing deep-fakes, videos, photographs, fake news stories, you name it. Bad actors are going to be using this stuff. That's something we really need to be on our toes about."

FRAY: "Just to reiterate what I've said already, in the end we're the people who are going to be agitating for this to be used properly. It's a recruitment argument! If you're worried about this, join a union!"

HOLDERNESS: "One of the biggest problems is politicians treating it like it's magic, and instructing civil servants to believe the same. The current conviction that the great British AI industry can create millions of jobs is frankly an example of magical thinking. That needs to be corrected in a way that governments and even civil servants can think about it."

TIM GOPSILL (Branch chair): "I'm very pleased to hear the positive possibilities presented to us. I'm sure that our union and others will do better than they did in the last round of technology – but that wasn't our fault, that was the bosses! We do have the possibility to take matters into our own hands, and we should all do it."