Keeping public trust and our sanity

HARRIET MEYER told the October London Freelance Branch meeting about her workshops teaching how to apply “artificial intelligence” – AI – ethically and effectively. Harriet regularly runs workshops for Women in Journalism (WIJ) and has written about finance for most of her career.
She asked: “So how many of you are using AI at all in your journalism? A few?”
She said: “I started in journalism back in 2002, when it was all about the newspaper and online journalism barely existed. Social media didn’t exist, and the online version of the paper was in the background. So I’ve seen a lot of change in the industry.”
When AI landed, when ChatGPT landed, “I was on an editorial team, and I was tasked with learning about AI and how it could help us do our jobs. Having seen the digital revolution, moving from papers to online, I had a sense of the need to get on top of this, because it was going to change our industry. So, I started putting three to four hours every day into learning about AI.”
Harriet was recently named by LinkedIn as one of the top 12 “voices” on AI in Europe to follow, and offers a newsletter for media professionals aimed at overcoming “AI” hype and cutting through the jargon – what, she told us, “I’ve done for my entire career as a financial journalist”.
“My aim is to really empower you, because we’re up against the tech giants and the fear that comes with this – which I’ve felt very much as well. So I train news content and PR teams on how to use AI. It’s not about relying on AI to do your job; it’s about how it can help you enhance what you already do and not take away your skills.”
Every journalist faces a choice: “you can ignore AI, which I feel people are doing at the moment, and risk irrelevance in your career; or you can engage with it critically, using a journalist’s critical mindset.”
Mind the pitfalls
“There are so many pitfalls and ethical questions around this technology,” Harriet warned. “You can’t use it without being aware of these.
“First, there’s accuracy. AI models are renowned for their ‘hallucinations’: about a year ago they made stuff up 15 to 20 per cent of the time. That is very dangerous for journalism. I think it’s getting better as the models improve,” Harriet said.
“But you have to verify the facts. This is where you bring your judgement to it. You really must be aware of those ‘hallucinations’ – and of bias. These AI models are trained on vast amounts of data, including reddit.com comments and Facebook posts. The training data reflects human biases. So, it’s full of misinformation. If journalists use AI for first drafts of articles, for me that is quite terrifying.”
Harriet has “been freelance for a lot of my career and I feel strongly that freelances are left out in the cold when it comes to policies, like how are we meant to be using these tools? What’s allowed? I haven’t been sent an AI policy by anyone I work for. We need to push for them.
“Be aware that when you’re using a free AI tool, you are effectively paying them with the data you’re giving them. So don’t give them anything sensitive: no confidential documents or briefs that you wouldn’t be happy to be in the public domain. You can tell ChatGPT not to use your data for training. But I would say: do not put anything into a free AI model that you wouldn’t want to come out somewhere else.”
The Guardian and some of the big news agencies have “enterprise models” – their own models that are “ring-fenced” with “guard rails”. “As a freelance you have to create your own guard rails [by] not putting in sensitive data.”
Everything’s been stolen
Copyright is obviously a massive issue, particularly for journalists. The New York Times case against OpenAI rumbles on. “The general consensus is that the papers will win these cases,” Harriet said.
“Everything’s been stolen from authors, photographers, artists, graphic designers, everyone.” The companies claim that they can use this material to train their models under so-called “fair use” [in US law].
Then there are issues like AI-generated actors or influencers, and images that look very real even to a trained journalist’s eye. “We need to question what we see, what we hear, what we read, and to be sceptical. All this is based on stolen material that is essentially regurgitated in a different form – so there are a lot of ethical debates around this.”
“As I said, the New York Times is likely to win that court case. Then there are deals being struck. Perplexity AI, for example, said recently that it was setting aside $42 million to pay publishers when articles appeared in the model’s search results.
“There are so many battles that we have to fight. But I remain optimistic: truth-telling is fundamental. Hopefully the big publishing houses will be able to get financial backing from the tech giants.”
But how can you start to actually use AI tools in a way that you might feel comfortable with? “Take hallucinations. In my courses I used to train journalists on really strong ‘prompting techniques’ to try to get rid of the hallucinations – that is, how to talk to the models in a way that means they are much less likely to make stuff up. The big ‘large language models’ – ChatGPT, Gemini, Claude and Copilot – are very similar.”
There is, for example, Harriet informed us, “something called ‘retrieval-augmented generation’ – essentially, you tell an AI to go first to information you give it. You might give it an interview or a table of data, to give it a context to work in, before it goes searching for stuff and potentially making stuff up.”
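For readers who want to try this, here is a minimal sketch of the grounding pattern Harriet describes, in Python. It assumes the OpenAI Python client (`pip install openai`) and an illustrative model name – neither is specified in the talk – but the same pattern works with any of the big chat models: supply your source material first, and instruct the model to answer only from it.

```python
# A minimal sketch of "grounding" a model in your own source material,
# along the lines of the retrieval-augmented generation Harriet describes.
# Assumes the OpenAI Python client and an illustrative model name;
# neither is specified in the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

source = """[paste your interview transcript or data table here]"""
question = "What are the main points the interviewee makes about care costs?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative only; substitute whichever model you use
    messages=[
        {
            # The system instruction is the "prompting technique":
            # tell the model to stay inside the supplied context.
            "role": "system",
            "content": (
                "Answer ONLY from the source text the user supplies. "
                "If the answer is not in the source, say 'not in the source' "
                "rather than guessing."
            ),
        },
        # Supplying the document up front is the "retrieval" step.
        {"role": "user", "content": f"SOURCE:\n{source}\n\nQUESTION:\n{question}"},
    ],
)

print(response.choices[0].message.content)
```

A full retrieval-augmented system would search a document store automatically and pass the best matches in as the source, but the principle is the same: the model answers from material you chose, not from whatever its training data suggests.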
You may have seen stories around fake news and fake “experts”. What to watch out for? “It’s about education, so people can spot the signs of fake news and do their due diligence around who these ‘people’ are.”
Trying it out
“If you haven’t experimented already, I would definitely have a go with Perplexity. As a journalist, you’re often trying to write about a topic that maybe you’re not already an expert in, or you’re having to find out a lot of information very quickly. I see Perplexity as Google on steroids.
“You could use Google NotebookLM to brainstorm. Ask it: ‘What are the themes across all these documents? What are the five most common issues that people are having with this particular topic?’
“In ChatGPT, try ‘deep research’ – click the plus button and find it in the drop-down menu. Send it off to do its search, make a cup of coffee, and come back to a big research report.
“Frankly,” Harriet said, “the tech giants are not going anywhere, so let’s get on board with trying to use AI in a way that helps. You see where your expertise is. These tools are not going to write the article for you. They become your intelligent research assistant, to pull out threads that you might have missed.
“As a freelance you work solo a lot and that can be lonely. But I have this ‘assistant’ that can pick holes, give me another opinion, or comment as a certain kind of reader. It can also help with re-working a story into other formats – a video or a social [media post].
“The future of journalism is human-led and AI-assisted. And using our scepticism to steer how we use it. It’s not to stop it: we’re not going to get very far with that. But it’s about keeping the public trust and our sanity in the process.”
Questions
Freelance editor Mike Holderness noted: “I’ve been talking to researchers and philosophers about the possibility of AI for over 30 years. This isn’t it, which is why I insist on calling what we are discussing ‘machine learning’. What it’s doing is ‘confabulating’ – like someone with dementia – even when it gets the answer right. As the Ancient Greeks put it, knowledge is ‘justified true belief’. It can be true. It might be belief, but it’s not justified.”
Harriet concurred that AI is “very good at faking, at sounding human, but it’s not”.
Mike continued: “And one thing that frightens me is so-called AI replacing search. So, I get the output from it, and use a search engine to help me check it. But if the search engine is a machine learning system it will be confabulating...
“There’s a big gap in the market for an old-school search engine that just shows me the articles with these words in them and no more.”
Harriet noted that “Google is really coming out fighting, and I think it’s kind of panicking. It’s got massive competition now.”
Grace Livingstone asked: if you’re doing an investigative report or on-the-ground reporting and bringing in voices that haven’t been heard before, would you say it would be dangerous to put your thoughts into an AI? Would you just be putting your sources into the public domain?
Harriet said that if you’re using a free version, be really careful about how you use it. Technically what you put in could be spat out.
“I was writing a story for the Sunday Times about care costs, trying to get to grips with [the topic] very quickly. I asked AI: ‘what’s changed? What are the issues that people are talking about online?’ I’m using it to enhance my ability to put together an awful lot of information very quickly under time pressure – I’m also using it to pick holes and to ask where I might find case studies.”
- The Freelance recommends “ChatGPT’s self-serving optimism” by Vauhini Vara in The Atlantic, whose standfirst reads: “OpenAI’s new guidelines ask its chatbot to celebrate ‘innovation’, contradicting its stated goal of objectivity – and raising questions about what objectivity even means.” It’s not philosophically deep, but it is a useful glimpse of how “AI” can focus attention on the difficulty of saying what it means for anyone (or anything) to be “objective”.