A call to action on ‘artificial intelligence’
The following statement was adopted by the Executive Committee of the International Federation of Journalists (IFJ), meeting in London on 21-22 June 2024.
[Image: a robot writing computer code, generated from that prompt by DALL·E 2]
JOURNALISTS are experiencing the initial tremors of a coming artificial intelligence earthquake, one that will reshape our industry more profoundly than the digital revolution of recent decades. Journalists have always been journalism’s ablest defenders. We do this best when we act collectively through our trades unions, which can consider issues in a democracy of practitioners and thereby engage with governments and employers as necessary.
“Artificial intelligence” describes a broad range of processes with the capacity to affect all workers. The consequences of this technology for journalists in particular will be profound. Journalists have a deep personal responsibility to ensure that their work is wholly ethical and complies with the IFJ’s Global Charter of Ethics for Journalists.
Unions occupy a critical and unique position to facilitate the harnessing, economic framing and regulation of this emergent potential. They can ensure that AI is not used unless it serves the creation of dispassionate, ethically produced news, for the benefit of humanity and consistent with the IFJ’s Global Charter of Ethics for Journalists. What follows are ideals intended as a starting point for negotiation.
Journalists as defenders of journalism
Generative AI has the capacity to significantly reduce the worker hours necessary to produce news, particularly for the more routine stories. Most newsrooms, however, are already under-resourced. News production benefits from a multiplicity of diverse perspectives, for both accuracy and enrichment.
Where savings in worker hours can be achieved, the assumption should be that this resource will be redeployed, whether in personnel or in resource allocation, to the production of fresh information. In many instances it is valuable for those producing news to spend their working lives in close proximity to those upon whom they are reporting. AI should not be used to allow media workers to become any more remote from the communities whose lives are their raw material. The professional media should be a standard-bearer for quality of expression, accuracy and the presentation of information. No reduction in this quality should be accepted simply because that is the standard AI achieves.
Generative AI language models contain no information about “truth”, only about what occurs frequently in the input material. Such models cannot engage in fact-checking or assess the weight and credibility of sources. They cannot seek out new sources or balanced perspectives. There are many examples of such models producing material that is wholly inaccurate. Generative AI cannot replace human journalists, and its output should not be considered “journalism”, save where it has been subject to appropriate human oversight and checking.
News platforms have an inconsistent record of applying new technology to improve their products rather than simply to reduce costs. In issuing the cautions contained in this text, journalists also commit themselves to finding ways to deploy AI to make their news more accurate, complete, compelling and relevant.
AI tools are not available in many parts of the world. Journalists have a responsibility to highlight this and pressure tech platforms to make provision universal.
Generative AI outputs tend to reproduce the biases and myths that are most common in the input materials, in particular on gender equality and the representation of minorities. All news platforms must adopt systematic methods to ensure that such biases are not reflected in published or broadcast output. International regulation of AI is required; wherever such regulation is under consideration, journalists should be represented, either by their own unions or by the IFJ.
Responsibilities as custodians of information production
AI has the capacity to create falsehoods so compelling and prolific that they could overwhelm our entire information ecosystem. Videos and photographs appearing to prove that fabricated events occurred can be created in seconds, and compendious “supporting” articles can be generated alongside them. Unions should seek voluntary agreements with news platforms to ensure that any published or broadcast works that are wholly or largely the product of AI are clearly labelled.
All published works that purport to be journalism must be the ultimate responsibility of a suitably qualified and/or experienced journalist, who should use their best professional endeavours to ensure that all works are appropriately credited. This should be done in a manner consistent with the IFJ’s Global Charter of Ethics for Journalists. To support this, journalists’ rights to be identified as the authors of their works need to be strengthened, respected and enforced.
The means by which AI generates its output – the machine consumption, absorption and regurgitation of material created by others – risks undermining the economic benefits that creators should reasonably expect to enjoy from their work. All use of such material for machine learning should be by prior agreement with its authors, and where news platforms conclude licensing agreements with AI companies they should do so with the consent of the journalists concerned and should provide for their compensation.
All journalists – including staff journalists, freelances, and independent and self-published journalists – should be entitled to organise and to bargain collectively with AI companies over the terms for use of their work in generative AI language models and outputs. Those terms should include compensation for ingestion and use that has already occurred, and a fair share of that compensation should be paid to the journalists themselves, whether they work as staff, as freelances or under “work-for-hire” terms.
Journalists’ rights as workers
All processes that deploy AI and involve workers, whether as employees, freelances or contractors, must be transparent. Whatever algorithms and processes are applied should be open to inspection upon request, and a conventional, human-operated alternative should be available when requested.
AI has the capacity to reduce contact between workers, both in ordinary daily interaction and in formal processes such as team meetings, interviews and appraisals. Valuable efficiencies may arise. However, the benefits of human contact should be intrinsic to all workplace planning and should be a requirement upon all employers. Where unplanned, informal contact declines, planned social encounters should be a requirement of work organisation. Journalists’ wishes to work remotely should also be respected. Training to use new technology should be available to all.
Where AI is used in recruitment, evaluation or assessment, workers should be consulted about the process, informed of how AI is deployed, and asked for their consent before it is put to use. This process should involve workers’ elected representatives or trades unions, as well as the individuals concerned. The capacity for reconsideration or appeal to a human must be retained.
A fast, flexible response
The best regulation of the ownership of the value created by journalists comes from agreements between news platforms and journalists’ trades unions. Such agreements have the advantage of speed, precise definition, and shared objectives. As such, they should be encouraged as the easiest and quickest means to regulate AI in news production.
AI development and application is currently unregulated. Given the power of the systems now being developed, which lie wholly in the hands of corporations, the dangers are immense. For this reason, AI should be brought under robust international regulation. Only an international response is sufficient, as this technology and its products know no boundaries.
Where unions are negotiating for freelance contributors, ingestion for the purpose of AI “training” should be on the basis of consent and compensation.
