AI: cause for hope or fear for the future?
Hardly a day goes by without artificial intelligence making headlines with claims of fresh advances, often accompanied by warnings of an unspecified dystopian future it might be heralding. Its progress is relentless, its potential ever expanding and its shortcomings ever more visible to those who have their eyes open.
It often seems that all around AI a battle is raging in which powerful commercial interests, political forces and national governments slug it out, contemptuously dismissive of those who might be innocent casualties – such as creatives watching the value of copyright being brutally eroded – or who urge caution, even daring to utter the R word: regulation.
Many have concerns about where the AI revolution might lead and how it will change society, but it often seems few can really be bothered to get to grips with its implications. It is important that we do, for we all have a stake in its development and the journey it could take society on.
That’s why I was pleased to be able to arrange a briefing on AI for the UK Section of the European Journalists last week. It turned out to be illuminating, worrying but also reassuring.

The discussion was led by Michael McNamara, an independent Irish MEP and co-chair of the European Parliament’s Working Group on the Implementation and Enforcement of the AI Act, and Graham Lovelace, strategist and consultant on AI, especially its impact on media. Both offered insights into how AI is developing and, crucially, how it might be regulated so that its potential benefits outweigh the downsides that many of its more blinkered advocates constantly gloss over.
• My detailed report of the meeting is available on the AEJ-UK website.
It was very wide-ranging but there are two key takeaways for me from it.
The first is the danger highlighted by Graham Lovelace of journalists and the media lazily slipping into bad habits in their use of AI, especially using it to generate content. He highlighted several uses of AI that are already commonplace, such as using it to suggest headlines, crunch vast datasets to expose trends, transcribe long interviews or meetings, translate into multiple languages or generate the metadata needed for websites, the latter a dull task definitely better automated in his view.
But some are already sliding down a slope towards a more indiscriminate use of AI, using it to research stories, gradually placing too much faith in responses, and then perhaps letting it write the stories. It is happening. But even those using it to do this acknowledge that accuracy cannot be guaranteed. Lazily recycling content from AI will lead to the creation of “AI slop” where fake content is given greater validity by being repeated. Once journalists allow that to happen, trust will evaporate.
Where will this lead, he asked. It will dull the creative spark, and the ability to think critically will decline.
The answers, he suggested, are simple but potentially elusive. Use AI at the end of a creative process, not at the beginning. And label it. Be honest with readers. And independently research, check and challenge every step of the way.
Transparency and honesty
This was also urged as part of the solution by Michael McNamara, who said that not identifying the use of AI is potentially dangerous for politicians as well as journalists. Asking AI to write a speech is every bit as dangerous as asking it to write a news story. Transparency and honesty about its use are essential.
He was also reassuring in his explanation of the robust measures the European Union is looking to put in place to ensure the responsible use of AI. He admitted that not all of this is perfect, citing the current stand-off over copyright as a messy compromise. This will be a crucial fight because what AI developers are doing at present in indiscriminately scraping the web for content is nothing short of theft, said Graham Lovelace.
On this and other threats from AI to privacy and personal data, and its ability to generate fake news, audio and video, it is clear the EU is determined to face up to the power of those big tech interests to whom any regulation is anathema and put some sensible protections in place. He was hopeful that the UK might follow suit, praising the work being done in the House of Lords by Baroness Kidron.
Yes, AI is here to stay. Its use will grow and benefits will flow from that, but it is far from perfect, which is why using it with caution is a prerequisite of responsible journalism – and being open and transparent about its use is essential.
• The main image has been generated by the Adobe Stock library using AI