AI and Copyright

Front pages of UK newspapers on 25 February 2025

No Prosecco was spilled in the writing of this post

Last week I was invited to attend a panel on AI and Copyright held by some of the UK’s leading experts in technology, film production and photography. The event coincided with the closure of the UK government’s consultation period on proposed changes to copyright law and AI, and I thought it might be helpful to share some of what was discussed – and what was not.

In short, the idea being proposed by the government is that Big Tech companies could be provided with an exemption to copyright law in order to train large language models (LLMs).

This is the (now ubiquitous) technology that turns simple prompts into images, music and text, trained through a process the government refers to as ‘text and data mining’. There are now literally hundreds of models built on this principle, with OpenAI’s ChatGPT being amongst the first to hit the headlines in late 2022 and DeepSeek amongst the most recent, just a few weeks ago.

To quote the consultation document:

‘[…] a model which generates images may be trained on billions of images associated with descriptive keywords. By analysing those images and learning patterns and associations, a model can produce convincing graphical output in response to a text prompt.’

Large datasets that can be used for training AI models often include material protected by copyright, but the Government feels that the UK’s current copyright legislation limits investment and innovation in this sector.

They propose that copyright law should be relaxed with respect to training LLMs, and that a new automated system for recognising copyrighted material could be adopted to protect artists and creators.

All UK citizens had the right to submit their views for consideration – and apparently many thousands took the opportunity.

The end of the consultation period (25 February 2025) made the front page of every major British newspaper – and on the same day, over 1,000 of the world’s most famous musicians released a completely silent album in protest at the proposed changes to Britain’s copyright laws. It was called ‘Is This What We Want?’.

For those of you who don’t work in the media, it might not immediately be clear what all the fuss is about.

I think it’s fair to say that many of my colleagues in the creative industries are falling into a state of deepening panic over the use of AI. Some are even downing tools in despair.

Why? Is such a dramatic reaction proportionate or justifiable? I’m not hoping to make any great pronouncements here, just to provide some context so you can explore for yourself. When time permits I’ll add some links and further reading. Here goes…

“Both the creative industries and AI sectors are at the heart of our industrial strategy, and they are also increasingly interlinked. AI is already being used across the creative industries, from music and film production to publishing, architecture and design; it has transformed post-production, for instance. As of September 2024, more than 38% of creative industries businesses said that they have used AI technologies, with nearly 50% using AI to improve their business operations.” – Chris Bryant, The Minister for Creative Industries, Arts and Tourism 

In simple terms, those in favour of an exemption fear that if the UK retains its so-called Gold Standard on copyright – precluding Big Tech companies from training their models using copyrighted creative work without prior consent from the copyright holder – then Big Tech will go elsewhere and the UK will be commercially ‘left behind’.

Opponents of this view point to the idea that the UK could actually have a commercial advantage as a place where creative rights are uniquely respected.

If the UK were to retain its Gold Standard approach, the copyright holder would have the choice of allowing the training of the model with their material or refusing permission.

The government’s proposal is instead to create an ‘opt-out’ system: creators who actively reserve their rights would have their work excluded from training data, while everything else would be available by default.

The Creative Rights in AI Coalition is a campaign group formed in December 2024 by organisations including the Society of Authors, the Publishers Association, the Association of Illustrators and the Association of Photographers – and several of its members were represented on the panel I attended last week.

While the Government’s proposed approach sounds reasonable on the surface, the Coalition says there are a few serious practical problems with an opt-out solution.

Firstly, the moment that a work of art, piece of music or creative writing is posted online, it resides in a multitude of virtual places – copies that are ‘downstream’ of the original work. These copies are not controlled by the creator, so an opt-out registered on the original cannot be effectively enforced.

Secondly, they point out that no system of automatic content recognition currently exists and no mechanism for paying royalties to artists has been devised.

Thirdly, the panel were unanimous in feeling that training an LLM is the same thing as copying. One contributor pointed to clumsily altered photographs, generated almost instantly and presented by the model as being ‘in the style of’ the original photographer – who happened to be present on the panel.

Another expert panellist compared LLM training to audio or image compression – ‘training’ simply being a more acceptable term for making highly compressed copies of original works.

Perhaps it’s not surprising that artists and their representatives feel under siege. They genuinely believe their livelihoods are in jeopardy and if they can’t afford to keep creating new work, LLMs will only ever be able to generate impoverished imitations from an increasingly homogenous and limited gene-pool.

The question, from a purely factual rather than a compassionate standpoint, is whether this is actually the case. Can LLMs actually create anything themselves – or are they simply highly sophisticated processors?

And to what extent are we, humans, also simply highly sophisticated processors? Didn’t all our artists learn by copying from the great masters? Idem per Idem, as my old Philosophy professor used to say.

AlphaGo’s Move 37, which we’ll come to in a moment, would suggest that synthetic creativity is already a reality… and has been for nearly a decade. It’s just that artists didn’t see the scale of the approaching threat until relatively recently, when LLMs started showing up in their teenage kids’ bedrooms.

This is not the time or place for a primer in Aesthetics, but putting aside questions of copyright it was abundantly clear during the panel discussion that all the creatives in the room felt there’s something inherently superior about human creativity when compared to the output of a machine.

This view went entirely unchallenged, and even the gentlest of questions to a couple of panel members at the reception after the event provoked an outraged spluttering of Prosecco.

So here’s the thing: I am not yet convinced that a computer could not create Art.

AlphaGo’s stunning victory over the (human) champion Lee Sedol at the game of Go, achieved with an unprecedented strategy – making a move that baffled commentators, defied human logic and experience, and then going on to win the game – seems to me to be an act of true creativity.

That was in 2016 – and I have a feeling that in years to come Move 37 will be seen as a real turning point in the evolution of AI – the point at which an AI was demonstrably creative, and where human experts could predict neither its strategy nor the outcome.

That moment was actually captured on film in an amazing documentary called AlphaGo and I whole-heartedly recommend you watch it. Just don’t have a glass of Prosecco in your hand.
