November 06, 2025
AI Training on Trial: Key Lessons for Creators from Getty Images v Stability AI
When Getty Images v Stability AI was heard in the High Court this summer, many of the key legal issues that AI systems are creating for rightsholders were finally put in the spotlight. Although this week’s judgment largely found in Stability AI’s favour, it does provide some reassurance and clarity for trade mark owners, while leaving many of the core questions for those operating at the intersection of IP and AI unanswered.
This article explores the Getty Images v Stability AI case in detail to explain the main issues that stakeholders are facing in the field of generative AI and copyright law. To conclude, we provide five practical takeaways on how our clients can navigate the ongoing legal uncertainties.
The case: Getty Images v Stability AI
In 2023, Getty Images (Getty) sued Stability AI (Stability), a UK-based artificial intelligence company best known for its text-to-image model Stable Diffusion, for copyright and trade mark infringement. Getty’s primary claim was that Stability had scraped a vast quantity of copyright photographs from Getty’s database to train Stable Diffusion, arguing that the system had “memorised” Getty’s images so as to allow (partial) reproductions of them. However, Getty withdrew its primary copyright infringement claim towards the end of the High Court trial after struggling to establish a sufficient connection between the scraping and training activity and the acts restricted by UK copyright law, in large part because most of the AI training occurred on US servers. Stability’s defence also maintained that Stable Diffusion generates images from random noise “without use of any part of any copyright works”. Getty’s copyright case was ultimately reduced to a secondary infringement claim: that importing the trained model itself into the UK (rather than the training which took place overseas) amounted to importing an ‘infringing article’ under the Copyright, Designs and Patents Act 1988 (CDPA).
On Tuesday 4 November, High Court judge Mrs Justice Joanna Smith found that trade mark infringement had occurred in some instances where Stability’s earlier AI models generated images containing a Getty watermark. However, the judge herself described these findings as “extremely limited in scope” and found no basis for Getty’s damages claim. She also dismissed Getty’s secondary copyright infringement claim, holding that Stable Diffusion is not an ‘infringing article’ under the CDPA. The outcome of the trial balances protecting the rights of trade mark owners where AI systems reproduce images containing protected marks with providing some reassurance to AI developers that importing into the UK a model trained on copyright content, but which does not store or reproduce that content, may not amount to the importation of an infringing article.
Why this case matters
The Getty case provides a real-world example of the growing concerns of rightsholders about the widespread and unauthorised use of copyright content in AI training. It highlights key challenges rightsholders face, such as enforcing ownership over, and receiving remuneration for, copyright content when it is scraped and used to train AI models. It also sheds light on other significant IP risks associated with data scraping, including trade mark infringement, a disincentive to create and innovate, and unauthorised use of brands through unapproved distribution of content. The Getty case also unfolded against the backdrop of the Government’s recent proposals for a new copyright exception for data mining, which would allow generative AI to be trained on UK copyright works without a licence unless creators ‘opt out’.
AI and copyright within Level’s key sectors
Whilst the Getty case spotlights the battle over copyright in images, other recent news shows that similar challenges are being faced across the broader media industry and beyond.
MUSIC: Pulp’s ‘Spike Island’
- In April this year, the unfolding complexities surrounding authorship of AI-generated content within UK copyright law were exemplified by the music video released alongside Pulp’s new record Spike Island.
- Frontman Jarvis Cocker admitted to feeding still images of the band into an AI tool with a ‘prompt’ in order to animate them for the video. Amid a split reaction from fans, questions emerged about the identity of the video’s true author: the images’ original photographer, Rankin & Donald; the owner of the AI system; or the AI user, Cocker himself.
SPORT: Amazon’s ‘Prime Vision’ in UEFA Champions League
- The Tottenham Hotspur vs Villarreal match on 15 September was the first to feature Amazon’s new AI-generated overlay feed, ‘Prime Vision’, which provides viewers with real-time match statistics and insights captured by optical tracking systems.
- While Amazon hopes to capitalise on the largely untapped market for producing and streaming sports matches, its technology has also put the spotlight on who owns the copyright in the data-driven feed: the sports club, the league, or Amazon itself.
UK Government’s Copyright and AI Consultation Paper
- In December last year, the UK Government announced proposals for a new copyright exception for generative AI training that would allow AI companies to train on British copyright works without a licence, unless creators ‘opt out’ of the regime. The scheme has been criticised for putting the onus on creators to assert their rights, even though UK copyright protection arises automatically (works do not need to be registered to be protected).
- Critics of the scheme include music icons Elton John and Sir Paul McCartney, UK Music chief executive Tom Kiehl, and Simon Cowell. Last week, they were joined by author Sir Philip Pullman, who expressed concerns about the scraping of writers’ books.
- In response to industry concerns, the Department for Science, Innovation and Technology has created expert working groups to help deliver a fair solution that supports AI development and protects the UK’s creative industries.
- The work of the expert groups is likely to inform decision-making on the UK’s forthcoming AI bill, expected to be introduced in 2026.
Industry responses to AI training on copyrighted content
The table below summarises some key concerns from across Level’s main industries:
| Sector | Key concerns |
| --- | --- |
| Music | AI models are increasingly being trained on copyright music without permission, raising concerns over remuneration. Musicians can also use AI music creation tools such as Soundverse AI Song Generator, which raises complexities regarding ownership of the resulting compositions. Musicians are advised to ensure they can demonstrate a human contribution to their work in order to secure copyright protection. In the UK and EU, this is assessed using the ‘intellectual creation’ test, meaning the work should reflect the author’s ‘human personality’ and ‘free and creative choices’. |
| Visual art | Visual artists are also at risk of their work appearing in the datasets used to train AI image generators, which could lead to unauthorised reproductions of their content. This poses similar risks of lost remuneration, brand dilution, and a disincentive to create. |
| Film & TV | The British Film Institute recently partnered with Goldsmiths, Loughborough and Edinburgh universities to report on the impact of generative AI on the film sector. Key concerns included the use of on-screen content and scripts in training AI models, but the report noted the UK’s potential to pioneer the formalisation of IP licensing for AI training and to foster successful partnerships between rightsholders and AI developers. |
| Sport | As well as powering new methods of fan engagement and aiding athletic performance analysis, AI is being used to tackle online abuse of UK athletes. Attention must be paid to the questions of ownership of AI-generated insights and of data privacy that are arising alongside these innovations. |
| AI developers | Navigating UK copyright law presents real uncertainty for AI developers. The UK Government is concerned that this uncertainty is hindering investment in, and the adoption of, AI technology. |
What should our clients be thinking about, now and in the future?
It is clear that generative AI poses significant challenges for clients and creators. To avoid the issues brought to light by the Getty Images case, bear in mind the following key takeaways, whether you want to use AI to generate content or are concerned about your copyright work being used in AI training:
- Understand your copyright and trade mark rights, and the exceptions that permit use of copyrighted work without authorisation.
- Ensure your work is capable of being protected by copyright, design or patent law by meeting the requirements for human contribution.
- Consider additional protection of copyrighted work, such as via a trade mark or perhaps registration in overseas territories like the US or China.
- Require those who wish to use your work, including for AI training, to obtain a licence.
- If an opt-out is available, opt out of allowing your content to be used in AI training; if not, make clear through terms and conditions and similar means that you do not consent to your content being used for AI training.
If you have concerns about your copyrighted work being used in AI training or any other issues highlighted in this article, please contact Nick White, Level’s trade mark specialist.