February 19, 2026
Contracting for AI in Sports Partnerships
AI is no longer experimental in the sports industry. It is actively shaping how clubs, leagues and broadcasters engage with fans, analyse performance, and create and curate content. Sports partnerships often (and rightly) focus on achieving a “win-win” for each party; however, when a partnership entails the deployment of AI technologies, the risks and incentives are different, and navigating them successfully requires a particular sensibility.
If you are implementing AI in your business, then here are some of the key issues to consider:
1. Establish the AI use case(s) and target markets
It is crucial to understand what AI use case(s) will be developed or fine-tuned (even if just conceptually at the pre-contract phase), and the territories in which it will be deployed and accessed from. AI covers a broad spectrum of technologies, ranging from data analytics through to agentic and generative AI, all of which use a variety of models and handle data in different ways.
By doing so, you can then determine the applicability of market-wide and industry-specific regulations (e.g., Regulation (EU) 2024/1689, the Artificial Intelligence Act (“AI Act”)). UK businesses will be caught by the AI Act if they develop, provide or use an AI system and the AI-generated outputs are then used in the EU (which makes it extremely broad in scope). And where the AI falls within the EU’s regulatory ambit, a risk assessment should be conducted to determine what statutory obligations will apply (e.g., whether or not it will constitute a “high-risk” AI system).
Where the AI Act applies, it is worth noting that the transparency requirements (Article 50) cover a broad range of AI systems and establish what is in effect a minimum compliance threshold for specific actors (providers/deployers) in the AI market. They cover:
- AI systems interacting with people (e.g., AI agents);
- the marking of synthetic outputs;
- notification requirements for emotion recognition and biometric categorisation systems; and
- disclosure requirements for text that is generated or manipulated so as to inform the public on matters of public interest.
There is now a draft code of practice on AI-generated content to support compliance efforts, as envisaged by Article 50(7) of the AI Act (the final version of which is due to be published in May this year):
Draft code of practice on AI generated content
The AI use case(s) should also inform the broader contractual protections that need to be prioritised in any supply agreement. This is important as, in the UK, issues like explainability requirements will be more relevant for decision-making tools than for other use cases.
2. Manage the competitive tension between the contract documents
There is a natural tension between the sponsorship elements of a sports partnership (which would typically be documented on a rights holder’s paper) and the technology supply elements that comprise the partner’s “core” business and which are likely to be documented on partner paper.
Handled well, that tension can be used constructively. For example, from a rights owner’s perspective, what discounts can you secure on licensing and support fees in exchange for the grant of rights? Would any fees be payable at all for the implementation/development phase of the AI system(s)? And can you secure enhanced service levels for the operation of the AI after go-live (if applicable)? Finally, would any rights package encourage the partner to assume a greater level of risk under the technology supply agreements than they otherwise would for deals of a similar size and complexity?
Reciprocal arguments may be run by the partner under the sponsorship agreement. Could they seek alignment on payment terms, liability positions and on the scope of termination rights? And what role does governance play in managing the overall relationship? Finding the right balance between the different agreements is vital to achieving long-term success for each party.
3. Plan the development phase carefully
Clubs, leagues and other rights holders should first consider whether they are fine tuning existing AI capabilities or if they are developing something bespoke from scratch. Each carries different time, cost and risk implications.
The development phase should be documented in much the same way as for any other software development agreement. Is there a clear view of the AI use case(s) being developed or are things less clear at the outset? And does the partner intend to follow a Waterfall or Agile approach for the development?
Agile development methodologies (e.g., Scrum, Kanban) favour the iterative and incremental development of outputs over time, and are increasingly popular as they encourage fluidity with cross-functional, blended delivery teams. This contrasts with the linear, sequential and siloed approach of Waterfall (whereby a supplier would typically develop software against a specification).
From a lawyer’s perspective, “purist” Agile approaches are tricky to contract for, given the difficulty of creating contractual “certainty” around what outputs will be delivered and by when, and around what happens if things go wrong. Blended delivery teams (with “customer” and “supplier” team members) mean it is often difficult, if not impossible, to attribute fault to either party for delays or other issues as you would in Waterfall. Therefore, in Agile projects, you may need to consider alternative levers to the traditional rights and remedies typically used in Waterfall (e.g., liquidated damages regimes) as a means of holding the partner’s “feet to the fire” on project delivery.
4. Identify key datasets and IP rights
One of the most critical activities is to identify the data that will be required to train the AI (that is if training or fine-tuning is necessary) and that will then be used as inputs for the AI to produce its intended outputs (e.g., text, images, conclusions, recommendations, insights). Below are some of the key issues to consider:
- Personal data – Will any personal data be used as input data or otherwise be required to train the AI system? If so, it is important to establish an appropriate lawful basis for the processing and to ensure that all relevant statutory obligations (e.g., transparency and notification requirements) have been met. And beware of the purpose limitation principle if you intend to process personal data for purposes other than the original purpose(s) notified to data subjects at the point of collection – in that event you would need to assess whether any new processing is compatible with the original purpose(s).
- Copyright works – Where copyright works are in scope for training or input purposes, a licence will be required from the rights holder. As things stand, the only exception to that rule is where any text and data mining will be conducted for non-commercial research purposes only (Section 29A of the Copyright, Designs and Patents Act 1988). There have been various proposals to change the UK’s copyright laws, purportedly to make the UK more attractive to innovators; however, that process has been fraught with difficulty over what is an emotive topic. The legal landscape is constantly evolving, as outlined in a recent Level article on AI and copyright, which you can read here.
- Third party data – And finally, if third party datasets are required, you will need to ensure that you have licence terms in place with the relevant licensors which would permit the use of their data as inputs to train or operate the AI. Many vendors contractually prohibit their data (and other IP) from being used in AI technologies, and so their explicit consent should be obtained upfront.
5. Negotiate key service levels (SLAs) for the run phase
Will the partner support the AI system after go-live? Or will responsibility for its maintenance and other issues fall on the rights holder? It is imperative to grasp these operational requirements up front. And from an infrastructure standpoint, will the AI be hosted in your own data centres or in the cloud? If the latter, important metrics relating to the provision of the cloud environment come into play (e.g., uptime availability and response-time SLAs).
Whether bespoke SLAs are required will hinge on the AI use case. For example, in performance analytics or decision-making tools, you would ideally include accuracy SLAs to address the risk of bias or errors in solution-generated outputs.
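For those less familiar with how uptime commitments cash out in practice, a headline percentage translates into a concrete downtime allowance. The figures below are purely illustrative (they are not drawn from any particular agreement), and the function name is our own:

```python
def allowed_downtime_minutes(uptime_pct: float, days_in_month: int = 30) -> float:
    """Permitted downtime per month implied by an uptime SLA percentage.

    Illustrative only: real SLAs define the measurement window, exclusions
    (e.g., planned maintenance) and service credits in the contract itself.
    """
    total_minutes = days_in_month * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

# A 99.9% commitment over a 30-day month allows roughly 43.2 minutes of
# downtime; tightening it to 99.99% cuts that to roughly 4.3 minutes.
print(round(allowed_downtime_minutes(99.9), 1))
print(round(allowed_downtime_minutes(99.99), 1))
```

The point for negotiators is that each additional “nine” shrinks the permitted downtime by a factor of ten, which is why suppliers price those increments so differently.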
6. Cross-termination
It is important to work through what would happen if the partnership collapsed due to a catastrophic breakdown in the relationship. Should a rights owner remain wedded to the partner’s technology in that scenario?
Cross-termination rights would allow for the termination of separate agreements (which together comprise the “partnership”) in defined circumstances. But at what threshold should a party be entitled to terminate all agreements (including the technology supply elements)? Should cross-termination apply to all termination events or just a subset of them? Technology providers may resist the concept of cross-termination on the basis that it could undermine their ability to recognise revenue from the licensing arrangements.
7. Exit strategies to avoid lock-in
Exit planning should begin during contract negotiations and be documented in the main agreements. This will help to avoid unwanted vendor “lock-in” after expiry or termination of the partnership. Ideally this should include (at minimum) data access and data deletion/deactivation requirements. And depending on who owns the IP rights in different AI components, the parties will need to establish how (if at all) the technology provider can reuse any model “learnings” with third parties.
About the author
Harry is a Partner in the Technology and Commercial Team at Level, working with a variety of clients across all sectors on AI and other technology matters.