

Navigating The New Frontier: Contractual Dynamics For AI Contracts

2025 - Winter Issue



AI Consultancy
2025
GSI Team Publication

ABSTRACT

This article explores the impact of artificial intelligence (AI), and in particular generative artificial intelligence (GenAI), on traditional software arrangements. It guides the reader through key considerations for negotiating GenAI agreements and for allocating the risks associated with GenAI in a practical, hands-on manner.

I. INTRODUCTION

As AI evolves, understanding how to navigate contractual arrangements and distribute risks associated with AI can be daunting. The rapid and exponential advancement of AI technology introduces a new dimension of complexity and opportunity into software contracts. 

As AI, and especially GenAI, becomes increasingly prevalent, new contract types have emerged: AI tool supply contracts, AI-as-a-service contracts, and AI tool development contracts. 

In traditional software contracts, where suppliers either develop or supply software solutions (either as a service or as a product), critical factors during contract negotiations include exclusivity, liability, warranties, service levels, intellectual property, third-party software, personnel, protection of customer data, security, and indemnities. The emerging contract types, however, introduce distinct challenges, necessitating updates to traditional contract clauses and the inclusion of new terms. Examples include provisions related to restrictions on using customer data for training, error management (noting that the tool may be offered on an “as-is” basis without guarantees of continuous or error-free operation), measures to combat bias and related liability, and requirements for record keeping and auditing of the AI tool. 

AI contracting becomes even more challenging where the supplier does not develop or supply AI tools “as such,” but rather uses AI tools to develop software, which is then supplied to customers.

II. KEY CONSIDERATIONS ON AI TOOLS, USE OF AI TOOLS AS PART OF SERVICES AND AI TOOLS DEVELOPMENT CONTRACTS PROVISIONS

In today’s dynamic business landscape, companies look to their service providers and suppliers to design and explore AI use cases that streamline operations, make capacity more efficient, reduce duplicative resources, cut costs, provide access to specialized expertise and, finally, keep them ahead of competitors. While senior management rushes to leverage AI (particularly GenAI), compliance and legal teams rush to understand the risk implications of AI usage – from confidentiality and intellectual property issues to quality and performance concerns – and to put guardrails in place to ensure the “responsible” use of AI. 

As companies come to understand the risks and implications of AI, developing broad and strict AI policies may be the best defense; however, as service providers and suppliers demonstrate safe use cases, those requirements may soften. Regarding security concerns in the cloud, for instance, we are seeing some providers and suppliers look ahead and proactively offer terms that demonstrate how they use AI in a responsible manner, attempting to allay at least some of these concerns. 

The specific implementation of any of these clauses will, of course, be subject to negotiation between the supplier or developer and the customer, and will depend on the exact nature and subject matter of the contract, on applicable laws, and on the priorities, business model, and business strategy of the contracting parties. With that caveat, the following sections delve into key concepts pertinent to contracts involving the supply of AI tools, the provision of AI as a service, and the development of AI tools. We also aim to illuminate the essential considerations for each agreement type, including the nuances, negotiation points, and motivations of the parties involved. 

A. Defining “Artificial Intelligence”

The use cases for AI applications are increasing exponentially – from generative AI tools to analytical and reporting tools such as transaction monitoring and risk management visualization. The parties, particularly customers, may prefer to avoid a narrow definition of AI; a broad contractual definition would capture any algorithmic, interpretive, machine learning, or other AI process. A broad definition, however, makes other requirements – such as disclosure and quality checks – more challenging to satisfy, or at least demands additional diligence, resources, and time.

B. Disclosure & Due Diligence of AI Use

Depending on the definition of AI, the complexities of determining where and how GenAI is currently being used by the service provider (that is to say, how operational decisions and processes utilize or are based on the outputs of AI) and whether there is appropriate disclosure and understanding of such use may be an issue. There are many reasons why a customer must understand where and how AI is used in its services and environments. Particularly where the AI processes train large language models (LLMs) using company data or a company’s customer data, a service provider may be contractually obligated to keep the customer generally informed of AI usage as part of its services. 

A customer may also seek a right to receive specific information on the use of AI as part of the services, on request. A service provider might counterbalance such a right with protecting its commercially sensitive business information and the confidential information of other customers whose data sets are used to train the AI tool or that benefit from the AI tool. 

If a party detects issues with the use or output of AI, then each party will likely seek a mutual obligation to be promptly notified. Key points of negotiation may include the scope of “issues” such as data breaches, inaccurate, biased, or unrepresentative outputs; time period and scope of notification; and consequences of any issues: a remediation plan or suspension right for the use of AI or the services as a whole. 

The customer may also have an acceptable AI use policy for its supply chain to which it may require the service provider/supplier to adhere.

C. Accuracy and Reliability Issues

The risk of overreliance on technology is not a ‘new’ risk – but it is arguably more acute with AI, given the lack of transparency, explainability and auditability of certain types of AI. As with any software-based solution, AI poses risks if it does not produce accurate, reliable results. Thus, before relying on AI tools, businesses will want to know, for example, whether the original data on which the AI was trained was itself reliable, whether there have been any independent evaluations of the tool’s reliability and accuracy, the extent to which there will be a “human in the loop” to check for anomalous outputs, the extent to which the service provider or a supplier is prepared to accept liability for errors caused by AI and what remedies it will offer if outputs are wrong. 

To address these issues, customers will need to build into the contract sufficient information-provision, testing and audit requirements to ensure that the AI used in the provision of the services, at least to the extent that it is interacting with, or impacting, humans or making important decisions, is explainable. 

Where services utilize AI, customers may also expect service providers to ensure that the provider’s use of AI does not degrade the contractual standard for performance of the services; produces accurate and representative outputs that do not take certain protected characteristics into consideration, unless the customer has provided preapproval; and does not develop harmful or inappropriate behaviors.

The extent to which a service provider is able to meet these expectations will depend on various factors, including how the applicable AI tool is procured and how it is trained on data sets. For example, if such data sets are provided by the customer, then a service provider may seek to carve out errors or inaccuracies in the training data sets from its responsibility. 

Some technical standards may be helpful in providing some degree of comfort that the supplier’s approach to AI is sufficiently robust. However, it is important to note that when a supplier says it is “compliant” with a particular standard, this is not the same as saying that it has been through a full certification process. Certification requires auditing by an independent third party, which involves a rigorous procedure and can be quite a lengthy and expensive process. “Compliant” means that the supplier adheres to the relevant standard and whilst this can be evidenced (perhaps confusingly) by a “certificate of compliance”, the latter relies on internal audits and self-assessments. So, whilst compliance offers some degree of assurance, much depends on how much you trust the supplier to have been rigorous in its own self-assessment.

D. Compliance with Laws

Given the wide-ranging layers of regulation concerning AI – covering the protection of corporate and personal data, product liability, e-commerce rules, and even attorney-client privilege, as the case may be – the contractual allocation of compliance responsibility within the AI ecosystem is becoming increasingly important. Broadly, responsibility for ensuring an AI tool does not violate applicable laws may fall on the party providing the dataset(s) that train the AI tool. A key negotiation point may be whether it is the service provider’s responsibility not to cause the customer itself to violate applicable laws through its use of the AI tool, or whether the customer alone is responsible for its own compliance obligations (e.g., sector-specific regulations). 

Further, if an AI tool is used to collect or process personal information, then it is crucial to ensure that this data is handled in accordance with relevant privacy laws and regulations. There are ways to potentially navigate risks through anonymization and de-identification, the use of privacy policies, and contractual provisions; however, close attention should be paid to whether AI has the right to use data in an AI system and how the system uses and discloses information.
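
As a purely illustrative sketch of the de-identification step that such a contract might require before data reaches an AI system, the snippet below redacts two common identifier types with regular expressions. The patterns and placeholder labels are our own assumptions for illustration; simple pattern matching is not, by itself, sufficient to satisfy privacy laws, and a real deployment would rely on purpose-built anonymization tooling and legal review.

```python
import re

# Hypothetical identifier patterns; real de-identification requires far
# more than regex matching, but this sketches the mechanical step.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
print(redact(record))  # → Contact Jane at [EMAIL] or [PHONE].
```

Contractually, the interesting question is which party performs and warrants this step – the customer before disclosure, or the provider on ingestion – and who bears the risk if an identifier slips through.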

E. Ownership and Licensing of Intellectual Property Rights

The top-of-mind issue arising in connection with ownership and use rights when leveraging GenAI is the ownership of intellectual property rights in the layers of input to the AI and the outputs generated from the AI tools. The ability of a company to demonstrate chain of title to input and output is critical for a number of reasons, including in situations where a company wants to sell a product, asset, or potentially its business. Each party will expect the other to stand behind the intellectual property rights that it contributes, typically through indemnification against third-party claims of intellectual property infringement. This tension is the focus of much negotiation in the current AI intellectual property allocation landscape. 

If, however, the underlying concern is the ability to use the outputs without any restriction, then this could be achieved through licensing terms. It is also important from both a legal and practical perspective to consider licensing arrangements in the event of a termination of use of the AI tool, whether planned or sudden, in order to minimize service disruption. As ever, it is crucial to prepare for the end of the relationship right at the beginning of it. On exit, there will be similar “lock-in” risks to those that apply for other proprietary software tools – the service provider may be unwilling to provide information or license its AI tool to a replacement provider – but there may be additional issues with data migration to consider in this context as well. It may not be possible to provide training data (e.g. belonging to third parties) to a replacement provider, nor to extract customer data that is held in a data lake. It will be important to mitigate these issues in the contract. 

Furthermore, a clause will need to be included on usage restrictions and other license terms of the AI tool itself or of the generated output. The supplier may want to limit the use of generated output to internal use only (and thus prohibit any further commercialisation).

F. Data Sources to Train Large Language Models

The negotiation of GenAI agreements demands a meticulous examination of data sources used to train LLM and generate output. This scrutiny is essential to prevent future practical and legal complications. A comprehensive legal review should assess whether collected data adheres to legal requirements for machine-learning purposes. This assessment requires a deep dive into the company’s existing terms of service, privacy policy statements, and other customer-facing contractual terms to ascertain the permissions obtained from customers or users. Different types of data raise distinct consent and liability issues. Personal identifiable information, synthetic content generated by other AI systems, and third-party intellectual property all require careful evaluation. 

To proactively address these concerns, many providers implement data minimization, utilizing only necessary data, and transparently explain how LLMs use data for training. This transparency promotes trust and minimizes potential legal challenges. 

Service providers can enhance their offerings by training AI on aggregated customer data. However, customers are understandably hesitant to share commercially sensitive information that could benefit competitors. Confidentiality, data protection, intellectual property rights, information security, and post-termination provisions warrant close examination to ensure adequate safeguards for customer data. 

Further complexities arise when determining ownership of AI improvements stemming from training data. Various approaches exist, ranging from complete data segregation, where customers own all rights to improvements, to “AI as a service” models where the customer contributes to a shared data pool and the service provider retains ownership of the AI and improvements. Hybrid approaches, striking a balance between customer data protection and service provider innovation, can also be explored.

Developers and providers must exercise caution when using copyrighted content for model training. Infringing third-party intellectual property rights in training data risks output infringement if the output substantially copies the training data. Licensing emerges as the best strategy to mitigate copyright infringement claims. To address concerns, some developers offer to defend customers against copyright infringement claims arising from their AI assistant’s outputs. Such intellectual property protections could become an industry standard, fostering confidence and reducing legal risk. 

These considerations significantly impact the potential liability of suppliers, providers, and developers, influencing their negotiation strategies for indemnification, representations, and warranties in contracts. Customers will demand clarification on training data sources and seek contractual protection against third-party infringement claims. This could include warranties ensuring that AI training processes and outputs do not infringe third-party intellectual property rights or, alternatively, the implementation of technical measures or tools to minimize infringement risks. 

Finally, when AI interacts with third-party software tools, customers should verify that third-party licenses support such use. For example, if licenses are based on human users, additional licenses might be required for AI utilization. This verification safeguards against potential legal disputes and ensures compliance with licensing agreements.

The negotiation of GenAI agreements is a complex process requiring careful consideration of data sources, intellectual property, and liability issues. By proactively addressing these concerns through transparent practices, robust legal frameworks, and effective contractual provisions, stakeholders can navigate the evolving landscape of GenAI and foster responsible innovation.

G. Liability

As many parties are involved in an AI system, it may be difficult to establish who will be held liable, and many factors should be taken into consideration. These include whether the AI system was following instructions, whether damage can be traced back to the design or production of the AI system, and whether the AI system was subject to any general or specific limitations. Contributory negligence will also be considered as a factor here. 

Negotiating liability provisions in AI contracts requires careful consideration of the unique challenges posed by this technology. While many principles align with those in broader technology and services contracts, AI presents specific risks that warrant dedicated attention. In traditional technology transactions, suppliers often seek to limit their liability for breaches of representations and covenants, such as those related to documentation, quality, or third-party consents. This typically involves capping liability for damages or outlining exclusive remedies like repair, reperformance, or specified credit or liquidated damages payments. However, AI introduces new complexities. One key challenge is the attribution of fault, particularly in scenarios involving human-AI interaction. For example, in the context of robotic surgical assistance, determining responsibility for an adverse event can be complex, raising questions about how to apply contractual provisions to such situations. 

AI’s potential for widespread application also necessitates consideration of scenarios where bodily injury or harm to individuals is a risk, such as AI-driven medical procedures or autonomous vehicles. Moreover, AI’s ability to fail in systemic yet difficult-to-detect ways presents a unique challenge. Damages could accumulate rapidly before either party realizes an issue exists. 

These factors necessitate careful consideration of two crucial contract provisions: disclaimers and limitations on liability. In standard technology contracts, suppliers often disclaim implied warranties and restrict remedies for quality or non-compliance issues. This practice is most likely to extend to AI contracts, with suppliers seeking to ensure customers assume risks associated with AI use, particularly in human-AI interactions or high-risk applications. This could involve contractual obligations for customers to understand AI features, proper use, and inherent risks, and to confirm receipt of related documentation. 

Customers, however, should avoid accepting overly broad disclaimers that completely absolve suppliers from liability for AI-caused harm. If customers can secure adequate documentation and quality assurances, they may be able to negotiate exceptions for express warranties. Additionally, customers should ensure that limitations on remedies do not preclude other claims arising from the same transaction and avoid language that restricts remedies when the customer is partially at fault, given their limited AI expertise. 

In AI contracts, suppliers’ liability limitations will largely mirror those in other technology agreements. However, customers face unique challenges due to the nature of AI. They should avoid limiting suppliers’ liability for consequential damages arising from AI law violations, particularly given transparency issues and the limited understanding of AI among most users. Similarly, customers should avoid limiting liability for damages caused by AI suspension due to regulatory actions traceable to the supplier. 

While both parties may seek to limit liability based on fault, attributing fault can be complex in human-AI interactions. A third-party neutral or an alternative dispute resolution mechanism could be employed to facilitate fault attribution. 

Even with liability exclusions, customers may reasonably expect suppliers to defend against third-party claims, leveraging their superior AI knowledge. An independent fault attribution mechanism can foster collaboration in defense efforts and facilitate equitable cost allocation. 

In conclusion, negotiating liability provisions in AI contracts requires a nuanced approach that balances the unique challenges posed by this technology with the need for clear and enforceable contractual terms. By carefully considering the risks inherent in AI, both suppliers and customers can ensure that their agreements adequately protect their interests. However, it is also reasonable to consider limitations of liability in favor of suppliers, given the evolving nature of AI technology. Suppliers may argue that due to the nascent stage of AI development, certain limitations on liability are justified to encourage innovation and investment in AI technologies. This balanced approach ensures that while customers are protected, suppliers are not unduly burdened, allowing for a fair distribution of risks and responsibilities.

III. USE OF AI TOOLS TO DEVELOP SOFTWARE

The legal implications of AI contracting can be especially complex when a supplier utilizes AI tools not to directly provide AI services, but to develop software that is subsequently delivered to customers. In such scenarios, the contractual terms governing the AI tool become paramount, as the supplier might be reluctant to make guarantees to its customer that extend beyond those terms. For instance, if the AI tool provider does not offer indemnification for third-party intellectual property infringement arising from the generated output, the supplier may be similarly unwilling to provide such assurance to its customer. Conversely, even with an indemnification clause in the AI tool contract, the extent to which it applies to modifications made by the supplier during software development remains a question. 

Similarly, the ownership of the generated output can pose challenges. If the AI tool disclaims ownership of generated output without guaranteeing non-infringement of third-party intellectual property rights, the supplier’s contract with the customer may lack similar warranties, potentially leaving the ownership of the final software unclear. 

Finally, the use of open-source software by AI-powered development processes further complicates matters. Typically, suppliers can control the use of open-source software in software development, mitigating copyleft license risks. However, when an AI tool operates as a “black box” – where training processes and data are obscured from the user – it becomes difficult for the supplier to offer similar assurances regarding open-source software usage.
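
The kind of check a supplier might run before shipping AI-assisted code can be sketched as follows. The component names, and the choice of which SPDX identifiers count as copyleft, are hypothetical assumptions for illustration; in practice a supplier would rely on a software composition analysis or SBOM tool rather than a hand-maintained list.

```python
# Hypothetical copyleft list; a real review would follow the supplier's
# open-source policy and use dedicated license-scanning tooling.
COPYLEFT = {"GPL-2.0", "GPL-3.0", "AGPL-3.0", "LGPL-3.0"}

def flag_copyleft(components: dict) -> list:
    """Return the names of components whose declared license is copyleft."""
    return [name for name, license_id in components.items()
            if license_id in COPYLEFT]

dependencies = {  # hypothetical dependency -> declared license
    "fastjson-lib": "MIT",
    "netcore-utils": "GPL-3.0",
    "render-kit": "Apache-2.0",
}
print(flag_copyleft(dependencies))  # → ['netcore-utils']
```

The contractual difficulty described above is precisely that, with a black-box AI tool, the supplier may not be able to produce the input to such a check – a reliable inventory of what the generated code actually incorporates.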

IV. CONCLUSION

In conclusion, the intricate task of balancing a company’s need for improved quality checks with the goal of achieving cost savings through GenAI solutions presents a notable challenge. Nevertheless, these obstacles can be addressed with advanced technology and meticulous human supervision. GenAI applications, from predictive analytics to virtual assistants, are revolutionizing traditional software contract models, offering unique opportunities while also introducing new legal challenges. 

The emergence of AI contracts brings numerous contractual issues, particularly in the realm of intellectual property. This necessitates that both suppliers and customers thoroughly review their existing standard software contracts and devise innovative, unconventional solutions, requiring compromises from both parties. It is evident that adopting an AI-centric approach is crucial for both suppliers and customers to fully capitalize on the benefits of AI contracting. 

AI tools can significantly impact outsourcing and other service contracts, promising substantial gains in cost, time, accuracy, scalability, and productivity, benefiting both negotiating parties. To harness these advantages, it is essential to remain vigilant about the “new” risks associated with AI usage in these arrangements. As discussed in this briefing, some of these risks may be entirely new, while many are reconfigurations of familiar challenges. As AI technology continues to evolve rapidly, and the full extent of its risks remains uncertain, there are established contractual mechanisms in service contracts that parties can employ and develop now to mitigate known risks and incorporate flexibility for yet-to-be-understood risks. Businesses should aim to proactively address these issues.

Keywords
Artificial Intelligence, Generative AI, Large Language Models, AI Tools Supply Contracts, AI as a Service Contracts, AI Tool Development Contracts.