AI's Terms of Use
There’s a ton of talk about artificial intelligence liability in legislatures and executive offices. Congress is holding hearings. The Administration is proposing executive action preempting state law, while California has pushed transparency mandates for “frontier models” and Ron DeSantis positions himself as an AI skeptic and a defender of traditional state prerogatives. The shared premise of all this activity is that the law has not yet decided who bears responsibility when AI systems cause harm. And political gurus urge ambitious politicians to attend to the public’s AI skepticism and propose new legal regimes to cabin AI harms.
But reading OpenAI’s terms of use — and a great piece from Olga Mack at Stanford CodeX surveying the field — I got the sense that this debate is missing something pretty important. The allocation of AI liability is already being set in the contractual boilerplate of enterprise AI and AI-enabled consumer services agreements.1
Those agreements redefine performance, disclaim reliance, cap liability at trivial levels, and shift regulatory exposure downstream. Now, it’s true that this attempt at private governance may not hold up under pressure, and that pressure will sometimes be felt in court, since only some of the relevant terms shunt disputes to arbitration.2 But it strikes me as kind of wild that these foundational documents have gotten so little public scrutiny.
I. Accuracy Is No Longer a Contractual Obligation
Start with the most basic question contract law asks: what has the seller promised to do?
In the AI terms I looked at, the answer is increasingly: very little.
Consider the Onit Services Agreement, governing AI tools marketed for legal and contract analysis. The agreement provides:
“The AI Features and any Output are provided ‘as is’ and ‘as available.’ To the maximum extent permitted by law, Company disclaims all warranties, express or implied, including accuracy, merchantability, fitness for a particular purpose, non-infringement, and error-free or uninterrupted operation with respect to the AI Features and Output.”
This is not a narrow disclaimer. Accuracy, fitness, and reliability are not merely limited; they are excluded entirely from the contractual undertaking. The AI feature may work. Or it may not. You can’t sue either way.
Motorola Solutions uses similar language in its standalone AI terms:
“THE AI FEATURE AND AI OUTPUT ARE PROVIDED ‘AS IS’. MOTOROLA DISCLAIMS ALL WARRANTIES REGARDING THE ACCURACY, COMPLETENESS, OR RELIABILITY OF AI OUTPUT.”
Claude says:
Our team works hard to provide great services, and we’re continuously working on improvements. However, there are certain aspects we can’t guarantee. We are using ALL CAPS to explain this, to make sure that you see it.
YOUR USE OF THE SERVICES, MATERIALS, AND ACTIONS IS SOLELY AT YOUR OWN RISK. THE SERVICES, OUTPUTS, AND ACTIONS ARE PROVIDED ON AN “AS IS” AND “AS AVAILABLE” BASIS AND, TO THE FULLEST EXTENT PERMISSIBLE UNDER APPLICABLE LAW, ARE PROVIDED WITHOUT WARRANTIES OF ANY KIND, WHETHER EXPRESS, IMPLIED, OR STATUTORY. WE AND OUR PROVIDERS EXPRESSLY DISCLAIM ANY AND ALL WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE, TITLE, MERCHANTABILITY, ACCURACY, AVAILABILITY, RELIABILITY, SECURITY, PRIVACY, COMPATIBILITY, NON-INFRINGEMENT, AND ANY WARRANTY IMPLIED BY COURSE OF DEALING, COURSE OF PERFORMANCE, OR TRADE USAGE.
Well, that’s clear enough.
II. Non-Reliance Clauses for Tools Meant to Be Relied Upon
Disclaiming accuracy is only the first move. The next is to renounce reliance.
Motorola’s AI terms continue:
“Customer is solely responsible for independently verifying all AI Output before reliance or use and acknowledges that such output may contain errors or omissions… Motorola shall not be liable for any damages arising from Customer’s use of, or reliance upon, the AI Feature or AI Output.”
OpenAI’s current Services Agreement is more explicit:
“Customer is solely responsible for all use of the Outputs and for evaluating the accuracy and appropriateness of Output for Customer’s use case.”
And Claude’s terms say:
Reliance on Outputs and Actions. Artificial intelligence and large language models are frontier technologies that are still improving in accuracy, reliability and safety. When you use our Services, you acknowledge and agree:
Outputs may not always be accurate and may contain material inaccuracies even if they appear accurate because of their level of detail or specificity.
Actions may not be error free or operate as you intended.
You should not rely on any Outputs or Actions without independently confirming their accuracy.
The Services and any Outputs may not reflect correct, current, or complete information.
Outputs may contain content that is inconsistent with Anthropic’s views.
Grok pulls no punches.
Output may not always be accurate. Output from our services is not professional advice. You should conduct your own thorough research and should not rely on Output as the truth.
You are responsible for evaluating the Output for accuracy and appropriateness for your use, including using human review and supervision, before using or sharing Output.
Our Service may provide incomplete, incorrect, or offensive Output that does not represent xAI’s views. Outputs are not meant to endorse a person or third-party’s views.
I imagine this language is aimed at heading off tort claims (on some products-liability-like ground) and at functioning as a libel shield. The result is a peculiar posture: these systems are sold precisely because they generate analysis, predictions, drafts, and classifications, but the terms we click to accept insist that those outputs are merely informational. Or to put it differently, most of the country’s current economic growth is generated by products that their makers’ legal counsel tells us are functionally toys.
III. Liability Caps That Make Scale Irrelevant
Suppose a customer clears the hurdles above and finds a liability hook anyway. Damage caps do the rest.
One AI-related limitation clause, from a publicly available AI contract clause generator, reads:
“IN NO EVENT SHALL Provider’s AGGREGATE LIABILITY ARISING OUT OF OR RELATED TO THE AI-GENERATED CONTENT… EXCEED THE TOTAL AMOUNT PAID BY Client TO Provider UNDER THIS SERVICE AGREEMENT. IN NO EVENT SHALL Provider BE LIABLE FOR ANY CONSEQUENTIAL, INCIDENTAL, INDIRECT, EXEMPLARY, SPECIAL, OR PUNITIVE DAMAGES.”
This structure is familiar from SaaS contracts, but its effect is a little different in the AI context. AI systems increasingly sit upstream of consequential decisions: employment, compliance, finance, healthcare. The contract excludes downstream harm once the fee cap is hit. Most of the contracts I looked at capped liability at $100 or at the fees paid over some reasonably short recent period.
IV. Indemnification Runs Uphill
AI contracts also invert traditional indemnification structures.
Rather than indemnifying customers for harms caused by the system, vendors often require customers to indemnify the vendor for claims arising from the output — even when the output is generated by the vendor’s model. As Perplexity explains:
Indemnification. By entering into these Terms and accessing or using the Services, you agree that you shall defend, indemnify and hold the Company Entities harmless from and against any and all claims, costs, damages, losses, liabilities and expenses (including attorneys’ fees and costs) incurred by the Company Entities arising out of or in connection with: (a) your violation or breach of any term of these Terms or any applicable law or regulation; (b) your violation of any rights of any third party; (c) your misuse of the Services; (d) Your Content; or (e) your negligence or wilful misconduct. If you are obligated to indemnify any Company Entity hereunder, then you agree that Company (or, at its discretion, the applicable Company Entity) will have the right, in its sole discretion, to control any action or proceeding and to determine whether Company wishes to settle, and if so, on what terms, and you agree to fully cooperate with Company in the defense or settlement of such claim.
While the exact language varies by vendor, the structure is consistent: claims arising from the use of output belong to the customer, who has to stand behind it, even when the vendor generated the content.
V. Ownership Without Responsibility
Some vendors emphasize that customers “own” AI outputs. Scale AI’s master services agreement, for example, provides:
“As between Customer and OpenAI, to the extent permitted by applicable law, Customer retains all ownership rights in Input; and owns all Output.”
My own ChatGPT license similarly has a reassuring flavor:
Ownership of content. As between you and OpenAI, and to the extent permitted by applicable law, you (a) retain your ownership rights in Input and (b) own the Output. We hereby assign to you all our right, title, and interest, if any, in and to Output.
Ownership sounds protective, but in practice it functions as a liability-allocation device. As the indemnification discussion illustrates, ownership without warranty means the customer owns not just the output, but the risk that comes with it. The vendor’s control over the model does not translate into responsibility for what the model produces.
VI. Compliance Risk Is Also Outsourced
Finally, AI contracts routinely disclaim regulatory compliance.
Customers are told they are solely responsible for ensuring lawful use of outputs, even where statutes target system behavior rather than user intent. As states experiment with AI governance — transparency mandates in California, discrimination rules in Colorado — contracts shift the risk of failing to comply to the customer.
VII. What This Adds Up To
As Olga Mack explains in her excellent survey of these terms from last year, AI vendors’ liability terms sweep even more broadly than the SaaS swamp they emerged from:
The latest data reveals that 92% of AI vendors claim broad data usage rights, only 17% commit to full regulatory compliance, and just 33% provide indemnification for third-party IP claims—all of which contrast sharply with broader SaaS market norms . . .
According to TermScout data, 88% of AI vendors impose liability caps, aligning closely with broader SaaS trends (81%), yet only 38% cap customer liability, compared to 44% in broader SaaS agreements. This imbalance shifts financial and legal burdens onto customers, possibly leaving them with limited recourse for AI failures—whether due to biased hiring decisions, flawed financial models, or security risks . . .
AI’s autonomous and adaptive nature likely introduces risks that static liability caps fail to address, necessitating risk-adjusted liability frameworks based on AI’s level of control, customer input, and regulatory exposure. Legal tech could play a key role in developing contract analysis tools, risk quantification models, and AI liability insurance markets to fill gaps in vendor accountability. As courts and regulators push for stricter AI liability laws, legal professionals and innovators have a unique opportunity to shape fairer risk-sharing models, so that AI vendors assume appropriate responsibility while allowing innovation to thrive.
The question that remains open is whether these contractual shields will hold up. AI companies now face product liability claims, UDAP actions, wrongful death suits, and privacy litigation. But courts haven’t yet seriously tested the terms-of-use defenses sitting in these companies’ back pockets.
Two paths forward seem possible.
First, contract law itself: Do these terms do so much to exculpate firms that they make the underlying obligations illusory? Are they unconscionable? Do they violate public policy in ways that might give harmed consumers a path to recovery?
Second, legislation: Lawmakers eager to regulate AI should ask a threshold question before designing new liability regimes—do we want to live in a world where boilerplate alone defines the relationship between citizens and the systems increasingly making decisions about their lives?
Because right now, that world is the one we’re clicking “I agree” to accept.



Great post. There are some interesting comparisons with the terms of service of social media platforms and with the FTC investigations and settlements over some of the social media companies’ early broken promises. I am a little surprised not to see statements that essentially say: hallucinations are inherent in AI machine processing, and responsible AI use requires human checks on AI-generated output. Customer interaction with AI is within the realm of a “guidance service,” and the vast majority of users will be amateurs, so human checks are likely to be flawed. My HOA leadership uses AI for most interactions, and seeing AI-generated memos purporting to offer legal advice by gleaning the CCRs is so instructive to me on the limits of AI.