Agentic...Zombies?
A summary of an interesting recent paper
James Toomey, who teaches contracts at Iowa, has a forthcoming Harvard JOLT article called Zombies, AI, and the “Objective” Theory of Contracts. I read it recently and want to recommend it. It’s short and funny, takes a piece of canonical doctrine that everyone teaches and almost nobody believes, and asks whether the AI moment can finally let us say joojooflop about the whole shebang.
I think the paper gets one big thing right and one thing less right, and the less-right bit is probably where the action is going to be. So this post is partly a recommendation and partly a complaint, in roughly that order. I’ll also flag, by way of disclosure, that I have a recent white paper with Bridget McCormack at the American Arbitration Association that comes at this same problem from the opposite end.
What Toomey Argues
Start with the doctrine. Holmes told us that “[t]he making of a contract depends not on the agreement of two minds in one intention, but on the agreement of two sets of external signs—not on the parties’ having meant the same thing but on their having said the same thing.” Lucy v. Zehmer says it more bluntly: “[t]he mental assent of the parties is not requisite for the formation of a contract.”
Toomey says that generations of 1Ls have been taught that the “objective theory” won the twentieth century and the “subjective” alternative is some quaint thing French law still does. (Though, obviously, the A exams know that Lucy itself is in fact a mixed-methods case: not only must a reasonable person believe the communicated words create the contract, but the plaintiff has to subjectively share that belief. That two-part test is the law, not the Holmesian version.)
Toomey’s setup is straightforward. Take the objective theory at its word and now imagine two large language models exchanging the following:
LLM A: “I can do 1,000 widgets for $1,000.”
LLM B: “You got a deal!”
Show that to a reasonable third party and they’ll tell you it sure looks like manifested assent. The objective theory therefore says you have an enforceable contract — that the state ought to spend taxpayer dollars enforcing it, “by the state’s monopoly on violence if necessary.” Some scholars are happy to take that conclusion seriously and run with it. Toomey thinks this is “rather profoundly out to lunch.”
The argument has two moves.
The first move is that LLM outputs may not mean anything at all. On one prominent view in philosophy of language, meaning just is what a speaker is intentionally trying to communicate — and LLMs aren’t trying to communicate anything. Even on the more popular view that well-formed sentences carry “literal meaning” independent of any speaker, everyone in the debate agrees that some intentional source is needed for a string to count as a communicative act in the first place. As Toomey puts it, if cloud shapes happen to spell out “If you build it, he will come,”1 that doesn’t mean anything. LLMs are, on this account, parrots without bodies.
The second move is normative. The standard justifications for contract enforcement — autonomy, reliance, relational solicitude, market insight — all make ineliminable reference to the mental states of persons. None of them apply to LLMs. So even if you could enforce the literal meaning of zombie agreements, there’s no reason to bother.
From there Toomey reaches for the alternative offered by the late, great Professor Larry Solan: the “objective theory” isn’t a normative commitment to ignoring mental states; it’s an evidentiary rule designed to enforce genuine subjective agreements while shielding against fraud. The reason we don’t let Zehmer escape his deal by claiming the whole thing was “just a bunch of two doggoned drunks bluffing” isn’t that we don’t care whether he meant it. It’s that we don’t trust his self-serving post-hoc account.2 The point of the rule is to enforce what people actually agreed to, given what we can observe and what they’d be incentivized to fabricate.
Toomey’s twist is that the LLM hypothetical functions as a clean test of the strong form of the objective theory. If the strong reading entails enforcing zombie contracts, and zombie contracts shouldn’t be enforced, the strong reading must be wrong. Solan’s evidentiary reading survives.
I am basically with Toomey on the theory. And it’s certainly true that American contract law has a mixed perspective on whether we care about parties’ real meaning and intent. The honest description of the doctrine can sometimes be closer to Solan’s evidentiary story than to the high-modern behaviorist one professors recite, though doctrine is messy, contingent, and resists generalization. All cases that might exist probably will!
Where We Part Ways
The trouble is that Toomey’s confidence rests almost entirely on the easy hypothetical — two LLMs randomly emitting strings at each other, neither hooked into anything a human cares about — and the easy hypothetical is not the case decisionmakers are going to see.
Three problems.
First, there’s a lot of statutory law on this that Toomey doesn’t really wrestle with. UETA Section 14 — adopted in 49 states — provides that “[a] contract may be formed by the interaction of electronic agents of the parties, even if no individual was aware of or reviewed the electronic agents’ actions or the resulting terms and agreements.” The official comment explains that “when machines are involved, the requisite intention flows from the programming and use of the machine.” The federal E-SIGN Act says the same thing. These statutes don’t ask whether the AI has a mental state. They legislate an attribution rule: whatever the agent did, in legal effect, the deployer did.
That’s a substantial answer to Toomey’s philosophical problem, and the paper waves it off. He’s of course free to say (as he does) that this attribution is “fictional” and that the fiction needs normative justification of its own. But to readers of UETA’s drafting history, the fiction is doing exactly the work Solan would want it to: it solves the evidentiary problem of figuring out which human is on the hook for an automated transaction. A philosophical argument that culminates in “we should not enforce contracts formed by electronic agents” reads, in 2026, like a claim that 49 state legislatures got it wrong.
Second, the empirical bracket is doing too much work. Toomey concedes, in a single paragraph in Part III.B, that algorithmic trading is “already happening” and assumes the whole space can be handled by a clean “incorporation by reference” move: Human A and Human B agree to be bound by the output of LLM C, and contract law enforces their agreement, not the LLM’s. Fine for the easy version. But the assumption that every real-world case reduces to that structure is a stipulation, not an argument.
Consider the cases that are coming. In the paper that Bridget McCormack and I just circulated, we walk through four fact patterns that seemed to us very likely to arise in the near future:
The ratification class action. An enterprise deploys a procurement agent with broad authority. Over a weekend, the agent completes 12,000 transactions across hundreds of suppliers. A configuration error caused the agent to accept an indemnification clause running in the supplier’s favor. On Monday, operations proceed normally — goods accepted, shipments moving, invoices paid. Black-letter ratification doctrine says the principal who knowingly accepts the benefit of an unauthorized act has ratified it, including the terms she never saw. The defense — “I didn’t know” — runs straight into willful blindness.
The consumer protection action. An agent acting for consumers accepts warranty disclaimers that fail Magnuson-Moss disclosures or terms that violate state UDAP statutes. Per-violation penalties, fee-shifting, and class-action mechanisms all compound at machine scale.
The evidentiary contest. Two agents negotiate dynamically. When something breaks, discovery produces the published terms, the negotiated modifications, the agents’ logs, and the platform’s configuration. Which document is the integrated agreement? Does the dispute-resolution clause in the published terms reach the negotiated modifications? Nobody has a standardized answer because nobody built the record-keeping protocol in advance.
The liability cascade. A single failed transaction touches the buyer, the supplier, the procurement platform, the agent framework provider, the model provider, the tool/API vendors, and the payment processor. Without ex ante allocation, every participant is exposed on every available theory and the third-party complaints multiply for years.
None of these reduces to “two zombies talking to each other, disconnected from anyone’s mental states.” All of them have human principals, deployer organizations, and counterparties with real reliance interests. But, equally, none of them satisfies Toomey’s clean condition that “two entities with mental states actually agreed on something” — at least not in any sense the deployer would recognize, since the deployer didn’t know what was being agreed to and may have actively avoided knowing.
This produces a bit of a gap in Toomey’s framework (and, I think, in most contemporary contract thought). The law we have probably does largely distinguish between (a) zombie agreements between unanchored LLMs, which shouldn’t be enforced, and (b) human-to-human agreements that incorporate algorithmic outputs by reference, which should. But the likely disputes live in a third category this framework doesn’t have a great account of: human-deployed agents transacting with humans (or with other deployed agents), where the principals authorized the agent but not the transaction, and where the doctrines that fill the gap — apparent authority, ratification, conscious ignorance — were written for human salesclerks and their half-dozing supervisors, not LLMs.
Third, the idea that there’s no normative reason to enforce agentic contracts proves too much. In the real cases there are obvious candidate reasons: third-party reliance (the supplier who shipped the widgets), administrative cost (forcing every agent-mediated transaction into bespoke “real agreement” litigation will be ugly for everyone), and — crucially — the prevention of strategic behavior by the very humans we want to protect. If “my agent did it, not me” becomes a clean defense, every deployer will deploy more aggressively and disclaim more comprehensively. We’ve already seen this movie with AI terms of use: providers disclaim all warranty, all reliance, all output accuracy. A framework that denied normative legitimacy to agentic agency might let deployers disclaim assent itself.3
All that said, James’ paper is good and short and you should read it!
1. “If you build it, he will come,” from Field of Dreams (1989), is now as old, relative to us, as Lucy and Zehmer’s drunken farm sale was to the release of Field of Dreams. Just in case you wanted to feel bad about yourself.
2. I teach the case as subject to criticism on the ground that Lucy didn’t believe the parties had entered into a real deal either, because (1) he suspiciously brought a witness and liquor to the transaction; (2) he scrambled to formalize it the next day; and (3) his offer of $5 to seal the deal suggests doubt as to whether it was meant to be real. I realize these arguments are all flippable.
3. The early Anglo-American case law isn’t waiting for the philosophy to settle. Last year a Canadian tribunal held Air Canada liable for a refund its customer-service chatbot promised, even though the chatbot’s promise contradicted the airline’s published policy. Air Canada argued the chatbot was “a separate legal entity that is responsible for its own actions” — exactly the move Toomey’s framework makes available. The tribunal rejected it in one sentence: the chatbot is part of Air Canada’s website; Air Canada is responsible for what’s on its website.


