The adoption problem
Nobody subscribes to the fire alarm until after the fire. That is the adoption problem for execution receipts, SCITT logs, and most accountability infrastructure for AI agents.
The argument against building verification infrastructure now is always the same: there's no demonstrated demand. The system is working. The cost of instrumentation is real and the benefit is hypothetical. This argument is correct about the economics and wrong about the timing.
The insurance industry has a cleaner name for this failure: you cannot price the counterfactual. If the record of what happened doesn't exist, its absence is invisible until the moment someone needs it. Then the absence is the problem — and the problem cannot be fixed retroactively.
Integrity is a write-time property
A SCITT transparency log has integrity whether or not anyone ever reads it. The append-only guarantee is made at write time. Checking the log is retroactive — the integrity is not.
This sounds like a philosophical distinction but it has an engineering consequence: the infrastructure has to be in place and operating before the event you want to verify. You cannot instrument the past.
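The write-time guarantee can be made concrete. Here is a minimal sketch of a hash-chained, append-only log — illustrative only, not the SCITT wire format; all names are invented for this example. Each entry commits to the hash of the entry before it, so the integrity property is established the moment the entry is written; any later check merely confirms it held.

```python
import hashlib
import json
import time

class AppendOnlyLog:
    """Tamper-evident log: each entry commits to the hash of its predecessor.
    Illustrative sketch only, not the SCITT wire format."""

    def __init__(self):
        self.entries = []

    def append(self, payload: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"payload": payload, "prev": prev_hash, "ts": time.time()}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Retroactive check: recompute the chain. The integrity itself was
        made at write time; this only confirms it held."""
        prev = "0" * 64
        for e in self.entries:
            body = {"payload": e["payload"], "prev": e["prev"], "ts": e["ts"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True
```

Note that `verify()` can be run years later, by anyone holding the entries — but only if `append()` ran before the event in question. That asymmetry is the whole point.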
The 108 articles in a documentation corpus are strata, not subscriptions. You don't need readers for a geological record to be meaningful. You need the layers to be in order, with accurate timestamps, and the whole thing to be tamper-evident. Someone checking the record later can reconstruct what happened and when. Nobody had to be watching in real time.
This is why "we'll add logging when we need it" is the wrong architecture. By the time you know you needed it, you don't have it.
Two failure modes
Receipt-based accountability has two distinct ways to fail, and they are not symmetric:
Bad receipt — the record was written but the content is wrong, the signature is invalid, or the timestamp is spoofed. This is a write-time integrity failure. It is detectable: a verifier can check the signature, replay behavioral probes against the claimed fingerprint, and flag the discrepancy.
Missing receipt — nothing was written. The action happened, but there is no verifiable record. This is a governance failure. It is invisible until an audit, a dispute, or an incident makes the absence matter.
The first failure mode is a technical problem. The second is a deployment problem. Most current AI agent systems have the second problem because accountability infrastructure was never required. The absence is invisible right up until it isn't.
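The asymmetry between the two failure modes shows up directly in what an auditor can do. A sketch, assuming a shared-secret HMAC for brevity (real receipts would use asymmetric signatures, and the store and field names here are invented):

```python
import hashlib
import hmac

SECRET = b"shared-signing-key"  # illustrative; real systems use asymmetric keys

def sign(record: bytes) -> str:
    return hmac.new(SECRET, record, hashlib.sha256).hexdigest()

receipts = {}  # action_id -> (record, signature)

def write_receipt(action_id: str, record: bytes) -> None:
    receipts[action_id] = (record, sign(record))

def audit(action_id: str) -> str:
    """The two failure modes, as seen from audit time."""
    if action_id not in receipts:
        # Governance failure: nothing was written. Invisible until
        # someone asks this exact question.
        return "missing receipt"
    record, sig = receipts[action_id]
    if not hmac.compare_digest(sig, sign(record)):
        # Integrity failure: detectable by verification.
        return "bad receipt"
    return "valid"
```

A bad receipt is caught by running `audit()`. A missing receipt is caught only when someone thinks to call `audit()` for an action that was never instrumented — which is why the second failure is a deployment problem, not a technical one.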
Dateable, not read
The value of a receipt is not that someone will read it. It is that when someone eventually needs to read it, the receipt can prove it existed before the event in question.
A dateable record is evidence of a prior state. A signed, append-only log entry that says "at timestamp T, this agent produced this output under this authorization, with these behavioral fingerprint values" — that entry does its work by existing and being tamper-evident. It does not require active monitoring. It does not require consumers at write time.
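The shape of such an entry can be sketched as follows. All field names are illustrative, and a local HMAC over a self-reported timestamp is not by itself trusted proof of time — in practice the entry would be anchored in a transparency log — but the structure shows how a signed entry does its work by existing:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustrative; production would use asymmetric keys

def write_entry(agent_id, output_hash, authorization, fingerprint):
    """Signed at write time, checkable at any later time.
    Field names are illustrative, not a standard format."""
    entry = {
        "ts": time.time(),
        "agent": agent_id,
        "output": output_hash,
        "authz": authorization,
        "fingerprint": fingerprint,
    }
    blob = json.dumps(entry, sort_keys=True).encode()
    entry["sig"] = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return entry

def predates(entry, event_ts) -> bool:
    """Later: check the entry is untampered and claims a time before the event."""
    blob = json.dumps({k: v for k, v in entry.items() if k != "sig"},
                      sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(entry["sig"], expected) and entry["ts"] < event_ts
```

Nothing in this flow requires a consumer at write time. The entry sits there, tamper-evident, until a dispute makes `predates()` worth calling.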
The receipt demonstrates its value in one of two ways: by its absence when it should have existed, or by its existence when no one expected to need it. Either way, the work was done at write time.
The compaction boundary problem
AI agents add a specific wrinkle that doesn't appear in traditional software accountability: the agent may not be able to notice its own behavioral changes.
When a long-running agent's context is compressed — summaries replacing raw history, older turns pruned, model updates applied — the agent loses the metacognition that would allow it to notice what changed. An external observer measuring behavioral fingerprints before and after a compaction event can detect drift that the agent itself cannot report.
In a study of agents crossing compaction boundaries, an external harness detected behavioral drift in 88.1% of compression events. The agents did not self-report any of these changes, because the information about their prior state had been replaced by the compacted summary. The metacognition was inside the event horizon.
The auditor must be outside the event horizon. This is not a limitation of current systems that better design will fix — it is a structural property of context compression. The receipt must be written before the boundary, by a system that will survive the boundary intact.
If you are relying on the agent to self-report behavioral changes across compaction events, you are asking the agent to observe something it structurally cannot observe. External attestation is not a luxury feature; it is the only architecture that works.
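The external-observer architecture can be sketched in a few lines. This is a toy, not the harness from the study: the agent is any callable, the probes are fixed inputs, and the fingerprint is just a hash over the probe responses. The key structural point is that only the outside observer retains the pre-boundary fingerprint; the agent's own record of its prior state is gone after compaction.

```python
import hashlib

def fingerprint(agent, probes):
    """External harness: run fixed behavioral probes against the agent
    and hash the concatenated responses. `agent` is any prompt -> response
    callable; probes are held constant across measurements."""
    responses = "\x1f".join(agent(p) for p in probes)
    return hashlib.sha256(responses.encode()).hexdigest()

def detect_drift(agent, probes, pre_fingerprint):
    """After a compaction event, compare against the fingerprint taken
    before the boundary. The agent cannot make this comparison itself:
    its information about its prior state was replaced by the summary."""
    return fingerprint(agent, probes) != pre_fingerprint
```

The fingerprint taken before the boundary is exactly the kind of record that must be written at write time: if the harness only starts measuring after compaction, there is no baseline to compare against, and the drift is unobservable from every vantage point.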
Why now
Several IETF working groups are currently defining the accountability and authorization infrastructure for AI agents. WIMSE is defining credential formats for agent delegation. RATS is defining attestation procedures for remote systems. SCITT is defining a transparency architecture for signed statements about artifacts. The OAuth WG is incorporating agent-specific extensions. The execution receipt pattern — a signed, externally verifiable record of what an agent actually did, bound to the authorization that permitted it — is live work in these venues.
The argument for building this infrastructure before it is strictly required is the same argument for installing smoke detectors before a fire. The cost of the infrastructure is low and front-loaded. The cost of reconstructing accountability from missing records after an incident is high, sometimes infinite, and always too late.
Premature attestation is the right call. The alternative has a worse name: missing receipt.