The problem WCA solves
When an AI agent calls a tool — a database, an API, a search endpoint — the data that comes back acquires the apparent trustworthiness of the interface rather than the institutional standing of the original source. The IETF draft calls this semantic laundering.
The example is straightforward: if your agent calls a third-party tool that returns news headlines, nothing in the current architecture tells the agent whether those headlines came from a credentialed news organization or a content farm. The tool-call boundary launders that distinction away.
draft-bondar-wca-00, filed by R. Bondar in March 2026, proposes a solution: Warrant Certificate Authorities (WCA), a cryptographic attestation infrastructure modeled on PKI and supply-chain security frameworks like SLSA and in-toto. Each tool-call gets an attestation that certifies the source, not the content. The draft introduces Warrant Attestation Levels (WAL-0 through WAL-3) — a graduated adoption path analogous to SLSA build levels.
WAL-0 is unattested. WAL-3 is full end-to-end cryptographic proof that data came from a specific certified source and traversed only verified interfaces to reach the agent. This is useful. It closes a real architectural gap. But the problem it solves is specifically about where data came from.
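The graduated levels can be modeled as an ordered enum. This is a sketch only: the draft (as described above) spells out WAL-0 and WAL-3, so the intermediate levels here are placeholders, and the `meets_minimum` helper is my illustration of what "graduated adoption path" implies, not an API from the draft.

```python
from enum import IntEnum

class WAL(IntEnum):
    """Warrant Attestation Levels, WAL-0 through WAL-3.

    Only WAL-0 (unattested) and WAL-3 (full end-to-end proof) are
    characterized in the text above; WAL-1 and WAL-2 are modeled here
    purely as ordered placeholders.
    """
    WAL_0 = 0  # unattested: no provenance claim at all
    WAL_1 = 1  # intermediate level (placeholder)
    WAL_2 = 2  # intermediate level (placeholder)
    WAL_3 = 3  # end-to-end cryptographic proof of source and chain

def meets_minimum(level: WAL, required: WAL) -> bool:
    """Graduated adoption: a higher level satisfies any lower requirement."""
    return level >= required
```

Because the levels are totally ordered, a deployment can ratchet its required level upward over time, the same way SLSA build levels are adopted.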
Attestation is not retention
Here is what a WCA attestation at WAL-3 tells you: this record came from a certified source, and the chain of custody is cryptographically verified.
Here is what it does not tell you: whether you are legally required to keep this record, legally required to delete it, or prohibited from deleting it even if a valid deletion request arrives.
These are different questions. Provenance attestation answers the first. Lifecycle classification answers the second. Conflating them is the same mistake as assuming that knowing who wrote a document tells you how long to keep it in a filing system.
Under EU AI Act Article 12, high-risk AI systems must automatically log events throughout their operational lifecycle. Under GDPR Article 17, users have the right to have their personal data deleted. These obligations exist simultaneously on the same pipeline. A WCA attestation that verifies a record came from your AI system's certified event log doesn't resolve the question of what happens to that record when an Art.17 deletion request arrives for the user whose data it contains.
The DSAR scenario
The sharpest version of this problem is the DSAR audit row, which I explored in The DSAR Trap. When a user submits a data subject access request and your pipeline processes it, the pipeline creates an audit record. That record contains the user's identity (to prove the request was processed) and constitutes mandatory compliance evidence under Art.12 (to prove you followed the legal process).
Now the user submits an Art.17 deletion request. Your deletion sweep runs. The audit row is identity-linked — it contains the user's ID and request details. A naive sweep deletes it. The compliance evidence is gone. You've complied with Art.17 by destroying the proof that you complied with Art.17.
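The failure mode above fits in a few lines. Everything in this sketch is hypothetical (record shapes, field names, the sweep function): it exists only to show how a sweep keyed on identity alone destroys the Art.12 evidence.

```python
# Hypothetical records: a user profile, and the DSAR audit row that
# proves the user's earlier access request was processed (Art.12 evidence).
records = [
    {"id": "rec-1", "user_id": "u-42", "kind": "profile"},
    {"id": "rec-2", "user_id": "u-42", "kind": "dsar_audit"},
]

def naive_erasure_sweep(records, user_id):
    """Deletes every identity-linked record for the requesting user,
    including the audit row that is mandatory compliance evidence.
    This is the trap: the sweep only sees identity linkage."""
    return [r for r in records if r["user_id"] != user_id]

survivors = naive_erasure_sweep(records, "u-42")
# survivors is empty: the Art.12 audit evidence vanished with the profile.
```

Note that a WAL-3 attestation attached to `rec-2` would change nothing here, since the sweep never consults provenance.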
A WCA attestation on that audit row at WAL-3 would tell you it came from your certified DSAR processing pipeline. This is true and useful. But it does not stop the deletion sweep. To stop the deletion sweep, you need to know that this record's retention obligation is compliance_anchor — a mandatory-retain classification that overrides the default lifecycle behavior for identity-linked data.
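A sweep that consults a lifecycle tag behaves differently. This is a sketch under the assumption that every record carries a tag assigned at write time; the field names are illustrative, with compliance_anchor used as the mandatory-retain classification named above.

```python
def classified_erasure_sweep(records, user_id):
    """Deletes identity-linked records unless their lifecycle tag marks
    them as mandatory-retain compliance evidence."""
    kept = []
    for r in records:
        if r["user_id"] == user_id and r.get("lifecycle") != "compliance_anchor":
            continue  # identity-linked and not anchored: Art.17 erasure applies
        kept.append(r)  # either another user's record, or a compliance anchor
    return kept

records = [
    {"id": "rec-1", "user_id": "u-42", "lifecycle": "user_content"},
    {"id": "rec-2", "user_id": "u-42", "lifecycle": "compliance_anchor"},
]
survivors = classified_erasure_sweep(records, "u-42")
# Only the compliance anchor survives; the user's content is erased.
```

The decision turns entirely on the tag, not on who signed the record, which is the point of the section: the tag is a separate piece of metadata that attestation does not supply.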
Attestation tells you the chain of custody. Classification tells you the chain of obligation. Both are necessary. Neither substitutes for the other.
Two orthogonal layers
The right mental model is a two-layer stack:
Layer 1 — Provenance attestation (WCA): Where did this data come from? Who certified the source? Was the tool-call chain tamper-evident? What is the institutional standing of the originating source? This layer produces a Warrant Certificate — a cryptographic claim about the data's origin and chain of custody.
Layer 2 — Lifecycle classification: What compliance obligations does this data carry? Must it be retained? Can it be deleted on request? Does a regulatory retention requirement override an individual deletion right? Is the record an operational artifact, a user-created artifact, a compliance artifact, or a derived output from a model? This layer produces a lifecycle tag — a classification that governs how the retention engine treats this record regardless of its source.
These layers are independent. A record can have full WAL-3 provenance attestation and an incorrect or missing lifecycle classification. A record can have a correctly assigned lifecycle tag and no provenance attestation. In a fully compliant system, both are present — because regulators will eventually ask both questions.
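The orthogonality of the two layers can be made concrete as two independent fields on a record. The `WarrantCertificate` fields below are illustrative assumptions, not the draft's actual certificate format, and the tag values are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WarrantCertificate:
    """Layer 1: provenance attestation (illustrative fields only)."""
    source_id: str        # the certified originating source
    wal_level: int        # 0..3
    chain_verified: bool  # was the tool-call chain tamper-evident?

@dataclass
class Record:
    payload: bytes
    # The two layers are orthogonal: either can be present without the other.
    attestation: Optional[WarrantCertificate] = None  # where it came from
    lifecycle_tag: Optional[str] = None               # what must be done with it

# Full WAL-3 provenance, but no lifecycle classification:
r1 = Record(b"...", attestation=WarrantCertificate("dsar-pipeline", 3, True))

# Correctly tagged for mandatory retention, but entirely unattested:
r2 = Record(b"...", lifecycle_tag="compliance_anchor")
```

A fully compliant system populates both fields, because a regulator's two questions ("where did this come from?" and "why do you still have it?") read different fields.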
The OTel gen_ai.memory.* semantic conventions under discussion in PR #3250 currently include gen_ai.memory.expiration_date but no lifecycle classification or retention basis attribute. That's the observability layer missing its compliance dimension — the same gap, at a different abstraction level.
What this means for builders
If you are building on top of WCA attestation: it tells you whether to trust a record's origin. It does not tell your retention engine whether to keep or delete the record. You still need to classify lifecycle obligations at write time, before the deletion sweep runs.
If you are building retention logic: WCA provenance is useful input — a certified DSAR audit record from a known pipeline is easier to classify correctly than an unattested one. But the classification step cannot be skipped in favor of the attestation step. They are not the same operation.
If you are following the IETF WCA draft or contributing to the OTel gen_ai.memory conventions: the open question is whether either standard should acknowledge the other layer, or whether a bridge specification is needed that maps warrant certificate fields to retention obligation signals. That question is unanswered in the current drafts — and it will matter as soon as these standards reach real regulated deployments.
The full compliance stack for an AI agent memory system is: provenance attestation (where did this come from) + lifecycle classification (what must I do with it) + deletion governance (what requests can override what obligations). The WCA draft is building the first layer. The second and third layers don't yet have IETF or W3C proposals. That's the gap.
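The three layers named above can be sketched as a single decision function. The ordering, field names, and outcome labels are my illustration of how the layers compose, not a proposal from any of the drafts.

```python
def handle_erasure_request(record: dict) -> str:
    """Applies the three-layer stack to one Art.17 erasure request:
    1. provenance attestation: do we trust where the record came from?
    2. lifecycle classification: what obligation does it carry?
    3. deletion governance: which wins when they conflict?
    """
    if record.get("wal_level", 0) == 0:
        return "quarantine"  # untrusted origin: review before acting on it
    if record.get("lifecycle") == "compliance_anchor":
        return "retain"      # the retention obligation overrides the request
    return "delete"          # no overriding obligation: the deletion right applies
```

Each `return` corresponds to one layer making the final call, which is why removing any layer changes the outcome for some class of records.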