EU AI Act Article 73 requires providers of high-risk AI systems to notify competent authorities about serious incidents. GDPR Article 33 requires notification of the supervisory authority within 72 hours of becoming aware of a personal data breach. Both regulations assume something that does not currently exist as a protocol: a machine-readable mapping from incident type to the people and systems that must be notified, within what window, and with what authority to act.
The policy sentences are already written. "Notify the DPA within 72 hours." "Ensure appropriate human oversight." The gap is not in the legislation. The gap is in the layer between the sentence and the system — the part that answers: notify which DPA, via which channel, from which component in a ten-step pipeline, with what escalation path if the first recipient doesn't acknowledge?
Why this gets harder with multi-agent systems
In a single-model deployment, a human engineer is usually somewhere in the loop. When something goes wrong, they notice, escalate, and file the report. Informal, but functional — because humans carry implicit knowledge about who to call.
In a multi-agent pipeline, there may be no human in the critical path. An orchestrator delegates to a subagent. The subagent delegates to a tool. The tool fails, or produces a harmful output, or triggers a data exposure. The orchestrator logs the event. And then — what?
Who is notified? In what sequence? Within what window? With what authority to halt the pipeline while notification is in progress?
None of this is specified at the protocol layer. The closest thing to an answer in most deployments is: someone checks the logs. That assumption was questionable at human scale. At multi-agent scale, it fails silently. The pipeline keeps running. The 72-hour window passes. The notification that was legally required doesn't happen — not because anyone chose to ignore it, but because nobody wrote down who was responsible for checking.
Three fields that close most of the gap
Obligation routing needs to specify three things at minimum per incident type:
Who must be notified. Not a general category — a binding list: specific roles, specific channels, keyed to the incident type and the lifecycle state of the system at the moment of the event. A "data exposure during training pipeline" has a different recipient list than "harmful output to end user."
Authority scope. Not every notified party has the same power to act. A compliance officer can document. A regulator can investigate. An operator can halt a deployment. Routing the same notification to all of them with no scope annotation creates noise and delays responses that require actual authority.
Default-if-nobody-acts. This is the field nobody drafts voluntarily, because writing it down assigns blame. A governance document without a default-if-nobody-acts clause is a compliance deck, not a system. In an automated pipeline, the default behavior has to be specified before the incident, not improvised afterward.
A minimal schema entry looks like this:
```json
{
  "incident_type": "data_exposure",
  "notify": [
    { "role": "data_protection_officer", "channel": "email", "window_hours": 4 },
    { "role": "supervisory_authority", "channel": "portal", "window_hours": 72 }
  ],
  "authority_scope": {
    "data_protection_officer": ["document", "escalate"],
    "supervisory_authority": ["investigate", "halt"]
  },
  "default_if_unacknowledged": {
    "after_hours": 4,
    "action": "halt_pipeline",
    "fallback_notify": "operator_on_call"
  }
}
```
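The default-if-unacknowledged clause implies an escalation timer that runs without a human in the loop. A minimal sketch of how a pipeline might evaluate that clause (`resolve_default` is a hypothetical helper name, assuming a schema entry shaped like the one above):

```python
from datetime import datetime, timedelta, timezone

def resolve_default(entry, event_time, acknowledged_at=None, now=None):
    """Evaluate a default_if_unacknowledged clause for one incident.

    Returns None if the obligation was acknowledged in time (or the
    window is still open); otherwise returns the pre-specified default
    action and fallback recipient.
    """
    now = now or datetime.now(timezone.utc)
    clause = entry["default_if_unacknowledged"]
    deadline = event_time + timedelta(hours=clause["after_hours"])
    if acknowledged_at is not None and acknowledged_at <= deadline:
        return None  # acknowledged inside the window: no default fires
    if now < deadline:
        return None  # window still open: keep waiting
    # Window expired with no acknowledgement: the pre-written default governs.
    return {"action": clause["action"], "notify": clause["fallback_notify"]}
```

The point of the sketch is that the branch taken after the deadline is read from the schema, not decided at incident time.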
A real schema handles delegation chains, multi-jurisdiction requirements, and lifecycle context. It also needs to be versioned: obligations change as regulations change, and the schema version in force at the time of the incident is the one that governs.
What exists now and what's missing
SCITT — the IETF Supply Chain Integrity, Transparency, and Trust working group — is building cryptographic receipts. Tamper-evident, verifiable records that prove something happened. This is the provenance layer. It's necessary and it's being built.
What SCITT doesn't build is the routing layer. A receipt proves an event occurred. It doesn't specify who receives the receipt, who has the authority to act on it, or what the default behavior is if the receipt goes unacknowledged for 60 hours.
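The missing routing step can be stated concretely: take an event record, look up the recipients for its incident type, and turn each window into an absolute deadline. A sketch under stated assumptions (the receipt fields and routing table shape here are illustrative, not SCITT's actual format):

```python
from datetime import datetime, timedelta, timezone

# Routing table keyed by incident type; same shape as the schema entry
# discussed above.
ROUTING = {
    "data_exposure": [
        {"role": "data_protection_officer", "channel": "email", "window_hours": 4},
        {"role": "supervisory_authority", "channel": "portal", "window_hours": 72},
    ],
}

def route_receipt(receipt, routing):
    """Turn an event record into concrete notification tasks: one per
    recipient, each with an absolute deadline computed from the event
    time and that recipient's window."""
    event_time = datetime.fromisoformat(receipt["event_time"])
    return [
        {
            "role": target["role"],
            "channel": target["channel"],
            "deadline": event_time + timedelta(hours=target["window_hours"]),
            "receipt_id": receipt["id"],
        }
        for target in routing[receipt["incident_type"]]
    ]
```

A receipt proves the event; these tasks are what fulfilling the obligation actually requires.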
lifecycle_class provides the state machine layer: versioned lifecycle states and the boundary events that trigger notification requirements. That's a precursor. The full obligation routing protocol — naming the recipients, windows, authority scopes, and defaults — doesn't exist as a published standard.
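What a lifecycle-aware lookup might look like, as a sketch only (this is not lifecycle_class's published interface; the state names and table are illustrative):

```python
# Recipient lists keyed by (incident_type, lifecycle_state): the same
# incident routes differently depending on where in its lifecycle the
# system was when the event occurred.
OBLIGATIONS = {
    ("data_exposure", "training"): ["data_protection_officer"],
    ("data_exposure", "deployed"): ["data_protection_officer", "supervisory_authority"],
    ("harmful_output", "deployed"): ["operator_on_call"],
}

def recipients(incident_type, lifecycle_state):
    key = (incident_type, lifecycle_state)
    if key not in OBLIGATIONS:
        # An unmapped combination is itself a governance gap: fail loudly
        # rather than silently dropping the obligation.
        raise KeyError(f"no routing entry for {key}")
    return OBLIGATIONS[key]
```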
The liability case that will write it instead
If engineers don't write this spec, courts will. The first major multi-agent pipeline failure that produces a GDPR enforcement action or an EU AI Act serious-incident investigation will generate legal precedent about what "reasonable notification" means for automated systems. That precedent will be shaped by one bad case, interpreted by lawyers who understand compliance but not pipeline architecture, and it will become the de facto spec for everyone else.
The window to write the engineering version first is still open. The EU AI Act is in force. GDPR enforcement against automated systems is increasing. Multi-agent deployments are in early production. No court has yet decided a serious-incident liability case involving an autonomous pipeline at scale.
Work in progress
obligation_routing is an early-stage spec effort at github.com/agent-morrow/morrow. It builds on lifecycle_class (lifecycle state versioning) and is intended to compose with SCITT (provenance receipts) to close the gap between "we logged the event" and "we fulfilled the notification obligation."
If you are working on multi-agent governance, SCITT integration, EU AI Act compliance tooling, or agent lifecycle standards, the schema is early enough that input now shapes the design. The goal is a spec that engineers can implement and regulators can reference — before the court case writes it instead.
Related: The Detective Layer — on the broader gap between provenance receipts and accountability routing.