AI Governance · Accountability · SCITT

The Detective Layer

SCITT solves provenance — it can prove, with cryptographic certainty, that something happened, who said it, and when. What it cannot do is route that receipt to someone with the authority and obligation to act on it. That layer does not exist yet.

SCITT — the IETF's Supply Chain Integrity, Transparency, and Trust working group — is building receipts. Cryptographic, immutable, verifiable records of claims about software, models, and AI systems. If a model version was evaluated, if a deployment was audited, if an incident occurred: SCITT can prove it happened and that nobody tampered with the record.

This is genuinely useful. Tamper-evident logs are a prerequisite for accountability. But a receipt is not accountability. A receipt is evidence. Evidence requires a court — someone who receives it, has the authority to act on it, and is on record as having done so.

Right now, the detective layer is assumed. The implicit model is: someone will read the log. Someone with appropriate authority will notice the anomaly, connect it to obligation, and act. This assumption is doing enormous load-bearing work that nobody has specified, funded, or standardized.

The three missing fields

The gap between proof and accountability comes down to three questions that SCITT-adjacent specs don't currently answer:

Who must be notified? "Within 72 hours" appears in many AI governance frameworks. But 72 hours from what event, delivered to whom, via what channel? Without a machine-readable recipient registry — a binding map from incident type to responsible party — the notification requirement is a policy sentence, not a protocol. An alarm in an empty building.
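To make the gap concrete, here is a minimal sketch of what a machine-readable recipient registry could look like. Everything in it is hypothetical: the claim-type strings, role names, and endpoints are illustrative stand-ins, not anything SCITT currently defines.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RegistryEntry:
    """Binds one claim type to the party obligated to receive it."""
    claim_type: str        # e.g. "incident.model-misbehavior" (hypothetical)
    role: str              # a role, not a named individual
    contact_endpoint: str  # machine-reachable channel for delivery
    escalation_path: str   # role to notify if the primary does not respond

# The registry itself is just a lookup from claim type to its binding.
REGISTRY = {
    e.claim_type: e
    for e in [
        RegistryEntry("incident.model-misbehavior", "compliance-officer",
                      "mailto:compliance@example.org", "regulator-liaison"),
        RegistryEntry("deployment.unaudited", "operator-on-call",
                      "https://ops.example.org/pager", "compliance-officer"),
    ]
}

def recipient_for(claim_type: str) -> RegistryEntry:
    """Resolve who must be notified; an unmapped claim type is itself a gap."""
    if claim_type not in REGISTRY:
        raise KeyError(f"no responsible party bound to {claim_type!r}")
    return REGISTRY[claim_type]
```

The point of the lookup raising on an unmapped claim type is that "nobody is bound to this incident class" becomes a detectable error at registry-validation time rather than a silent gap discovered after an incident.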

What is the authority scope? Not every notified party has the same power to act. A compliance officer can file a report. A regulator can open an investigation. An operator can halt a deployment. These are different kinds of authority requiring different response paths. Routing the same receipt to all of them with no scope annotation creates noise and delay.
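The controlled vocabulary is small enough to sketch directly. The four scope names come from the text; the per-role grants below are hypothetical examples of how a scope annotation might be attached to a notified party.

```python
from enum import Flag, auto

class Scope(Flag):
    """Controlled authority vocabulary from the text: four values, no more."""
    MONITOR = auto()
    INVESTIGATE = auto()
    HALT = auto()
    REMEDIATE = auto()

# Hypothetical grants: which scopes each role holds for a given claim type.
GRANTS = {
    "compliance-officer": Scope.MONITOR | Scope.INVESTIGATE,
    "regulator": Scope.INVESTIGATE,
    "operator-on-call": Scope.MONITOR | Scope.HALT | Scope.REMEDIATE,
}

def can(role: str, scope: Scope) -> bool:
    """Check whether a role's grant includes the requested authority."""
    return scope in GRANTS.get(role, Scope(0))
```

With grants attached, a router can send a halt-worthy receipt only to parties that can actually halt, instead of broadcasting it and hoping the right reader notices.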

What happens if nobody acts? This is the field nobody drafts because drafting it assigns blame by the act of writing it down. A "default-if-nobody-acts" clause explicitly names the failure mode and creates a visible paper trail for inaction. A framework without a default is not a framework — it is a compliance deck waiting to become an exhibit.

Why this matters now

AI deployment is accelerating faster than governance infrastructure. Most "responsible AI" programs currently amount to: we log everything, we have an ethics team, we have a process. What they usually lack is a named person with halt authority who can actually act, a defined window in which that person must respond, and a documented consequence if they don't.

Separation between the system that generates evidence and the system that acts on it is not a flaw to paper over — it is where accountability lives. A monolithic system that both acts and audits itself has no meaningful accountability surface. Separate systems create exposure, and exposure creates the paper trail.

What a detective layer spec would look like

The minimum viable accountability layer on top of SCITT provenance would specify:

  • A recipient registry format: machine-readable bindings from claim type to responsible party (role, not name) with contact endpoint and escalation path.
  • An authority scope vocabulary: a small controlled set — monitor, investigate, halt, remediate — that each notified party is authorized for on a given claim type.
  • A response window and default: an acknowledgment deadline with an explicit documented outcome for non-response. Even "escalate to next level" is sufficient. The point is that the absence of action is itself a recorded state.
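Composed, the three components reduce to a single routing step: receipt in, obligation out. The sketch below uses hypothetical types and bindings; the only structural claim it makes is the one from the list above, that an obligation carries a recipient, a scope set, a deadline, and a default.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Notification:
    """One routed obligation derived from a SCITT receipt (hypothetical shape)."""
    receipt_id: str        # reference into the SCITT log, the evidence substrate
    role: str              # from the recipient registry (role, not name)
    scopes: frozenset      # authority vocabulary: monitor/investigate/halt/remediate
    ack_deadline: datetime # response window
    on_default: str        # explicit outcome if nobody acts

def route(receipt_id: str, claim_type: str,
          registered_at: datetime) -> Notification:
    """Turn a registered receipt into an obligation with a deadline and default."""
    # Static bindings standing in for a real registry document.
    registry = {
        "incident.model-misbehavior": ("compliance-officer",
                                       frozenset({"monitor", "investigate"}),
                                       "regulator-liaison"),
    }
    role, scopes, escalate_to = registry[claim_type]
    return Notification(
        receipt_id=receipt_id,
        role=role,
        scopes=scopes,
        ack_deadline=registered_at + timedelta(hours=72),
        on_default=f"escalate:{escalate_to}",  # even this minimal default suffices
    )
```

Nothing in this step touches the cryptography: the receipt stays a SCITT artifact, and the router only dereferences it, which is what makes the layer thin.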

None of this requires breaking SCITT. It is a thin layer above it — or alongside it, as a companion spec. The cryptographic provenance stays in SCITT. The obligation routing lives in a new document that references SCITT receipts as its evidence substrate.

The ceiling is the credential

There is a design instinct in AI systems toward projecting unlimited capability. A system that names its ceiling — that says explicitly "I can be stopped at this point, by this person, for these reasons" — appears weaker than one that claims to handle everything.

The instinct is backwards. A named ceiling with real enforcement is the thing that makes authority transferable. You can delegate to a system that knows when to stop. You cannot safely delegate to one that claims it doesn't need to.

Halt authority on paper, reachable by nobody, is decoration. Halt authority with a named holder, a response window, and a documented default is governance. The ceiling is not a limitation. It is the credential.

SCITT gives us the filing cabinet. The detective layer is what makes the evidence actionable. It needs its own spec, its own funding, and its own standing.

Related: Obligation Routing: The Spec That Doesn't Exist Yet — schema design, EU AI Act pressure, and the three fields that close the gap.