Setup
The EOV I-D proposes a signed receipt at each action boundary. The receipt binds: invocation ID, agent identity, action type, input/output/context hashes, credential reference, timestamp, and an Ed25519 signature over the full envelope.
- Harness: receipt.py
- Chain benchmark: chain_attestation.py
- N=200 per latency measurement; 100 trials per chain depth
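The receipt shape described above can be sketched as follows. This is a minimal illustration, not `receipt.py` or the I-D's exact wire format: the field names, the `make_receipt` helper, and SHA-256 as the hash are assumptions; only the bound fields and the Ed25519 signature over the canonical envelope come from the draft. It uses the `cryptography` package.

```python
import base64
import hashlib
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def make_receipt(key: Ed25519PrivateKey, invocation_id: str, agent_id: str,
                 action: str, inp: bytes, out: bytes, ctx: bytes,
                 credential_ref: str) -> dict:
    # Envelope binds the fields the I-D lists; hashes stand in for payloads.
    envelope = {
        "invocation_id": invocation_id,
        "agent": agent_id,
        "action": action,
        "input_hash": sha256_hex(inp),
        "output_hash": sha256_hex(out),
        "context_hash": sha256_hex(ctx),
        "credential_ref": credential_ref,
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    # Canonicalize deterministically before signing: sorted keys, no whitespace.
    canonical = json.dumps(envelope, sort_keys=True,
                           separators=(",", ":")).encode()
    envelope["sig"] = base64.urlsafe_b64encode(key.sign(canonical)).decode()
    return envelope
```

Deterministic serialization before signing matters: the verifier must be able to reconstruct the exact signed bytes from the envelope fields.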
Generation latency
| Stat | ms |
|---|---|
| mean | 0.0768 |
| p50 | 0.0667 |
| p95 | 0.1027 |
| p99 | 0.4917 |
| max | 0.6011 |
Median receipt generation is 67µs. The p99 spikes to ~492µs, consistent with occasional garbage-collection pauses on cold runs. For any agent action involving I/O, LLM inference, or network calls, this overhead is negligible.
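The shape of the latency measurement can be sketched as below. This is an illustrative microbenchmark, not the actual harness: the `bench_sign` name and the fixed payload are assumptions; it times only the Ed25519 sign call and reports the same statistics as the table.

```python
import json
import statistics
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def bench_sign(n: int = 200) -> dict:
    """Time n Ed25519 signs over a fixed payload; return latency stats in ms."""
    key = Ed25519PrivateKey.generate()
    payload = json.dumps({"invocation_id": "inv-1",
                          "action": "tool_call"}).encode()
    samples = []
    for _ in range(n):
        t0 = time.perf_counter_ns()
        key.sign(payload)
        samples.append((time.perf_counter_ns() - t0) / 1e6)  # ns -> ms
    samples.sort()
    return {
        "mean": statistics.fmean(samples),
        "p50": samples[n // 2],
        "p95": samples[int(n * 0.95)],
        "p99": samples[int(n * 0.99)],
        "max": samples[-1],
    }
```

Sorting once and indexing into the sorted samples is the simplest way to read off percentiles at N=200; absolute numbers will of course vary by machine.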
Verification latency
| Stat | ms |
|---|---|
| mean | 0.1676 |
| p50 | ~0.150 |
| p95 | 0.2959 |
| max | 1.1285 |
Ed25519 verification averages ~168µs, roughly 2× the signing cost, as expected: Ed25519 verification is more expensive than signing. Still negligible next to any real agent action.
Payload size
| Field | Bytes |
|---|---|
| JSON receipt (no signature) | 428 |
| Base64url signature (Ed25519) | 88 |
| Total signed receipt | 516 |
516 bytes per receipt. At 10,000 receipts per minute, that is ~5 MB/min — manageable with standard log compression and entirely reasonable for auditability of a production agent pipeline.
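The 88-byte signature figure follows directly from Ed25519's fixed 64-byte signature: base64 expands 3 bytes to 4 characters and pads to a multiple of 4, so ceil(64/3)·4 = 88 with padding. A quick arithmetic check (the variable names are illustrative):

```python
import base64

# Ed25519 signatures are a fixed 64 bytes; base64url with padding yields
# ceil(64 / 3) * 4 = 88 characters, matching the table above.
sig_len = len(base64.urlsafe_b64encode(bytes(64)))

# At 10,000 receipts/min, a 516-byte signed receipt is ~5.16 MB/min raw,
# before any log compression.
log_rate_mb_per_min = 516 * 10_000 / 1e6
```

Note that unpadded base64url (as JOSE uses) would shave the signature to 86 characters; the 88-byte figure implies padded encoding.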
Chain scaling (delegation depth 1–5)
| Chain depth | Mean latency (µs) | Per-hop cost vs. depth 1 |
|---|---|---|
| 1 | ~190 | 1.00× |
| 2 | ~380 | 1.01× |
| 3 | ~570 | 1.01× |
| 4 | ~760 | 1.01× |
| 5 | ~950 | 1.01× |
Chain scaling is linear at ~190µs per hop, with per-hop cost holding within 1.01× of the depth-1 baseline: no observable super-linear overhead up to depth 5. Multi-hop delegation chains remain auditable in under 1ms total.
Published to Zenodo: 10.5281/zenodo.19423545.
Conclusion
The overhead objection to per-action execution receipts does not hold for typical agentic workloads. Ed25519 signing at 67µs median and linear chain scaling at ~190µs/hop make EOV receipts practical at the granularity the I-D proposes.
The remaining engineering question is not cost — it is the label registry and the COSE encoding path, which is covered in the COSE vs JOSE benchmark.