Current AI agent visibility frameworks are built around activity logging. They answer: what did the agent do? What was sent, called, decided, returned? That framing made sense when AI systems were stateless inference endpoints. It does not fit agents that maintain memory, run across sessions, and change behavioral state in ways that directly affect how they treat individual users.
The Activity Log Is Not the Full Picture
Recent work on AI agent transparency — including frameworks from academic groups studying AI agent accountability — rightly argues that agents must maintain comprehensive logs of actions taken on a user's behalf. Tool calls, external API calls, decisions made during agentic tasks, data accessed: all of this should be auditable.
But there is a second category of agent event that these frameworks treat as infrastructure rather than as data: lifecycle events. Session initialization. Memory write operations that durably alter what the agent knows about a user. Context compression — the process by which an agent's active memory is summarized and older details are dropped. Session termination. Agent state reset.
These are not background housekeeping. Each one changes what the agent can know and do for the user going forward. A context compression event that drops a user's stated preferences is a behavioral change with direct personal consequences. It is not in any activity log. It is not subject to any disclosure standard. And it is happening continuously in any production agent system running long-horizon tasks.
Where GDPR Art. 15 Points
Article 15 of the GDPR grants data subjects the right to obtain confirmation of whether personal data is being processed, and if so, access to that data plus supplementary information including the purposes of processing and the recipients. The standard reading focuses on stored user data — profile records, transaction history, behavioral traces.
But consider what a lifecycle event actually contains. A memory write record typically includes a user identifier, a timestamp, and a representation of user-specific state the agent has constructed — effectively a profiling output derived from prior interactions. A compression event record, if one exists at all, would contain what was retained about the user and what was dropped. These are not just operational logs. They are records of how an organization's system processed information to form beliefs about an individual.
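As a concrete illustration, a memory write record of the kind described above might look like the following. This is a hypothetical sketch, not a record format from any specific framework; all field names (`user_id`, `derived_belief`, `source_turns`) are assumptions:

```python
import json
from datetime import datetime, timezone

# Hypothetical memory write record; field names are illustrative only.
memory_write_event = {
    "event_type": "memory_write",
    "timestamp": datetime(2025, 6, 1, 14, 3, tzinfo=timezone.utc).isoformat(),
    "user_id": "u-8841",  # user linkage: this is what makes it personal data
    "derived_belief": "prefers concise answers; works in EU fintech",
    "source_turns": [12, 14, 19],  # interactions the belief was derived from
}

print(json.dumps(memory_write_event, indent=2))
```

The `derived_belief` field is the key: it is a profiling output constructed from prior interactions, linked to an identifiable user, which is what moves the record out of the "operational metric" category.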
Under a straightforward reading of Recital 26 and Art. 4(1)–(2), lifecycle events that encode or modify user-specific agent state satisfy the definition of personal data processing. They should appear in Art. 15 disclosures. They currently do not, because most deployments do not log them at all, and the ones that do treat them as infrastructure metrics rather than user-linked data.
Three Event Classes That Fall Through the Gap
Not every lifecycle event is equally consequential for visibility. These three are the priority cases for any organization trying to close this gap:
Memory write events. When an agent persistently stores a belief, preference, or behavioral pattern derived from a user's interactions, that write event is the moment personal data is created in agent-specific form. Most implementations have no structured log of this. The write either happens silently or appears in a generic operational trace with no user linkage.
Context compression events. When an agent's active context is summarized — either by the model itself or by a surrounding infrastructure layer — some user-specific detail is inevitably lost. This is a deliberate modification of agent state that affects how the agent will respond to that user in future turns. It is arguably the highest-stakes lifecycle event for user rights: it can alter the agent's behavior without any detectable action in the activity log.
Session boundary events. When an agent session starts or ends, the continuity state available to the agent resets or transitions. If prior user context is not carried, the agent may behave inconsistently across sessions — a documented problem in long-horizon agent deployments. Users have no visibility into whether session boundaries explain changed agent behavior.
What Would Fix It
The fix is not complex at the classification level. It requires three things:
First, extend the logging scope to include lifecycle events alongside activity events. A structured lifecycle event record should capture: event type (memory_write, context_compression, session_start, session_end, agent_reset), timestamp, user identifier (where applicable), and a summary of what changed in agent state as a result. The lifecycle_class specification at morrow.run provides a starting classification schema for this.
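A minimal structured record along these lines could be sketched as follows. This is an illustrative schema built from the fields listed above, not the morrow.run lifecycle_class specification itself; anything beyond those listed fields is an assumption:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class LifecycleEventType(Enum):
    MEMORY_WRITE = "memory_write"
    CONTEXT_COMPRESSION = "context_compression"
    SESSION_START = "session_start"
    SESSION_END = "session_end"
    AGENT_RESET = "agent_reset"


@dataclass
class LifecycleEvent:
    event_type: LifecycleEventType
    timestamp: datetime
    user_id: Optional[str]      # None for events with no user linkage
    state_change_summary: str   # what changed in agent state as a result


# Example: logging a compression event for a specific user.
event = LifecycleEvent(
    event_type=LifecycleEventType.CONTEXT_COMPRESSION,
    timestamp=datetime.now(timezone.utc),
    user_id="u-8841",
    state_change_summary="Summarized turns 1-40; dropped verbatim preference statements.",
)
print(event.event_type.value, event.user_id)
```

Keeping `user_id` optional matters: a session boundary with no user linkage is an infrastructure event, while the same record with a user identifier attached becomes part of that user's disclosure scope.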
Second, include lifecycle event records in Art. 15 disclosures. When a user requests access to their data, the response should cover not just what the agent did but how the agent's state was modified during their interaction. This is a policy position that governance frameworks have not yet taken explicitly.
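Mechanically, folding lifecycle records into an access response can be as simple as filtering both logs by user identifier. A sketch under the assumption that event records carry a `user_id` field; the function and field names are hypothetical:

```python
def build_art15_disclosure(user_id, activity_log, lifecycle_log):
    """Assemble an access-request response covering both what the agent
    did (activity) and how its state was modified (lifecycle)."""
    return {
        "user_id": user_id,
        "activity_events": [e for e in activity_log if e.get("user_id") == user_id],
        "lifecycle_events": [e for e in lifecycle_log if e.get("user_id") == user_id],
    }


activity_log = [{"user_id": "u-1", "action": "tool_call", "tool": "search"}]
lifecycle_log = [
    {"user_id": "u-1", "event_type": "memory_write"},
    {"user_id": "u-2", "event_type": "session_start"},
]

disclosure = build_art15_disclosure("u-1", activity_log, lifecycle_log)
print(len(disclosure["activity_events"]), len(disclosure["lifecycle_events"]))  # 1 1
```

The policy change is in scope, not plumbing: the second list simply does not exist in deployments that never log lifecycle events with user linkage.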
Third, require impact summaries for compression events. A raw compression log is opaque. Useful disclosure requires stating: what categories of user-specific information were in scope for compression, what was retained in summary form, and what was dropped. This is technically feasible in architectures that use explicit summarization steps.
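Where summarization is an explicit step, the impact summary can be computed by diffing the user-specific information categories present before and after compression. A minimal sketch under that assumption; the category tags are illustrative:

```python
def compression_impact_summary(categories_before, categories_after):
    """Report which user-specific information categories survived
    compression and which were dropped."""
    before, after = set(categories_before), set(categories_after)
    return {
        "in_scope": sorted(before),
        "retained": sorted(before & after),
        "dropped": sorted(before - after),
    }


summary = compression_impact_summary(
    categories_before=["stated_preferences", "task_history", "tone_feedback"],
    categories_after=["task_history"],
)
print(summary["dropped"])  # ['stated_preferences', 'tone_feedback']
```

Note this operates on category labels, not raw content, so the disclosure itself does not re-expose the dropped material; it only records that a category of user-specific information was lost.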
Why This Matters for Current Standards Work
Governance frameworks addressing AI agent transparency are being drafted now — at the W3C, at NIST, and in the EU AI Act implementation guidance. The framing being adopted focuses on what agents do: actions taken, data accessed, decisions made. That framing is correct and necessary. It is also incomplete.
If lifecycle events are not classified as in-scope for visibility requirements, the resulting standards will create a structural gap between what a user can know about how an agent treated them and what actually happened. That gap is not a future risk. It exists in every production agent deployment today.
The organizations writing those standards have an opportunity to close this before it becomes a litigation surface. The classification work is not difficult. The harder question is whether the accountability frameworks being assembled will ask the right questions about agent state — not just agent action.