
Audit Trail#

Every agent action is tracked in the audit trail — what was accessed, who did it, what the result was, and when. The audit trail covers all agent types and all interaction surfaces.


What's Logged#

The audit trail records actions across seven interaction surfaces:

| Surface | What's logged |
|---|---|
| MCP tool calls | Tool name, parameters, result, actor, timestamp |
| Kubernetes proxy | K8s API path, method, actor identity, cluster |
| Shell commands | Command executed, stdout/stderr, exit code |
| Forward proxy | Destination domain, HTTP method, path, status, bytes transferred |
| Sandbox operations | Container lifecycle, file operations |
| AWS proxy | AWS service, API call, session tags |
| LLM proxy | Model, tokens (input/output/cache), cost, prompt metadata |

Audit Event Structure#

Each audit event contains:

| Field | Description |
|---|---|
| Event time | When the action occurred |
| Duration | How long the action took |
| Actor type | User, agent, or system |
| Actor ID / name | Who performed the action |
| Source | Which subsystem (mcp-tool, k8s-proxy, forward-proxy, etc.) |
| Action | What was done (e.g., shell_exec, CONNECT api.github.com:443) |
| Resource type / ID | What was affected |
| Result | Success, failure, or error |
| Status code | HTTP status code (where applicable) |
| Description / metadata | Additional context (JSON; includes LLM usage data for LLM proxy calls) |
| Org / project | Organizational scope |
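A concrete event might look like the following sketch. The field names and values here are illustrative assumptions, not the platform's exact schema:

```python
# Hypothetical audit event shaped after the fields above.
# Field names are illustrative, not the platform's exact schema.
event = {
    "event_time": "2025-05-01T12:34:56+00:00",  # when the action occurred
    "duration_ms": 182,                          # how long it took
    "actor_type": "agent",                       # user, agent, or system
    "actor_id": "agent-7f3a",
    "actor_name": "deploy-bot",
    "source": "forward-proxy",                   # originating subsystem
    "action": "CONNECT api.github.com:443",
    "resource_type": "domain",
    "resource_id": "api.github.com",
    "result": "success",
    "status_code": 200,
    "metadata": {"bytes_transferred": 4096},     # JSON context blob
    "org": "acme",
    "project": "payments",
}
```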

Querying the Audit Trail#

Filters#

The audit trail is queryable by any combination of:

  • Agent — actions for a specific agent
  • Time range — last hour, last day, custom range
  • Action type — tool calls, API requests, shell commands
  • Resource — specific systems or resources
  • Result — success, failure, error
  • Project — filter by project scope
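Client-side, the same filters amount to predicates over event records. A minimal sketch (the real API accepts equivalent query parameters; the function and field names here are assumptions):

```python
from datetime import datetime

def filter_events(events, actor_id=None, since=None, action=None, result=None):
    """Apply audit filters in any combination; None means 'no filter'."""
    matched = []
    for e in events:
        if actor_id is not None and e["actor_id"] != actor_id:
            continue
        if since is not None and datetime.fromisoformat(e["event_time"]) < since:
            continue
        if action is not None and e["action"] != action:
            continue
        if result is not None and e["result"] != result:
            continue
        matched.append(e)
    return matched

# Illustrative sample data, not real audit output.
events = [
    {"actor_id": "agent-1", "event_time": "2025-05-01T10:00:00+00:00",
     "action": "shell_exec", "result": "success"},
    {"actor_id": "agent-2", "event_time": "2025-05-01T11:00:00+00:00",
     "action": "shell_exec", "result": "failure"},
]
failures = filter_events(events, result="failure")
```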

Aggregations#

The audit trail supports aggregations with breakdowns by:

  • Source (which subsystem)
  • Actor type (user, agent, system)
  • Result (success, failure)
  • Action type
  • Resource type

Time-series views with configurable intervals (hour, day, week) show activity patterns over time.
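The breakdowns above amount to group-by counts. A sketch of computing them client-side over exported events, with hourly bucketing standing in for the time-series view (sample data is illustrative):

```python
from collections import Counter
from datetime import datetime

# Illustrative sample, not real audit output.
events = [
    {"source": "mcp-tool", "result": "success",
     "event_time": "2025-05-01T10:05:00+00:00"},
    {"source": "k8s-proxy", "result": "failure",
     "event_time": "2025-05-01T10:40:00+00:00"},
    {"source": "mcp-tool", "result": "success",
     "event_time": "2025-05-01T11:10:00+00:00"},
]

by_source = Counter(e["source"] for e in events)  # breakdown by subsystem
by_result = Counter(e["result"] for e in events)  # success vs failure

# Hourly time-series: truncate each timestamp to its hour bucket.
hourly = Counter(
    datetime.fromisoformat(e["event_time"]).strftime("%Y-%m-%dT%H:00")
    for e in events
)
```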

API access#

Audit data is queryable via API for integration with external SIEM and compliance tooling. Results are paginated with a cursor (max 200 entries per page).
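A cursor-pagination loop might look like the following. `fetch_page` stands in for the HTTP call, and the `entries`/`next_cursor` response fields are assumptions, not the documented response shape:

```python
def fetch_page(cursor=None, limit=200):
    """Stand-in for the audit API call (a real client would issue an
    HTTP request with cursor/limit parameters). Backed by a list here."""
    data = list(range(450))  # 450 fake audit entries
    start = cursor or 0
    page = data[start:start + limit]
    next_cursor = start + limit if start + limit < len(data) else None
    return {"entries": page, "next_cursor": next_cursor}

def iter_audit_entries(limit=200):
    """Walk every page until the API stops returning a cursor."""
    cursor = None
    while True:
        resp = fetch_page(cursor=cursor, limit=limit)
        yield from resp["entries"]
        cursor = resp["next_cursor"]
        if cursor is None:
            break

entries = list(iter_audit_entries())
```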


Reliability model#

Audit writes are asynchronous and non-blocking. Each event is written to the database independently. If the write fails, the failure is logged but the agent operation is not blocked.

This is a deliberate design choice: availability over consistency for agent operations. Agents are never slowed down or stopped by audit write failures.

For environments requiring high audit reliability, configure database high availability and monitor audit trail completeness.
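The non-blocking contract can be sketched as a fire-and-forget write: the event is handed to a background worker, and a failed write is logged rather than raised back to the agent. This is a simplified model of the design choice, not the platform's actual implementation:

```python
import logging
from concurrent.futures import ThreadPoolExecutor

log = logging.getLogger("audit")
_writer = ThreadPoolExecutor(max_workers=2)
failed_writes = []  # in a real system this would be a log line / metric

def write_audit(event, db_write):
    """Submit the write and return immediately; the agent never waits."""
    def task():
        try:
            db_write(event)
        except Exception as exc:
            failed_writes.append(event)                 # record the loss...
            log.warning("audit write failed: %s", exc)  # ...but never block
    _writer.submit(task)

def broken_db(event):
    raise ConnectionError("database unavailable")

write_audit({"action": "shell_exec"}, broken_db)  # returns instantly
_writer.shutdown(wait=True)  # flush, for this demo only
```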

Operational recommendations#

For compliance-critical deployments:

  1. Export audit data regularly. Use the audit API to export events to your SIEM or log aggregation system on a schedule. Don't rely solely on the platform's database for long-term retention.
  2. Monitor for gaps. Compare expected events (e.g., one heartbeat audit entry per interval per agent) against actual entries. A gap indicates potential data loss during a database outage.
  3. Configure database high availability for self-hosted deployments. The audit trail's reliability is directly tied to database availability.
  4. Plan retention. Lens Agents does not enforce a retention period on audit data. Define your own retention policy based on regulatory requirements (SOC 2, GDPR, EU AI Act) and export accordingly.
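Gap monitoring (point 2) reduces to checking that every expected interval contains at least one audit entry. A sketch assuming one heartbeat entry per agent per interval (the heartbeat cadence and timestamps here are made up for illustration):

```python
from datetime import datetime, timedelta

def find_gaps(entry_times, start, end, interval):
    """Return interval start times with no audit entry -- candidates
    for data lost during a database outage."""
    gaps = []
    t = start
    while t < end:
        if not any(t <= ts < t + interval for ts in entry_times):
            gaps.append(t)
        t += interval
    return gaps

interval = timedelta(minutes=5)
start = datetime(2025, 5, 1, 10, 0)
entries = [
    datetime(2025, 5, 1, 10, 1),   # 10:00 slot covered
    # 10:05 slot missing -> gap
    datetime(2025, 5, 1, 10, 12),  # 10:10 slot covered
]
gaps = find_gaps(entries, start, start + 3 * interval, interval)
```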

Use cases#

Security review: filter by agent, time range, and action type to see exactly what an agent accessed and when.

Incident investigation: search for actions against a specific resource during an incident window.

Compliance: export audit data for SOC 2, ISO 27001, or EU AI Act compliance reviews.

Cost attribution: LLM proxy audit entries include token counts and cost — drill down to see exactly what each agent spent.
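Cost attribution is a sum over LLM proxy entries grouped by agent. A sketch over exported events, where the metadata field names (`cost_usd`, token keys) are illustrative assumptions:

```python
from collections import defaultdict

# Illustrative sample of exported audit entries.
entries = [
    {"source": "llm-proxy", "actor_id": "agent-1",
     "metadata": {"input_tokens": 1200, "output_tokens": 300, "cost_usd": 0.018}},
    {"source": "llm-proxy", "actor_id": "agent-1",
     "metadata": {"input_tokens": 800, "output_tokens": 150, "cost_usd": 0.011}},
    {"source": "llm-proxy", "actor_id": "agent-2",
     "metadata": {"input_tokens": 500, "output_tokens": 90, "cost_usd": 0.006}},
    {"source": "mcp-tool", "actor_id": "agent-1"},  # non-LLM entry: skipped
]

cost_by_agent = defaultdict(float)
for e in entries:
    if e["source"] == "llm-proxy":
        cost_by_agent[e["actor_id"]] += e["metadata"]["cost_usd"]
```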