
Security Whitepaper#

Every claim in this document is verified against the platform source code. For the product overview, see How Lens Agents Works.


Executive summary#

Lens Agents is a governed platform for running AI agents on enterprise systems. This whitepaper describes the security architecture, isolation model, credential handling, data protection, audit capabilities, and compliance posture.

Key security properties:

  • Clear identity model — desktop tools operate under the user's SSO identity (OIDC), while external and managed agents get dedicated tokens; every action is attributed in the audit trail
  • Agents execute in isolated environments with default-deny networking
  • Credentials are injected server-side via proxy — never present in the agent's environment
  • Every action is audited across 7 interaction surfaces
  • Spending limits actively enforce budgets at execution time
  • Autonomy levels control agent decision-making, bounded by hard policy enforcement
  • Subagents are isolated from their parent — own token, workspace, and audit trail
  • Same security model whether agents run in the cloud or locally
  • Operates under Mirantis's SOC 2 Type 1 (Lens K8S IDE) and ISO 27001 control framework; Lens Agents-specific attestation scope is shared under NDA during evaluation

Threat model#

What we protect against#

| Threat | How Lens Agents addresses it |
| --- | --- |
| Agent accesses unauthorized systems | Default-deny networking. Domain allowlisting with HTTP method/path restrictions. Only explicitly approved systems are reachable. |
| Agent exfiltrates credentials | Server-side credential injection via proxy. Credentials never exist in the agent's environment. Even a compromised agent cannot extract them. |
| Agent exfiltrates sensitive data | PII detection and masking on the LLM proxy filters sensitive data before it reaches the model provider. Per-policy configuration, fail-closed by default. Available to select customers. |
| Compromised agent escapes sandbox | Sandbox isolation with privilege dropping, kernel-level network restrictions, and proxy-mediated egress. |
| Credential theft from storage | Agent tokens stored as SHA-256 hashes. API credentials encrypted with AES-256-GCM. |
| Cross-tenant data leakage | Org-scoped data with authorization checks on every request. Per-agent workspace isolation. |
| Runaway AI costs | Spending limits at org, team, and agent level. Active enforcement. |
| Unattributed agent actions | Every action audited with actor identity, action, resource, timestamp, and result. |
| Shadow AI / ungoverned agents | Centralized tool registry — IT controls which tools and systems agents can access. |
| Agent escalates its own autonomy | Autonomy levels are behavioral controls. Hard security boundaries (sandbox, policy, credentials) enforce access regardless. |
| Subagent escapes parent boundaries | Flat hierarchy (no recursive delegation). Each subagent has its own token, workspace, and policy scope. |
| Agent modifies its own safety rules | Agent Guide (safety rules) is editable only by users — agents cannot modify it. |

What we're honest about#

| Limitation | Detail | Mitigation |
| --- | --- | --- |
| Default runtime is containers | The sandbox uses Linux kernel primitives and is runtime-agnostic. Default deployment uses containers, which share the host kernel. | Deploy inside a MicroVM for VM-level isolation. The sandbox binary works identically regardless of runtime. |
| No seccomp profile | Sandbox does not apply syscall filtering. | Relies on privilege dropping + kernel-level network isolation. Agents run as unprivileged users with supplemental groups dropped. |
| Asynchronous audit writes | Audit entries may be delayed during database outages. | Monitor audit trail completeness. Configure database high availability. |
| Token revocation doesn't kill active sessions | Revoking a token prevents new authentications. Existing connections continue until timeout. | Sandbox idle timeout (30 min default) limits the window. |
| Application-level tenant isolation | Multi-tenant isolation uses org-scoped foreign keys and application-level authorization, not database row-level security. | Every request is authorized against the authenticated identity's org membership. Data is org-scoped by foreign key constraints. |
| Autonomy enforcement is prompt-layer | Autonomy levels (1–5) are enforced via LLM system prompt. Prompt injection can bypass autonomy constraints. | The hard security boundary is the policy engine, sandbox, and credential scope — enforced at the platform level regardless of LLM behavior. |

Trust model and execution modes#

See Agent Execution Modes for the full trust model comparison between Mode 1 (agent outside the sandbox) and Mode 2 (agent inside the sandbox).

Security comparison:

| | Mode 1 (agent outside sandbox) | Mode 2 (agent inside sandbox) |
| --- | --- | --- |
| Agent execution | Outside Lens Agents | Inside sandbox |
| Tool execution | Inside sandbox | Inside sandbox |
| Network isolation | Tool calls only | Entire agent |
| Credential isolation | Tool calls only | Entire agent |
| Trust model | Trust the agent, govern its tools | Don't trust the agent, govern everything |
| Audit coverage | Tool calls and system access | All agent activity |

Sandbox isolation#

Kernel-level isolation layers. Runtime-agnostic — works in Docker, MicroVMs, or bare metal.

See Sandbox Isolation for details:

  1. User privilege dropping — setuid/setgid to unprivileged user, supplemental groups dropped
  2. Kernel-level network isolation — iptables rules, IPv4 + IPv6, sandbox user can only reach localhost
  3. Proxy-mediated egress — domain matching, TLS interception, credential injection, default-deny
  4. Privilege-dropped workspace — unprivileged filesystem view; persistence is a deployment decision (ephemeral by default for platform-provisioned sandboxes, operator-controlled for embedded agent sandboxes)

Separate from the isolation layers, platform-managed sandboxes shut down after 30 minutes of inactivity (configurable) to bound the exposure window. Self-hosted agent sandboxes set their own lifecycle.


Credential and data protection#

Credential isolation#

Credentials never exist in the agent's environment. They are resolved at the platform level and injected via the proxy using TLS interception:

  1. Agent's outbound HTTPS request is intercepted by the proxy
  2. Proxy matches destination against policy rules
  3. For governed domains: proxy generates an ephemeral TLS certificate and terminates the agent's TLS
  4. Proxy injects credential headers server-side
  5. Proxy establishes a new TLS connection to the actual destination
  6. Request forwarded with credentials — agent never sees them
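The routing decision in steps 2 and 4 can be sketched as a policy lookup that either returns headers to inject or blocks the request. This is an illustrative sketch only — the rule shape, field names, and `inject_headers` function are hypothetical, not the platform's actual policy schema:

```python
# Hypothetical sketch of the proxy's default-deny routing decision.
# Rule fields and names are illustrative, not the platform's schema.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DomainRule:
    domain: str                      # e.g. "api.github.com"
    methods: set = field(default_factory=lambda: {"GET"})
    credential: str = ""             # resolved server-side, never inside the sandbox

def inject_headers(host: str, method: str, rules: list) -> Optional[dict]:
    """Return headers to add server-side, or None to block (default-deny)."""
    for rule in rules:
        if host == rule.domain and method in rule.methods:
            return {"Authorization": f"Bearer {rule.credential}"}
    return None  # no rule matched: egress is blocked

rules = [DomainRule("api.github.com", {"GET", "POST"}, "resolved-server-side")]
assert inject_headers("api.github.com", "GET", rules) is not None
assert inject_headers("evil.example.com", "GET", rules) is None  # default-deny
```

The key property the sketch illustrates: the credential value lives only on the proxy side of the TLS boundary, so nothing in the agent's environment ever holds it.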

Ephemeral CA: ECDSA P-256, generated on sandbox startup, exists only in process memory, dies with container. Never written to disk.

Kubernetes credentials#

Short-lived JWT tokens (60-second lifetime, auto-rotated). K8s API relay uses impersonation headers — cluster sees the agent as its own principal.
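`Impersonate-User` and `Impersonate-Group` are the standard Kubernetes impersonation header names; a relay attaching them might look like the sketch below. The `lens-agent:` principal format and function name are assumptions for illustration:

```python
# Hedged sketch: building Kubernetes impersonation headers so the cluster
# attributes each call to the agent's own principal. The principal naming
# scheme is hypothetical; the header names are standard Kubernetes ones.
def impersonation_headers(agent_id: str, groups: list) -> list:
    """Header tuples the relay adds to each proxied K8s API request.

    Returned as (name, value) pairs because Impersonate-Group may repeat.
    """
    headers = [("Impersonate-User", f"lens-agent:{agent_id}")]
    headers += [("Impersonate-Group", g) for g in groups]
    return headers

headers = impersonation_headers("deploy-bot", ["lens-agents"])
assert ("Impersonate-User", "lens-agent:deploy-bot") in headers
```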

AWS credentials#

STS AssumeRole for temporary session credentials. Session tags flow to CloudTrail for attribution.

Storage encryption#

  • API credentials: AES-256-GCM (authenticated encryption with random IV)
  • Agent tokens: SHA-256 hashes (one-way, original never stored)
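The token-storage property above can be sketched with the standard library: only the SHA-256 digest is persisted, and verification re-hashes the presented token with a constant-time comparison. Function names are illustrative, not the platform's API:

```python
# Sketch of one-way token storage: the plaintext token is returned to the
# agent once; only its SHA-256 hash is ever stored.
import hashlib
import hmac
import secrets

def issue_token() -> tuple:
    token = secrets.token_urlsafe(32)                     # given to the agent once
    stored = hashlib.sha256(token.encode()).hexdigest()   # only this is persisted
    return token, stored

def verify(presented: str, stored_hash: str) -> bool:
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(digest, stored_hash)       # constant-time compare

token, stored = issue_token()
assert verify(token, stored)
assert not verify("wrong-token", stored)
```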

PII and sensitive data controls#

The LLM proxy detects and masks sensitive data before it reaches the model provider. Detection combines pattern-based matching (emails, phone numbers, IPs, dates of birth, credit cards, IBANs, national IDs, API keys, JWTs, connection strings, AWS credentials, other secrets) with an optional ML-based named-entity recognition model (PERSON, ORG, LOCATION). Masking is per-policy, fail-closed by default — if masking cannot be applied, the request is blocked. Response unmasking restores original values in the agent's view when configured.
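A minimal sketch of the pattern-based path with fail-closed behavior follows. The two patterns and the `<LABEL>` placeholder format are illustrative only — the platform's detector covers many more categories and may mask differently:

```python
# Illustrative sketch of pattern-based PII masking with fail-closed behavior.
# Patterns and placeholder format are assumptions, not the platform's rules.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def mask(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

def forward_to_llm(text: str) -> str:
    """Mask before the text leaves for the model provider."""
    try:
        return mask(text)
    except Exception as exc:
        # fail-closed: if masking cannot be applied, block the request
        raise RuntimeError("masking failed; request blocked") from exc

assert forward_to_llm("contact ops@example.com from 10.0.0.1") == "contact <EMAIL> from <IPV4>"
```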

Masking statistics are recorded in the audit trail. PII values are never persisted to disk — they live only in memory for the duration of the request.

Availability: PII controls are available to select customers. Ask during evaluation for enablement details.

Scope: data that agents retrieve from enterprise systems flows to the LLM as context. PII controls address this flow at the proxy layer, before data reaches the model provider.


Managed agent security#

Autonomy levels#

See Autonomy Levels. Key security property: autonomy levels are a behavioral control layer (LLM prompt). The hard security boundary is the policy engine, sandbox, and credential scope — enforced regardless of autonomy level.

Workspace file controls#

See Workspace Files. Key security property: the Agent Guide (safety rules, operating boundaries) is editable only by users through the UI — agents cannot modify their own operating manual.

Subagent isolation#

Flat hierarchy (no recursive delegation). Each subagent has its own token, workspace, memory, and audit trail. Parent-child relationships are tracked. Subagent actions are independently auditable.


Multi-tenant isolation#

Schema-level: every org-scoped table has an org_id foreign key. Database-level constraint, cannot be bypassed by application code.

Application-level: every API call checks org membership, team membership, and project access. No endpoint returns cross-org data.

Per-agent: each agent workspace scoped to its creator. Memory scoped to agent. Conversations scoped to agent + user.
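The application-level check can be sketched as follows, with an in-memory dict standing in for the database. The function name and identity shape are assumptions for illustration:

```python
# Sketch of org-scoped authorization: a row is returned only if the caller
# belongs to the org that owns it. Names are illustrative.
def fetch_resource(db: dict, caller_org_ids: set, resource_id: str):
    row = db.get(resource_id)
    if row is None or row["org_id"] not in caller_org_ids:
        return None  # forbidden and not-found look identical to the caller
    return row

db = {"ws-1": {"org_id": "org-a", "name": "workspace"}}
assert fetch_resource(db, {"org-a"}, "ws-1") is not None
assert fetch_resource(db, {"org-b"}, "ws-1") is None  # cross-org access denied
```

Returning the same result for "forbidden" and "not found" avoids leaking the existence of another org's resources.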


Observability and enforcement#

See Audit Trail for the complete audit model. See Spending Controls for budget enforcement.

Audit reliability: asynchronous, non-blocking writes. Availability over consistency — agent operations never blocked by audit failures.

Spending enforcement: budget checked before every LLM request. Exceeded budgets return 429 with budget-exceeded error. Fail-open if spending service unavailable.
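The pre-request check can be sketched as below. The 429 status and fail-open behavior follow the text; the function signature is an assumption for illustration:

```python
# Sketch of pre-request budget enforcement: check before every LLM call,
# return 429 when exceeded, fail-open if the spending service is down.
def check_budget(spent: float, limit: float, service_up: bool = True) -> int:
    if not service_up:
        return 200  # fail-open: availability over enforcement
    if spent >= limit:
        return 429  # budget-exceeded: the LLM request is rejected
    return 200

assert check_budget(5.0, 10.0) == 200
assert check_budget(10.0, 10.0) == 429
assert check_budget(99.0, 10.0, service_up=False) == 200  # fail-open
```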


Infrastructure security#

Cluster connectivity#

  • Tunnel mode: relay initiates outbound WebSocket. No inbound ports, no VPN.
  • Direct relay mode: public HTTPS endpoint. JWT validation + K8s API impersonation.
  • Keepalive: ping every 30 seconds, auto-reconnect with exponential backoff.
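The reconnect schedule above can be sketched as capped exponential backoff. The base delay and cap are hypothetical — the document specifies only "auto-reconnect with exponential backoff":

```python
# Sketch of a capped exponential backoff schedule for relay reconnects.
# Base delay and cap are illustrative assumptions.
def backoff_delays(attempts: int, base: float = 1.0, cap: float = 60.0) -> list:
    """Delay in seconds before each reconnect attempt, doubling up to a cap."""
    return [min(base * (2 ** n), cap) for n in range(attempts)]

assert backoff_delays(7) == [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```

Real implementations often add random jitter to avoid reconnect stampedes; that detail is omitted here for brevity.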

Deployment options#

  • SaaS (Lens-hosted) — managed infrastructure, operated under Mirantis's SOC 2 Type 1 (Lens K8S IDE) and ISO 27001 control framework
  • Self-hosted — full control over infrastructure and data residency
  • Cloud marketplace — AWS Marketplace and Azure Marketplace

All options provide the same platform capabilities, governance, and audit trail.


Compliance#

See Compliance for certifications, security practices, EU AI Act readiness, and data sovereignty.


Security contact#

To report a security concern or vulnerability, email security@lenshq.io. Our security team will follow up directly.