How Lens Agents Works#

Lens Agents sits between AI agents and enterprise systems. It provides tools and connectivity — governed by identity, policies, privacy controls, and audit — so agents can do real work without ungoverned access.


Architecture#

Lens Agents sits between your organization's AI-tool surface and the enterprise systems agents need to reach. The platform provides a single governance plane — identity, policies, privacy, audit, spending, and tools — that every agent passes through, regardless of which agent, which model, or which environment.

*Diagram — Lens Agents architecture: Your Organization (desktop AI apps, collaboration, management) connects through Lens Agents (identity, policies, privacy, audit, spending, tools) to Connections (any system) and AI/LLM Providers (any model).*

The difference between agent types is where the agent runs; the tools, connectivity, and governance are always Lens Agents. The platform currently integrates two model providers, Anthropic and AWS Bedrock, but the LLM layer is provider-agnostic by design, and additional providers are added as customer needs drive them. See Supported Models.
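The provider-agnostic LLM layer can be pictured as a thin dispatch table. This is an illustrative sketch only: `route_model`, the routing rule, and the endpoint URLs are assumptions, not the platform's API; only the two provider names come from the text above.

```python
# Hypothetical sketch of provider-agnostic model routing.
# Endpoints and the prefix-based rule are illustrative.

PROVIDER_ENDPOINTS = {
    "anthropic": "https://api.anthropic.com/v1/messages",
    "bedrock": "https://bedrock-runtime.us-east-1.amazonaws.com",
}

def route_model(model_id: str) -> str:
    """Map a model identifier to its provider's endpoint."""
    provider = "bedrock" if model_id.startswith("bedrock/") else "anthropic"
    return PROVIDER_ENDPOINTS[provider]
```

Because the routing decision is isolated in one place, adding a provider means adding a table entry, which is the sense in which the layer is provider-agnostic.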


Request flow#

When an agent takes an action, every request passes through the same governance chain — regardless of agent type:

```mermaid
flowchart TD
    A["1. LLM decides to use a tool"]
    B["2. Agent client sends MCP tool call to Lens Agents"]
    C["3. Authentication — token or SSO identity validated"]
    D["4. Tool visibility — only IT-approved tools are shown"]
    E["5. Authorization — project access checked"]
    F["6. Sandbox — isolated execution with policy enforcement"]
    G["7. Proxy — outbound connections governed per policy"]
    H["8. Response returned to agent client"]
    I["9. Audit logged (async)"]

    A --> B --> C --> D --> E --> F --> G --> H --> I
```
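The governance chain above can be sketched as a pipeline of checks that each either pass the request on or reject it. Every name and data structure here is hypothetical, a minimal illustration rather than the platform's implementation:

```python
# Illustrative only — toy stand-ins for the numbered steps above.
APPROVED_TOOLS = {"search_tickets"}            # step 4: visibility filtering
PROJECT_GRANTS = {("alice", "support")}        # step 5: authorization
TOKENS = {"tok-123": "alice"}                  # step 3: identity store

def handle_tool_call(token: str, tool: str, project: str) -> str:
    user = TOKENS.get(token)
    if user is None:                           # 3. authentication
        raise PermissionError("invalid identity")
    if tool not in APPROVED_TOOLS:             # 4. tool visibility
        raise LookupError("tool not visible to this agent")
    if (user, project) not in PROJECT_GRANTS:  # 5. project access
        raise PermissionError("no project access")
    result = f"{tool}: ok"                     # 6-7. sandboxed, proxied run
    print(f"audit: {user} ran {tool}")         # 9. logged (async in reality)
    return result                              # 8. response to agent client
```

The ordering matters: identity is established before any tool is even visible, and execution only happens after every earlier check has passed.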

For managed agents, LLM requests also route through the LLM proxy — adding budget checks, usage extraction, and spending enforcement.


Defense-in-depth#

Each layer adds protection. Even if one layer is bypassed, the others still enforce boundaries:

  1. Authentication — validates identity (SSO or agent token) on every request
  2. Visibility filtering — agents only see tools and systems IT has approved
  3. Authorization — checks team membership and project-level access grants
  4. Sandbox isolation — agent processes run in isolated environments with default-deny networking
  5. Credential injection — credentials are injected server-side by the proxy; the agent never sees raw secrets
  6. Proxy enforcement — outbound traffic is governed per policy (domain allowlists, HTTP method/path restrictions)
  7. Audit — every action is recorded across 7 interaction surfaces
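Layer 6 lends itself to a concrete sketch: outbound requests are matched against a per-policy allowlist of domains, HTTP methods, and path prefixes. The rule format below is an assumption for illustration, not the real policy schema:

```python
# Hedged sketch of proxy enforcement (layer 6). Rule fields are
# illustrative; a request passes only if some rule allows its
# domain, method, AND path prefix.

EGRESS_RULES = [
    {"domain": "api.github.com", "methods": {"GET"}, "path_prefix": "/repos/"},
]

def egress_allowed(domain: str, method: str, path: str) -> bool:
    return any(
        rule["domain"] == domain
        and method in rule["methods"]
        and path.startswith(rule["path_prefix"])
        for rule in EGRESS_RULES
    )

print(egress_allowed("api.github.com", "GET", "/repos/acme/app"))     # True
print(egress_allowed("api.github.com", "DELETE", "/repos/acme/app"))  # False
```

Note the default-deny posture: anything not explicitly allowed by a rule is blocked, which is why this layer still holds even if an agent escapes an earlier check.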

Agent execution modes#

Where the agent process runs relative to the sandbox determines what Lens Agents can govern. Desktop AI tools and external agents run outside the sandbox and reach in via MCP; managed agents and the local CLI run inside, with the full agent process wrapped.

See Agent execution modes for the governance comparison.


Key components#

Sandbox#

Isolated execution environment: privilege dropping, kernel-level network isolation, and proxy-mediated egress. Ships in two deployment shapes that share the same security harness: a shell sandbox for tool execution (Mode 1) and an agent sandbox packaged alongside the agent in a custom image (Mode 2). Platform-managed sandboxes also shut down on idle to bound the exposure window. Runtime-agnostic: works in containers, MicroVMs, or on bare metal.

Learn more about sandbox isolation →

Policy engine#

Defines what each agent can do before it does it. Domain allowlists with HTTP method/path restrictions, credential bindings, and integration controls.

Learn more about policies →
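A policy can be pictured as data that binds what an agent may reach to which credential the proxy injects server-side. The field names below are assumptions for illustration, not Lens Agents' actual policy schema:

```python
# Illustrative policy shape: domain allowlist entries carry the
# credential binding. The agent never sees the secret itself —
# only the proxy resolves the binding to a real token.

POLICY = {
    "agent": "support-bot",
    "allow": [
        {"domain": "api.zendesk.com", "methods": ["GET", "POST"],
         "credential": "zendesk-service-token"},
    ],
}

def credential_for(domain: str):
    """Resolve which credential (if any) the proxy injects for a domain."""
    for rule in POLICY["allow"]:
        if rule["domain"] == domain:
            return rule["credential"]
    return None
```

Keeping the credential reference inside the allow rule is what makes credential injection and egress control one decision: an unlisted domain gets neither connectivity nor a secret.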

Cluster connectivity#

Kubernetes clusters connect via tunnel mode (outbound WebSocket — no inbound ports, no VPN) or direct relay mode (public HTTPS endpoint). Agents access clusters with their own identity via Kubernetes API impersonation.

Learn more about Kubernetes connectivity →
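Kubernetes API impersonation, as used above, works by the platform authenticating with its own service credential and adding an impersonation header so the request is authorized as the agent's identity. The `Impersonate-User` header is standard Kubernetes API behavior; the function and values here are illustrative:

```python
# Sketch of building impersonated Kubernetes API request headers.
# "Impersonate-User" is a real Kubernetes API header; the token and
# user values are placeholders.

def impersonation_headers(service_token: str, agent_user: str) -> dict:
    return {
        "Authorization": f"Bearer {service_token}",  # platform's own identity
        "Impersonate-User": agent_user,              # agent's identity for RBAC
    }
```

The cluster's RBAC rules then apply to the impersonated agent identity, not to the platform's service account, so per-agent access shows up in Kubernetes audit logs as well.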

LLM proxy#

Model requests from managed agents route through the LLM proxy for usage extraction, spending enforcement, and audit. Budget checks happen before forwarding to the model provider. Integrated with Anthropic and AWS Bedrock today; additional providers are added in response to customer need.
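The pre-forwarding budget check can be reduced to a single comparison. Field names, limits, and the cost-estimation step are illustrative assumptions:

```python
# Minimal sketch of the budget gate the LLM proxy applies before
# forwarding a model request; all values are illustrative.

BUDGETS = {"support-bot": {"limit_usd": 50.0, "spent_usd": 49.2}}

def check_budget(agent: str, est_cost_usd: float) -> bool:
    """True if the request fits within the agent's remaining budget."""
    b = BUDGETS[agent]
    return b["spent_usd"] + est_cost_usd <= b["limit_usd"]

print(check_budget("support-bot", 0.5))  # True — still under the limit
print(check_budget("support-bot", 1.5))  # False — would exceed it
```

Because the check runs before the call reaches the provider, an over-budget agent fails fast instead of incurring spend that is only discovered in after-the-fact reporting.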

MCP server registry#

Register upstream MCP servers to extend agent capabilities beyond native integrations. Auto-discover tools, govern access through the policy engine.

Learn more about the MCP registry →
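The registry flow above — auto-discover an upstream server's tools, then govern access through the policy engine — can be sketched as a filter. Server names, tool names, and shapes are assumptions for illustration:

```python
# Illustrative sketch: tools auto-discovered from an upstream MCP
# server are intersected with what policy allows before any agent
# can see them.

DISCOVERED = {"jira": ["create_issue", "delete_project", "search_issues"]}
POLICY_ALLOWED = {"create_issue", "search_issues"}

def visible_tools(server: str) -> list:
    """Only policy-approved tools from a registered server are exposed."""
    return [t for t in DISCOVERED[server] if t in POLICY_ALLOWED]

print(visible_tools("jira"))  # ['create_issue', 'search_issues']
```

This mirrors the visibility-filtering layer: registering a server extends capability, but agents still only see the subset IT has approved.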