
# Data Sovereignty

Lens Agents gives you control over where your data resides. Your deployment choice determines data residency, and no data leaves your environment unless you explicitly configure it to.


## Deployment controls data residency

| Deployment model | Data location | Who controls it |
| --- | --- | --- |
| SaaS (Lens-hosted) | Lens-hosted infrastructure | Lens manages; you select from available regions |
| Self-hosted | Your infrastructure: any cloud, region, or on-premises | You control completely |
| Cloud marketplace | Your AWS or Azure account, in the region you choose | You control within your cloud tenant |

See Deployment Options for details on each model.


## What data stays where

### Platform data

Agent configurations, policies, team structures, audit trail entries, and workspace files are stored in the platform database. For self-hosted and marketplace deployments, this database is in your infrastructure. For SaaS, it is in Lens-hosted infrastructure.

### Agent-to-system traffic

When agents connect to your Kubernetes clusters, AWS accounts, GitHub repositories, or any HTTP system, the traffic flows between the sandbox and your systems. The platform does not store the content of these connections. Audit metadata (destination, method, status code, timestamps) is recorded, but the response payloads are not persisted by the platform.
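As an illustration, an audit metadata record for one of these connections might look like the following. The field names here are assumptions for this sketch, not the platform's actual schema; the point is that only connection metadata is kept, never the response body.

```python
# Hypothetical shape of an audit metadata record for one agent-to-system
# call. Field names are illustrative, not the platform's real schema.
audit_event = {
    "destination": "https://api.github.com/repos/acme/app",  # where the agent connected
    "method": "GET",
    "status_code": 200,
    "started_at": "2024-06-01T12:00:00Z",
    "completed_at": "2024-06-01T12:00:01Z",
}

# The response payload is deliberately absent: only metadata is persisted.
assert "response_body" not in audit_event
```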

### LLM traffic

For managed agents, model requests route through the LLM proxy. The proxy extracts usage metadata (token counts, cost) for spending controls and audit. Prompt content is forwarded to the model provider and is not stored by the platform beyond the agent's own conversation history.
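A minimal sketch of that extraction step, assuming an OpenAI-style `usage` object in the provider response (the function name and the pricing value are hypothetical, not the platform's implementation):

```python
def extract_usage(provider_response: dict, price_per_1k: float = 0.01) -> dict:
    """Pull token counts and an estimated cost from a model response,
    keeping only usage metadata and discarding the message content."""
    usage = provider_response.get("usage", {})
    tokens_in = usage.get("prompt_tokens", 0)
    tokens_out = usage.get("completion_tokens", 0)
    return {
        "prompt_tokens": tokens_in,
        "completion_tokens": tokens_out,
        "estimated_cost_usd": round((tokens_in + tokens_out) / 1000 * price_per_1k, 6),
    }

# Only the metadata survives; the prompt and completion text are not retained.
meta = extract_usage({"usage": {"prompt_tokens": 1000, "completion_tokens": 500},
                      "choices": [{"message": {"content": "..."}}]})
```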

For desktop and external agents, the model traffic does not pass through Lens Agents at all. It goes directly between the agent and the model provider.

### Credential storage

API credentials are encrypted with AES-256-GCM and stored in the platform database. Agent tokens are stored as SHA-256 hashes. In self-hosted deployments, these are in your database. In SaaS, they are in Lens-hosted infrastructure.
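The token-hash side of this scheme can be sketched with the standard library alone. The helper names below are made up for illustration; API-credential encryption with AES-256-GCM would use a crypto library such as `cryptography` and is omitted here.

```python
import hashlib
import hmac
import secrets

def hash_token(token: str) -> str:
    """Persist only the SHA-256 digest of an agent token, so a database
    leak never exposes the plaintext token."""
    return hashlib.sha256(token.encode()).hexdigest()

def verify_token(presented: str, stored_digest: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(hash_token(presented), stored_digest)

token = secrets.token_urlsafe(32)  # issued once to the agent, never stored
stored = hash_token(token)         # only this digest reaches the database
```

Verification re-hashes the presented token and compares digests, so the plaintext never needs to exist server-side after issuance.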


## No data sent to Lens by default

For self-hosted and marketplace deployments, no data is sent to Lens unless you explicitly configure it. The platform operates entirely within your environment.

For SaaS deployments, the platform requires certain data to operate (agent configurations, audit events, conversation history). No data beyond what the platform needs is collected. There is no telemetry sent to Lens from agent-to-system connections.


## GDPR compliance

Lens Agents supports GDPR requirements across all deployment models:

| GDPR requirement | How Lens Agents supports it |
| --- | --- |
| Lawful basis | Data processing is limited to what is necessary for platform operation |
| Data Processing Agreement (DPA) | Available on request for SaaS customers |
| Sub-processor list | Maintained and available on request |
| Right to deletion | Agent data can be deleted per agent or per user; the audit trail is retained per regulatory requirements |
| Data portability | Audit data is exportable via API |
| Breach notification | 72-hour notification per GDPR requirements |
| Data minimization | The platform stores configuration and audit metadata; agent-to-system payloads are not persisted |

### Self-hosted advantage

For organizations with strict GDPR requirements, self-hosted deployment provides full control:

- All data processing happens on your infrastructure
- No sub-processors involved (you manage the stack)
- Data residency is entirely your decision
- You control retention, deletion, and backup policies
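As a sketch of what operator-defined retention could look like in a self-hosted deployment (the one-year window and the helper are hypothetical, not platform defaults):

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)  # hypothetical policy: keep audit events one year

def expired(event_time: datetime, now: datetime) -> bool:
    """An event older than the operator-configured window becomes eligible
    for deletion; self-hosted deployments set this window themselves."""
    return now - event_time > RETENTION

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
```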

### Regional considerations

| Requirement | How to meet it |
| --- | --- |
| EU data must stay in the EU | Self-hosted in an EU region, or a marketplace deployment in an EU AWS/Azure region |
| No data in US-controlled infrastructure | Self-hosted on-premises or in a non-US cloud provider |
| Air-gapped environment | Self-hosted with no external connectivity; agents connect only to internal systems |
| Specific country requirements | Self-hosted in the required jurisdiction |

## Model provider data flow

When agents use LLM providers, prompt data flows to the provider. This is true regardless of deployment model, because the model intelligence runs externally.

Controls available:

- AWS Bedrock: data stays within your AWS account and region. Model providers do not retain your data (per AWS Bedrock data protection policies).
- Anthropic (direct): covered by Anthropic's enterprise data-processing terms; most enterprise agreements include an opt-out from training.
- Self-hosted or alternate providers: self-hosted endpoints (vLLM, Ollama, any OpenAI-compatible endpoint) and other alternate providers are configured per evaluation engagement. Ask during onboarding if this is a requirement.
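To make the "any OpenAI-compatible endpoint" point concrete: switching providers is essentially a base-URL and model-name change. Every value below is a made-up placeholder, not a real endpoint, region, or default.

```python
# Hypothetical provider entries; an OpenAI-compatible self-hosted endpoint
# (vLLM, Ollama, ...) differs from a hosted provider mainly in base URL
# and model identifier. None of these values are real defaults.
providers = {
    "bedrock": {"region": "eu-central-1", "model": "anthropic.claude-3-sonnet"},
    "self_hosted": {"base_url": "http://vllm.internal:8000/v1", "model": "llama-3-70b"},
}
```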

For managed agents, the LLM proxy logs metadata (model, tokens, cost) but does not persist prompt content beyond the conversation history.