# Compliance
Lens Agents is designed for regulated environments and operates under Mirantis's established compliance framework.
## Certifications
Mirantis, the company that builds and operates the Lens product family, holds the following certifications:
| Certification | Scope | Status |
|---|---|---|
| SOC 2 Type 1 | Lens (K8S IDE) | Compliant |
| ISO 27001 | Mirantis Inc. (corporate) | Certified |
**How this applies to Lens Agents.** Lens Agents is built and operated under the same control framework Mirantis maintains for its audited Lens products. A SOC 2 Type 2 audit covering Lens is underway.
Certification timelines, scope statements, and the most recent attestation letters (including Lens Agents-specific attestation scope and timing) are available on request under NDA during evaluation.
## Security practices
- Third-party penetration testing — annual engagement with an independent security firm; reports available on request under NDA
- Vulnerability bounty program — managed through HackerOne
- CVE disclosure — published to the NIST National Vulnerability Database
- Dependency scanning — automated on every build, blocking on critical/high-severity findings
- Two-person review — all code changes require independent review before merge
- Hermetic builds — reproducible build process on self-hosted CI/CD runners
- Encrypted workstations — full-disk encryption, anti-malware, remote wipe capability
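As a rough illustration of the build-blocking behavior described above, here is a minimal sketch of a scan gate. The JSON report format (a list of findings with `id` and `severity` fields) is hypothetical; the actual scanner and report schema are not specified here.

```python
import json
import sys

# Severities that fail the build, per the policy above.
BLOCKING = {"critical", "high"}

def gate(report_path: str) -> int:
    """Return 1 (build fails) if the scan report contains any finding
    at a blocking severity, else 0. Report format is hypothetical:
    a JSON list like [{"id": "CVE-...", "severity": "high"}, ...]."""
    with open(report_path) as report:
        findings = json.load(report)
    blocked = [fnd["id"] for fnd in findings
               if fnd["severity"].lower() in BLOCKING]
    for cve in blocked:
        print(f"blocking finding: {cve}", file=sys.stderr)
    return 1 if blocked else 0
```

In a CI pipeline, the gate's exit code would stop the merge; low and medium findings are reported but do not block.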
## EU AI Act readiness
The EU AI Act is the world's first comprehensive AI regulation. It becomes fully applicable on August 2, 2026, and applies to any AI system deployed in or affecting people in the EU. The Act classifies AI systems into risk tiers — from minimal risk (most business agents) to high risk (agents making employment, credit, or safety decisions) — with different obligations at each level.
For most enterprise AI agent use cases, the requirements are manageable. Here's what applies.
### Most enterprise agents are not high-risk
The Act's strictest obligations (Articles 9–17) apply only to high-risk AI systems in specific domains: employment decisions, credit scoring, critical infrastructure, biometrics, law enforcement. An SRE agent monitoring Kubernetes, a support agent triaging tickets, and a code-review agent are not high-risk.
High-risk classification depends on the use case, not the technology. The same platform can host high-risk agents (an HR agent making hiring recommendations) and non-high-risk agents (an SRE agent checking pod health).
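The use-case-based tiering can be pictured with a deliberately simplified sketch. The domain set below is abridged from the examples above; real classification follows the Act's Annex III, not a five-entry lookup.

```python
# Simplified illustration of the EU AI Act's use-case-based tiering.
# Abridged domain list for illustration only; actual classification
# follows Annex III of the Act.
HIGH_RISK_DOMAINS = {
    "employment",
    "credit_scoring",
    "critical_infrastructure",
    "biometrics",
    "law_enforcement",
}

def risk_tier(use_case_domain: str) -> str:
    """Classify by what the agent does, not the platform it runs on."""
    return "high" if use_case_domain in HIGH_RISK_DOMAINS else "non-high"

# The same platform hosts both tiers:
risk_tier("employment")      # HR hiring-recommendation agent -> "high"
risk_tier("sre_monitoring")  # Kubernetes health-check agent  -> "non-high"
```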
### Transparency obligations for all AI systems
Article 50 applies broadly:
| Requirement | Lens Agents capability |
|---|---|
| Users know they're interacting with AI | Agent identity — every agent is a named, identifiable principal |
| Actions are traceable | Full audit trail across 7 surfaces |
| Human oversight is available | Autonomy levels + Agent Guide with user-only safety rules |
| AI-generated decisions are explainable | Audit trail includes tool calls, parameters, results |
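To make "explainable" concrete, here is an illustrative sketch of what a single audit entry might carry. The field names are hypothetical and are not the actual Lens Agents audit schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """Illustrative shape of an explainable audit entry.
    Field names are hypothetical, not the Lens Agents schema."""
    agent_id: str        # named, identifiable principal
    tool: str            # which tool the agent invoked
    parameters: dict     # arguments the agent supplied
    result: str          # what came back
    surface: str         # which audited surface produced the entry
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AuditRecord(
    agent_id="sre-agent-01",
    tool="kubectl.get_pods",
    parameters={"namespace": "prod"},
    result="3 pods, all Ready",
    surface="tool-call",
)
```

Because every entry names the acting agent, the tool, the parameters, and the result, a reviewer can reconstruct why a decision was made, which is the substance of the Article 50 rows above.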
### Foundation for high-risk use cases
When an agent does operate in a high-risk domain, Lens Agents provides the technical foundation:
| High-risk requirement | Lens Agents provides | Customer provides |
|---|---|---|
| Risk management (Art. 9) | Policy engine, spending controls, autonomy levels, sandbox | Organizational risk assessment |
| Data governance (Art. 10) | PII detection and masking (select customers), credential isolation, default-deny | Data quality processes |
| Technical documentation (Art. 11) | Workspace files, policy definitions, audit history | EU-required format documentation |
| Automatic logging (Art. 12) | Full audit trail, 7 surfaces, queryable, exportable | Log retention per regulations |
| Human oversight (Art. 14) | Autonomy levels, Agent Guide, approval workflows | Oversight procedures |
| Cybersecurity (Art. 15) | Sandbox isolation, credential isolation, token security | Vulnerability management |
The pattern: Lens Agents provides the technical controls. The customer provides the organizational processes. Neither alone is sufficient.
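One way to picture how autonomy levels implement human oversight (Art. 14) is a small approval check. The level names and the destructive-action set below are hypothetical, not the actual Lens Agents configuration.

```python
from enum import IntEnum

class Autonomy(IntEnum):
    """Hypothetical autonomy ladder; real Lens Agents levels may differ."""
    READ_ONLY = 0   # agent can only observe
    PROPOSE = 1     # agent suggests; a human approves every action
    SUPERVISED = 2  # agent acts; destructive actions need approval
    AUTONOMOUS = 3  # agent acts without per-action approval

# Hypothetical set of actions treated as destructive.
DESTRUCTIVE = {"delete", "scale_down", "rotate_credentials"}

def needs_human_approval(level: Autonomy, action: str) -> bool:
    """Decide whether a human must sign off before the action runs."""
    if level <= Autonomy.PROPOSE:
        return True
    if level == Autonomy.SUPERVISED:
        return action in DESTRUCTIVE
    return False
```

The organizational half of the pattern, deciding which agents run at which level and who the approvers are, is the customer-provided oversight procedure from the table.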
### What's outside scope
- GPAI model obligations (Articles 51–53) are the responsibility of model providers (Anthropic, OpenAI, Meta)
- Conformity assessment and the EU declaration of conformity are regulatory processes the customer completes
- Incident reporting to authorities is an organizational process; Lens Agents provides the audit data for investigation
## Data sovereignty
- Deploy on your cloud, your region, your premises
- SaaS, self-hosted, or cloud marketplace deployment options
- Data residency controlled by deployment choice
- No data sent to Lens unless explicitly configured
## Data protection (GDPR)
- Data Processing Agreement (DPA) — available on request
- Sub-processor list — maintained, available on request
- Right to deletion — agent data can be deleted per agent or per user. Audit trail retained per regulatory requirements.
- Self-hosted option — for full control over data residency and processing
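The deletion-with-retention policy above can be sketched in a few lines. The store layout is hypothetical and only illustrates the stated behavior: agent data is erased on request while the audit trail is kept.

```python
def delete_agent_data(store: dict, agent_id: str) -> None:
    """Erase one agent's working data while retaining its audit trail,
    mirroring the retention policy above. Store layout is hypothetical."""
    store["agent_data"].pop(agent_id, None)  # erased on request
    # store["audit_trail"] is intentionally untouched: audit records
    # are retained per regulatory requirements even after deletion.

store = {
    "agent_data": {"agent-1": {"memory": "..."}},
    "audit_trail": [{"agent_id": "agent-1", "tool": "kubectl.get_pods"}],
}
delete_agent_data(store, "agent-1")
```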
## Incident response
- Documented and tested incident response process (details available under NDA)
- 72-hour breach notification per GDPR requirements
- Security incident SLA available in Enterprise agreements
## Security contact
To report a security concern or vulnerability, email security@lenshq.io. Our security team will follow up directly.
## Related
- Security whitepaper — full technical security detail
- Security model — threat model and trust boundaries
- Audit trail — what's logged and how to query it