
EU AI Act Readiness#

The EU AI Act is the world's first comprehensive AI regulation. It applies progressively between 2024 and 2027. The majority of enterprise-relevant provisions — including high-risk obligations under Annex III and the transparency requirements of Article 50 — apply from August 2, 2026. Full application, including high-risk AI components in regulated products (Annex I), follows on August 2, 2027. The Act applies to any AI system deployed in or affecting people in the EU. This page explains how the Act applies to enterprise AI agents and what Lens Agents provides to support compliance.


How the Act works#

The EU AI Act classifies AI systems into risk tiers. Obligations increase with risk level:

| Risk tier | Examples | Key obligations |
| --- | --- | --- |
| Minimal risk | Spam filters, inventory management, most business automation | None mandatory; voluntary codes of conduct |
| Limited risk | Chatbots interacting with people | Transparency (Article 50), including disclosure that users are interacting with AI |
| High risk | Employment decisions, credit scoring, critical infrastructure control, biometric identification | Full compliance: risk management, data governance, logging, human oversight, cybersecurity (Articles 9-17) |
| Unacceptable risk | Social scoring, real-time remote biometric identification in public spaces (narrow exceptions) | Prohibited |

The classification depends on the use case, not the technology. The same platform can host high-risk agents (an HR agent making hiring recommendations) and minimal-risk agents (an SRE agent checking pod health).
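This use-case-driven classification can be sketched as a simple decision function. The domain names and the function itself are illustrative assumptions, not a Lens Agents API:

```python
# Illustrative sketch: the risk tier is a property of the use case, not
# of the platform hosting the agent. Names here are hypothetical.

# Annex III high-risk areas (abbreviated)
HIGH_RISK_DOMAINS = {
    "employment", "credit", "critical_infrastructure",
    "biometric_identification", "law_enforcement", "education",
}

def classify_use_case(domain: str, interacts_with_people: bool) -> str:
    """Map an agent's use case to an EU AI Act risk tier."""
    if domain in HIGH_RISK_DOMAINS:
        return "high"
    if interacts_with_people:
        return "limited"   # chatbot-style transparency duties
    return "minimal"

# Same platform, different tiers:
print(classify_use_case("employment", True))                  # hiring agent -> high
print(classify_use_case("infrastructure_monitoring", False))  # SRE agent -> minimal
```

The point the sketch makes: an HR agent and an SRE agent running on identical infrastructure land in different tiers purely because of what they are used for.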


Most enterprise agents are not high-risk#

The Act's strictest obligations (Articles 9-17) apply only to AI systems used in the high-risk areas defined in Annex III, which include:

  • Employment and worker management
  • Credit and insurance assessment
  • Critical infrastructure management and operation
  • Biometric identification and categorization
  • Law enforcement and border control
  • Education and vocational training access

An SRE agent monitoring Kubernetes, a support agent triaging tickets, a code review agent, and a cost optimization agent all fall outside these areas and are not high-risk under the Act. They fall under minimal or limited risk, where the primary obligation is transparency.


Transparency obligations (Article 50)#

Article 50 applies broadly to AI systems that interact directly with people or generate content. These are the baseline requirements:

| Requirement | What it means | Lens Agents capability |
| --- | --- | --- |
| Users know they interact with AI | People must be informed when they are interacting with an AI system | Agent identity: every agent is a named, identifiable principal in the audit trail |
| Actions are traceable | AI actions must be attributable and reviewable | Full audit trail across 7 interaction surfaces with actor, action, resource, result, and timestamp |
| Human oversight is available | Humans can intervene in or override AI decisions | Autonomy levels (1-5 scale) control agent independence; Agent Guide provides user-only safety rules agents cannot modify |
| Decisions are explainable | It must be possible to understand why an AI system made a decision | Audit trail records tool calls, parameters, and results; full conversation history is retained |

For most enterprise agent deployments, meeting these four transparency requirements is the primary compliance task. Lens Agents provides the technical controls for all four.
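The traceability requirement reduces to a concrete check: every exported audit record must carry the five fields named above. The record shape below is a hypothetical illustration, not the actual Lens Agents export format:

```python
# Sketch: verify that an audit record is attributable and reviewable.
# Field names mirror the table above; the record shape is illustrative.
REQUIRED_FIELDS = {"actor", "action", "resource", "result", "timestamp"}

def is_traceable(record: dict) -> bool:
    """A record is traceable if all five required fields are present."""
    return REQUIRED_FIELDS <= record.keys()

record = {
    "actor": "agent:sre-bot",          # named agent principal
    "action": "kubectl.get",
    "resource": "pods",
    "result": "success",
    "timestamp": "2026-08-02T09:15:00Z",
}
print(is_traceable(record))  # True
```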


Foundation for high-risk use cases#

When an agent operates in a high-risk domain, the Act imposes additional technical and organizational requirements. Lens Agents provides the technical foundation; the customer provides the organizational processes.

Risk management (Article 9)#

The Act requires a risk management system throughout the AI system's lifecycle.

  • Lens Agents provides: policy engine with default-deny networking, spending controls, autonomy levels (behavioral boundaries), sandbox isolation (technical boundaries), integration controls
  • Customer provides: organizational risk assessment, risk documentation, ongoing risk monitoring processes
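Default-deny means an agent can do nothing until a policy explicitly grants it. A minimal sketch of that evaluation logic, combining a domain allowlist with a spending control; the policy shape and field names are assumptions for illustration, not the Lens Agents policy schema:

```python
# Minimal default-deny policy evaluation sketch (hypothetical schema).
from dataclasses import dataclass, field

@dataclass
class Policy:
    allowed_domains: set = field(default_factory=set)  # empty = deny all
    spend_limit_usd: float = 0.0                       # zero = no spending

def allow_request(policy: Policy, domain: str,
                  cost_usd: float, spent_usd: float) -> bool:
    """Deny unless the domain is explicitly allowed and the budget holds."""
    if domain not in policy.allowed_domains:
        return False
    return spent_usd + cost_usd <= policy.spend_limit_usd

p = Policy(allowed_domains={"api.github.com"}, spend_limit_usd=10.0)
print(allow_request(p, "api.github.com", 1.0, 8.5))  # True: allowed, in budget
print(allow_request(p, "evil.example", 0.0, 0.0))    # False: not allowlisted
print(allow_request(p, "api.github.com", 2.0, 9.0))  # False: over budget
```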

Data governance (Article 10)#

Training and operational data must meet quality, relevance, and representativeness criteria.

  • Lens Agents provides: PII detection and masking (available to select customers), credential isolation (agents never see raw secrets), domain-level access controls, proxy-mediated data flow
  • Customer provides: data quality processes, dataset documentation, bias monitoring
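To make PII masking concrete, here is a simplified regex-based stand-in; real detection is more sophisticated, and the two patterns below (emails and US-style SSNs) are not exhaustive:

```python
# Illustrative PII masking sketch -- a simplified stand-in for the
# platform's PII detection, covering only two pattern types.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(mask_pii("Contact jane.doe@example.com, SSN 123-45-6789."))
# Contact [EMAIL], SSN [SSN].
```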

Technical documentation (Article 11)#

High-risk systems require detailed technical documentation before deployment.

  • Lens Agents provides: workspace files (agent configuration), policy definitions (exportable), audit history (queryable and exportable)
  • Customer provides: documentation in the EU-required format, system descriptions, conformity assessment records

Automatic logging (Article 12)#

High-risk AI systems must automatically record events for traceability.

  • Lens Agents provides: full audit trail across 7 interaction surfaces -- MCP tool calls, Kubernetes proxy, shell commands, forward proxy, sandbox operations, AWS proxy, LLM proxy. Queryable via dashboard and API. Exportable for external compliance tooling.
  • Customer provides: log retention policies per applicable regulations, log monitoring procedures
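A customer-side retention policy might be applied as a filter over exported audit events before handing them to external compliance tooling. The event shape and function below are illustrative, not a Lens Agents API:

```python
# Sketch: apply a retention window to exported audit events (hypothetical
# event shape; timestamps are ISO 8601 strings as in the audit table above).
from datetime import datetime, timedelta, timezone

def within_retention(events: list, days: int, now: datetime) -> list:
    """Keep only events newer than the retention cutoff."""
    cutoff = now - timedelta(days=days)
    return [e for e in events
            if datetime.fromisoformat(e["timestamp"]) >= cutoff]

now = datetime(2027, 1, 1, tzinfo=timezone.utc)
events = [
    {"surface": "mcp",   "timestamp": "2026-12-20T00:00:00+00:00"},
    {"surface": "shell", "timestamp": "2026-06-01T00:00:00+00:00"},
]
recent = within_retention(events, days=90, now=now)  # keeps only the MCP event
```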

Human oversight (Article 14)#

High-risk systems must be designed to allow effective human oversight.

  • Lens Agents provides: autonomy levels (observer to autonomous), Agent Guide (user-editable safety rules agents cannot modify), approval workflows, real-time conversation monitoring
  • Customer provides: oversight procedures, designated human overseers, escalation processes
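An approval workflow keyed to the autonomy scale might route actions to a human overseer like this; the thresholds and action names are assumptions for illustration, not the platform's actual rules:

```python
# Hypothetical approval gate keyed to the 1-5 autonomy scale mentioned
# above. Thresholds and action names are illustrative assumptions.
RISKY_ACTIONS = {"delete", "deploy", "spend"}

def needs_human_approval(autonomy_level: int, action: str) -> bool:
    """Lower autonomy routes more actions to a human overseer."""
    if autonomy_level <= 2:             # observer / suggest-only modes
        return True                     # everything requires sign-off
    if autonomy_level <= 4:
        return action in RISKY_ACTIONS  # only risky actions escalate
    return False                        # level 5: fully autonomous

print(needs_human_approval(2, "read"))    # True
print(needs_human_approval(3, "deploy"))  # True
print(needs_human_approval(5, "deploy"))  # False
```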

Cybersecurity (Article 15)#

High-risk systems must achieve an appropriate level of accuracy, robustness, and cybersecurity.

  • Lens Agents provides: sandbox isolation, server-side credential injection, default-deny networking, kernel-level network restrictions, ephemeral CAs; operates under Mirantis's SOC 2 Type 1 (Lens K8S IDE) and ISO 27001 control framework
  • Customer provides: vulnerability management processes, incident response, penetration testing of their deployment

What is outside scope#

Some EU AI Act obligations fall outside the platform's responsibility:

  • GPAI model obligations (Articles 51-53) are the responsibility of the model providers (Anthropic, OpenAI, Meta). These cover model training, evaluation, and systemic risk assessment.
  • Conformity assessment and EU declaration are regulatory processes the deploying organization completes.
  • Incident reporting to authorities is an organizational process. Lens Agents provides the audit data and event history needed for investigation and reporting.
  • Fundamental rights impact assessment (required for certain high-risk deployments by public bodies) is the deployer's responsibility.

Timeline#

| Date | Milestone |
| --- | --- |
| August 1, 2024 | Act entered into force |
| February 2, 2025 | Prohibitions on unacceptable-risk AI practices (Article 5) and AI literacy requirements (Article 4) apply |
| August 2, 2025 | Obligations for GPAI (general-purpose AI) models; governance framework; penalties applicable |
| August 2, 2026 | Most provisions apply: high-risk obligations for Annex III systems (employment, credit, critical infrastructure, etc.), transparency requirements (Article 50), AI regulatory sandboxes operational in each Member State |
| August 2, 2027 | Full application: high-risk AI systems embedded in regulated products (Annex I: medical devices, machinery, toys, etc.); compliance deadline for GPAI models placed on the market before August 2, 2025; AI components of large-scale EU IT systems (Annex X) placed on the market after this date must comply |

Sources: EU AI Act Article 113 (entry into force and application) · European Commission AI Act Service Desk — implementation timeline.