Org-Wide AI Governance Rollout#

A step-by-step guide for platform engineers and IT leads rolling out Lens Agents across an organization. Covers the full journey from initial evaluation through SSO configuration, policy definition, team onboarding, desktop tool connection, managed agent creation, and adoption monitoring.


What you will build#

An org-wide deployment of Lens Agents where:

  • All users authenticate through your existing SSO provider
  • Teams are organized with project-level access grants and per-team policies
  • Engineers connect their desktop AI tools (Claude Code, Cursor) with governed access
  • Operations teams create managed agents for monitoring, triage, and automation
  • IT has full visibility through the audit trail and cost explorer
  • Spending controls prevent runaway costs at the org, team, and agent level

Phase 1: Set up the organization#

Configure SSO#

Lens Agents integrates with the organization's identity provider over OIDC — Okta, Microsoft Entra ID, JumpCloud, and any other OIDC-compliant provider. After SSO is configured, all users authenticate through the IdP. Lens Agents does not hold its own passwords; there are no local accounts.

See SSO for configuration details.
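As a rough illustration, an OIDC integration generally needs a handful of values from your IdP. The field names below are generic OIDC terms, not Lens Agents settings, and the URLs are hypothetical; the actual configuration lives in the SSO docs.

```python
# Generic OIDC relying-party settings (illustrative only; see the
# Lens Agents SSO docs for the real configuration fields).
oidc_config = {
    "issuer": "https://login.example.com",       # your IdP's issuer URL (hypothetical)
    "client_id": "lens-agents",                  # client registered in the IdP
    "redirect_uri": "https://lens.example.com/auth/callback",
    "scopes": ["openid", "profile", "email"],    # standard OIDC scopes
}

# OIDC requires the issuer to be an HTTPS URL.
assert oidc_config["issuer"].startswith("https://")
```

Whatever the exact field names, the same three pieces recur in every OIDC setup: the issuer, a client registration, and the redirect back to the application.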

Create the org structure#

Plan your hierarchy before creating teams:

Organization: Acme Corp
├── Team: Platform Engineering
│   └── Projects: Production, Staging
├── Team: Backend Engineering
│   └── Projects: Production (member), Staging (admin)
├── Team: Frontend Engineering
│   └── Projects: Staging
├── Team: SRE
│   └── Projects: Production (admin), Staging (admin)
├── Team: Support Operations
│   └── Projects: Support Systems
└── Projects
    ├── Production (EKS clusters, AWS production account, GitHub)
    ├── Staging (EKS staging cluster, AWS staging account, GitHub)
    └── Support Systems (CRM, ticketing system connections)

Create projects under Organization then Projects. Connect your infrastructure to each project:

  • Kubernetes clusters — deploy the relay for each cluster (see Kubernetes connection)
  • AWS accounts — configure AWS connections with IAM roles
  • GitHub — connect your GitHub organization

Create teams under Organization then Teams. Grant each team access to the relevant projects with the appropriate role (admin or member).

See Organizations, teams & projects for the full hierarchy reference.
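The hierarchy above can be sanity-checked as plain data before you click through the UI. This is a sketch of the access model, not a Lens Agents API: a team holds per-project grants, each with a role of admin or member.

```python
from dataclasses import dataclass, field

# Illustrative model of the org hierarchy (not a Lens Agents API).
@dataclass
class Team:
    name: str
    grants: dict = field(default_factory=dict)  # project name -> role

sre = Team("SRE", {"Production": "admin", "Staging": "admin"})
backend = Team("Backend Engineering", {"Production": "member", "Staging": "admin"})

def can_administer(team: Team, project: str) -> bool:
    """True only when the team holds the admin role on the project."""
    return team.grants.get(project) == "admin"

assert can_administer(sre, "Production")
assert not can_administer(backend, "Production")       # member, not admin
assert not can_administer(backend, "Support Systems")  # no grant at all
```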


Phase 2: Define policies#

Each team gets its own policy that controls what agents (and desktop AI tools) in that team can access.

Example: SRE team policy#

*.eks.amazonaws.com          → allow
*.amazonaws.com              → allow
api.github.com               → allow
api.pagerduty.com            → allow
*                            → deny

With credential bindings:

  • AWS credentials injected for *.amazonaws.com
  • GitHub token injected for api.github.com
  • PagerDuty API key injected for api.pagerduty.com
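The SRE policy reads as a first-match allow/deny list over domain globs. A minimal sketch of that evaluation model, using the rules above (the real enforcement happens in the sandbox, and the exact matching semantics here are an assumption):

```python
from fnmatch import fnmatch

# SRE team rules from the example above, evaluated top to bottom;
# the first glob that matches the destination host decides.
RULES = [
    ("*.eks.amazonaws.com", "allow"),
    ("*.amazonaws.com", "allow"),
    ("api.github.com", "allow"),
    ("api.pagerduty.com", "allow"),
    ("*", "deny"),  # catch-all: everything else is denied
]

def evaluate(host: str) -> str:
    for pattern, verdict in RULES:
        if fnmatch(host, pattern):
            return verdict
    return "deny"  # defensive default; the catch-all already denies

assert evaluate("prod.eks.amazonaws.com") == "allow"
assert evaluate("api.github.com") == "allow"
assert evaluate("evil.example.com") == "deny"
```

The trailing `*` → deny rule is what makes the policy default-deny: any host not explicitly allowed falls through to it.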

Example: Support Operations team policy#

yourcompany.zendesk.com      → allow
api.salesforce.com           → allow
*.salesforce.com             → allow
*                            → deny

With HTTP-level restrictions:

yourcompany.zendesk.com
  GET    /api/v2/*               → allow
  POST   /api/v2/tickets         → allow
  PUT    /api/v2/tickets/*       → allow
  DELETE                         → deny
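The HTTP-level rules can be read the same way, with the method and path matched together. In this sketch the bare DELETE rule is modeled as DELETE on any path, and default-deny for unmatched requests is an assumption:

```python
from fnmatch import fnmatch

# Zendesk rules from the example above as (method, path glob, verdict).
HTTP_RULES = [
    ("GET", "/api/v2/*", "allow"),
    ("POST", "/api/v2/tickets", "allow"),
    ("PUT", "/api/v2/tickets/*", "allow"),
    ("DELETE", "*", "deny"),
]

def evaluate(method: str, path: str) -> str:
    for rule_method, path_glob, verdict in HTTP_RULES:
        if method == rule_method and fnmatch(path, path_glob):
            return verdict
    return "deny"  # assumed default: anything unmatched is denied

assert evaluate("GET", "/api/v2/tickets/42") == "allow"
assert evaluate("DELETE", "/api/v2/tickets/42") == "deny"
assert evaluate("POST", "/api/v2/users") == "deny"  # no rule matches
```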

Policies are the hard security boundary — enforced at the kernel level by the sandbox, not just at the application level. See Policies for the full configuration reference.


Phase 3: Set spending controls#

Configure spending limits before onboarding users to prevent unexpected costs.

Org-level limit: set a monthly ceiling for the entire organization.

Team-level limits: set per-team budgets based on expected usage. For example:

  • SRE: $500/month (managed agents running heartbeats)
  • Backend Engineering: $200/month (desktop tool usage)
  • Frontend Engineering: $100/month (desktop tool usage)
  • Support Operations: $300/month (triage agent)

Limits are enforced actively — agents are stopped when their team's budget is reached. Desktop tools show a clear message explaining the limit.

Spending limits can be set at organization, team, or per-agent scope. See Spending controls for the full model.
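Conceptually, enforcement is a budget check at every scope an agent belongs to. This sketch uses the example team budget from above plus invented org and per-agent numbers; any one exhausted scope is enough to stop the agent.

```python
# Illustrative budget check across the three scopes (org, team, agent).
# The $500 SRE budget is from this page; the other numbers are invented.
limits = {"org": 2000, "team:SRE": 500, "agent:sre-monitor": 150}
spend = {"org": 900, "team:SRE": 500, "agent:sre-monitor": 120}

def may_run(scopes: list[str]) -> bool:
    """An agent may keep running only while every scope is under budget."""
    return all(spend[s] < limits[s] for s in scopes)

# The SRE team has hit its $500 budget, so its agents stop even though
# the org and per-agent budgets still have headroom.
assert not may_run(["org", "team:SRE", "agent:sre-monitor"])
```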


Phase 4: Roll out desktop AI tools#

Start with a pilot group — a single team of 5-10 engineers. The setup takes under 5 minutes per person.

Engineer setup#

Each engineer configures their AI tool to use the organization's Lens Agents endpoint. On first use the engineer authenticates through the organization's SSO provider — the same identity they use for every other work tool. From that point on, every tool action is attributed to them in the audit trail and scoped by their team's policies.

See Desktop AI Tools for the supported tool list and the governance model.

Verify the pilot#

After the pilot group is connected, the audit trail should show actions attributed to each pilot engineer with the correct identity, and denied requests should appear for any systems outside the pilot team's policy scope. This confirms both governance and observability are working end-to-end before broader rollout.


Phase 5: Create managed agents#

With the platform configured and desktop tools connected, create managed agents for operational use cases:

SRE monitoring agent — monitors Kubernetes clusters, runs heartbeat checks, alerts in Slack:

Create an agent called SRE Monitor in the SRE team. It should
monitor production Kubernetes clusters and alert in #sre-alerts.

Support triage agent — connects to ticketing and CRM, triages incoming tickets:

Create an agent called Support Triage in the Support Operations
team. It should triage Zendesk tickets using Salesforce data.

Each managed agent inherits its team's policies and project access. No additional configuration is needed for access control — the policies you defined in Phase 2 apply automatically.

See Creating a managed agent for the full creation workflow.


Phase 6: Monitor adoption#

Once the rollout is live, track adoption and governance health.

Audit trail#

The audit trail surfaces:

  • Active agents and users — who is using the platform
  • Action types — what tools and systems are being accessed
  • Denied requests — policy enforcement working as expected
  • Error rates — any connectivity or configuration issues

Cost explorer#

The Cost Explorer surfaces:

  • Spending by team — whether teams are within their budgets
  • Spending by agent — which managed agents are most active
  • Model usage — token consumption and cost by model and provider
  • Trends — whether usage is growing, stable, or spiking
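The breakdowns above amount to grouping usage records and summing cost. A toy aggregation over invented records, just to make the per-team and per-agent views concrete:

```python
from collections import defaultdict

# Invented usage records shaped as (team, agent, model, cost in USD).
records = [
    ("SRE", "sre-monitor", "model-a", 1.20),
    ("SRE", "sre-monitor", "model-b", 0.80),
    ("Support Operations", "support-triage", "model-a", 0.50),
]

by_team: dict[str, float] = defaultdict(float)
by_agent: dict[str, float] = defaultdict(float)
for team, agent, model, cost in records:
    by_team[team] += cost
    by_agent[agent] += cost

assert round(by_team["SRE"], 2) == 2.00
assert round(by_agent["support-triage"], 2) == 0.50
```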

Iterate on policies#

As usage data comes in, refine policies. Add domains teams need, adjust spending limits for active agents, and tighten policies if you see unexpected access patterns.


Rollout timeline#

| Week    | Milestone                                                  |
|---------|------------------------------------------------------------|
| 1       | SSO, org structure, policies, spending controls            |
| 2       | Pilot team connected, audit trail verified                 |
| 3       | First managed agents, remaining teams onboarded            |
| 4       | Full adoption monitoring via audit trail and cost explorer |
| Ongoing | Policy refinement based on usage data                      |

What you get#

  • All AI agent usage governed through one platform — desktop tools, external agents, and managed agents
  • OIDC SSO authentication through your existing identity provider
  • Team-based policies that control access at the domain and HTTP method level
  • Spending controls that prevent cost surprises
  • Full audit trail of every action across every agent type
  • Cost visibility broken down by team, agent, and model
  • A repeatable process for onboarding new teams and agents