# Cost Explorer
Cost Explorer provides usage summaries, time-series breakdowns, and interactive drill-down from organization to team to agent. It gives administrators and team leads visibility into LLM spending across all managed agents.
## What it shows
### Summary cards
At the top of the Cost Explorer, four summary cards show aggregate metrics for the selected time range:
- Total cost — aggregate LLM spending across all agents in scope
- Total requests — number of LLM requests made
- Average cost per request — total cost divided by request count
- Cache hit rate — percentage of token reads served from prompt cache (higher is better for cost efficiency)
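As a rough sketch, the four summary metrics can be derived from per-request records. The record shape and field names here are hypothetical, not the product's actual data model:

```python
from dataclasses import dataclass

# Hypothetical per-request record; field names are illustrative only.
@dataclass
class RequestRecord:
    cost: float             # calculated cost in USD
    input_tokens: int       # tokens sent to the model (not cached)
    cache_read_tokens: int  # tokens served from prompt cache

def summary_metrics(requests):
    """Compute the four summary-card values for a set of requests."""
    total_cost = sum(r.cost for r in requests)
    total_requests = len(requests)
    avg_cost = total_cost / total_requests if total_requests else 0.0
    # Cache hit rate: share of all token reads served from the prompt cache.
    token_reads = sum(r.input_tokens + r.cache_read_tokens for r in requests)
    cache_reads = sum(r.cache_read_tokens for r in requests)
    cache_hit_rate = cache_reads / token_reads if token_reads else 0.0
    return total_cost, total_requests, avg_cost, cache_hit_rate
```

The same function applies unchanged at any drill-down level, since each level is just a narrower set of requests.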
### Time-series chart
A stacked area chart shows spending over time, broken down by team or by agent. This reveals trends — cost increasing after a new agent was deployed, spending spikes during incident response, gradual efficiency improvements from prompt caching.
### Breakdown tables
Cost Explorer provides tabular breakdowns across several dimensions:
| Breakdown | What it shows |
|---|---|
| By team | Total cost per team, sorted by spend |
| By agent | Total cost per agent within a team |
| By model | Cost split across models (e.g., Claude vs. Claude Haiku) |
| By provider | Cost split across LLM providers |
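A minimal sketch of how such a breakdown could be computed, assuming request records are dicts with a `cost` field plus one key per dimension (illustrative names, not the actual data model):

```python
from collections import defaultdict

def breakdown(records, dimension):
    """Sum cost per value of `dimension` (e.g. "team", "agent", "model",
    "provider"), sorted by spend descending like the breakdown tables."""
    totals = defaultdict(float)
    for record in records:
        totals[record[dimension]] += record["cost"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```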
### Drill-down
Cost Explorer supports interactive drill-down:
- Organization level — see total spending across all teams
- Click a team — filter to that team's agents, see per-agent breakdown
- Click an agent — see the agent's cost history, model usage, and request patterns
Each level shows the same summary cards and time-series chart, scoped to the selected entity.
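Drill-down amounts to scoping the same aggregation to a narrower set of records. A hypothetical sketch, assuming each record carries `team` and `agent` fields:

```python
def scope(records, team=None, agent=None):
    """Narrow records to one team or one agent, mimicking drill-down.
    With no arguments, returns the organization-level view unchanged."""
    out = records
    if team is not None:
        out = [r for r in out if r["team"] == team]
    if agent is not None:
        out = [r for r in out if r["agent"] == agent]
    return out
```

Summary cards and the time-series chart are then recomputed over the scoped records, which is why every level shows the same layout.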
## Time ranges
| Range | Use for |
|---|---|
| 7 days | Recent activity and short-term trends |
| 30 days | Monthly cost tracking and budget monitoring |
| 90 days | Quarterly reviews and long-term trend analysis |
## Period comparison
Cost Explorer can compare the current period against the previous period of the same length. This shows whether spending is increasing or decreasing and by how much. For example, selecting "30 days" with comparison enabled shows the current 30-day spend alongside the previous 30-day spend, with percentage change.
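The percentage change can be expressed as a small helper, assuming the current and previous equal-length periods have already been aggregated to total costs (a sketch, not the product's implementation):

```python
def period_comparison(current_cost, previous_cost):
    """Percentage change of the current period vs. the previous
    period of the same length. Positive means spending increased."""
    if previous_cost == 0:
        return None  # no baseline to compare against
    return (current_cost - previous_cost) / previous_cost * 100.0
```

For example, a current 30-day spend of $120 against a previous 30-day spend of $100 is a 20% increase.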
## Data source
Cost data comes from the LLM proxy that manages all model requests for managed agents. Every request is tracked with:
- Input tokens — tokens sent to the model
- Output tokens — tokens generated by the model
- Cache read tokens — tokens served from prompt cache (reduced cost)
- Cache write tokens — tokens written to prompt cache
- Model — which model was used
- Provider — which LLM provider served the request
- Calculated cost — cost at the provider's per-token rates (no markup)
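The calculated cost is a weighted sum of the four token counts at per-token rates. A sketch with illustrative rates only; actual rates vary by model and provider:

```python
def calculated_cost(input_tokens, output_tokens,
                    cache_read_tokens, cache_write_tokens, rates):
    """Cost at the provider's per-token rates (no markup).
    `rates` maps token kind to USD per token."""
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]
            + cache_read_tokens * rates["cache_read"]      # discounted reads
            + cache_write_tokens * rates["cache_write"])

# Illustrative rates (assumptions, not real pricing): USD per token.
example_rates = {
    "input": 3e-6,
    "output": 15e-6,
    "cache_read": 3e-7,    # cache reads cost a fraction of fresh input
    "cache_write": 3.75e-6,
}
```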
Data is available in near real time: a model request appears in Cost Explorer shortly after it completes.
## Relationship to spending controls
Cost Explorer shows what has been spent. Spending controls enforce limits on what can be spent.
| | Cost Explorer | Spending controls |
|---|---|---|
| Purpose | Visibility and analysis | Enforcement and limits |
| Scope | Organization, team, agent | Organization-wide and per-agent |
| Action | View and report | Block requests when limits are exceeded |
Use Cost Explorer to understand spending patterns. Use spending controls to set guardrails.
## Access
- Organization admins — full access to all teams and agents
- Team leads — access to their team's agents
- Team members — access to agents they own
## Related
- Spending controls — budget enforcement
- Audit trail — detailed action tracking beyond cost
- What Is an Agent-Hour? — billing unit measurement