# Capture traffic
Route LLM API calls from your application through Lens Loop to capture and observe request and response data.
## Overview
Lens Loop captures LLM traffic by acting as a local proxy. It records requests and responses and then forwards them to the target LLM provider, such as OpenAI or Anthropic.
Lens Loop runs a local proxy on port 31300. Configure your application to send LLM requests to this proxy instead of directly to the provider. In most cases, you do not need to change your application logic. Update only the base URL and, if required, add HTTP headers.
```mermaid
graph LR
    A["Your application"] --> B["Lens Loop (localhost:31300)"]
    B --> C["LLM Provider"]
```
!!! info

    If you are connected to a remote environment, LLM traffic is sent through OpenTelemetry, so you do not need to set a Lens Loop base URL in your application code.
## Understanding API formats
Lens Loop captures traffic based on the API format that your application uses, not on the SDK that you install.
Many LLM providers support multiple API formats. For example:
- **OpenAI SDK**
    - Chat Completions API
    - Responses API
    - Anthropic, Gemini, and other providers through OpenAI-compatible endpoints
- **Anthropic SDK**
    - Messages API
    - Chat Completions API
- **Google Gen AI SDK**
    - Gemini and other providers
When you configure Lens Loop, select the endpoint that matches the API format your code uses.
| API format | Description | Providers |
|---|---|---|
| Chat Completions | OpenAI's widely adopted universal format | OpenAI, Anthropic, Gemini, OpenRouter, Azure, local LLMs |
| Messages | Anthropic's native format | Anthropic |
| Responses | OpenAI's newer API format | OpenAI |
| OpenTelemetry | For applications with OTEL instrumentation | Any |
## Choose the correct endpoint
Use the following table to identify the correct Lens Loop base URL for your SDK and API method.
| SDK | API method | Lens Loop base URL |
|---|---|---|
| OpenAI | `chat.completions.create()` | `http://localhost:31300/openai/v1` |
| OpenAI | `responses.create()` | `http://localhost:31300/openai/v1` |
| Anthropic | `messages.create()` | `http://localhost:31300/anthropic` |
| Google Gen AI | `models.generate_content()` | `http://localhost:31300/gemini` |
| OpenAI to Anthropic | `chat.completions.create()` | `http://localhost:31300/anthropic/v1` |
| OpenAI to Gemini | `chat.completions.create()` | `http://localhost:31300/gemini/v1beta/openai` |
| OpenAI to OpenRouter | `chat.completions.create()` | `http://localhost:31300/openrouter/api/v1` |
| OpenAI to Azure | `chat.completions.create()` | `http://localhost:31300/azure/{resource}` |
| OpenAI to Local LLM | `chat.completions.create()` | `http://localhost:31300/openai/http/{host}:{port}/v1` |
| Any OTEL SDK | OTLP export | `http://localhost:31300/otel/v1/traces` |
## Supported endpoints by API format

### Chat Completions

Use this format with `client.chat.completions.create()` in the OpenAI SDK.

| Provider | Base URL | Comments |
|---|---|---|
| OpenAI | `http://localhost:31300/openai/v1` | |
| Anthropic (OpenAI-compatible) | `http://localhost:31300/anthropic/v1` | |
| Google Gemini | `http://localhost:31300/gemini/v1beta/openai` | Lens Loop captures Gemini traffic through Google's OpenAI-compatible endpoint. Use the OpenAI SDK instead of the Gemini SDK. |
| OpenRouter | `http://localhost:31300/openrouter/api/v1` | |
| Azure OpenAI | `http://localhost:31300/azure/{resource}` | Replace `{resource}` with your Azure resource name. For example: `http://localhost:31300/azure/my-openai-resource` |

### Messages

Use this format with `client.messages.create()` in the Anthropic SDK.

| Provider | Base URL |
|---|---|
| Anthropic | `http://localhost:31300/anthropic` |

### Responses

Use this format with `client.responses.create()` in the OpenAI SDK.

| Provider | Base URL |
|---|---|
| OpenAI | `http://localhost:31300/openai/v1` |

### Gemini (native)

Use this endpoint with `client.models.generate_content()` in the Google Gen AI SDK.

| Provider | Base URL |
|---|---|
| Google Gen AI | `http://localhost:31300/gemini` |

### OpenTelemetry

Use this option if your application is already instrumented with OpenTelemetry. Send OTLP traces directly to Lens Loop.

| Endpoint | URL |
|---|---|
| OTLP Traces | `http://localhost:31300/otel/v1/traces` |
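If your application already emits OTLP traces, point the exporter at this endpoint. Below is a minimal sketch using the OpenTelemetry Python SDK; the package names, service name, and span name are illustrative assumptions, not Lens Loop requirements:

```python
# Assumes: pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Send OTLP traces to Lens Loop instead of (or in addition to) your usual backend.
provider = TracerProvider(resource=Resource.create({"service.name": "my-llm-app"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:31300/otel/v1/traces"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("llm-call"):
    ...  # your instrumented LLM call goes here
```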
## Use local AI servers

Lens Loop supports OpenAI-compatible local servers through the following URL pattern:

```
http://localhost:31300/openai/http/{host}:{port}/v1
```

See several common examples below:

**Ollama** (default port `11434`):

```
http://localhost:31300/openai/http/localhost:11434/v1
```

**llama.cpp server** (default port `8080`):

```
http://localhost:31300/openai/http/localhost:8080/v1
```

**LM Studio** (default port `1234`):

```
http://localhost:31300/openai/http/127.0.0.1:1234/v1
```

Replace `{host}:{port}` with your server's address.
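For example, here is a minimal sketch routing the OpenAI Python SDK to a local Ollama server through Lens Loop. The model name `llama3.2` and the placeholder API key are assumptions; most local servers ignore the key, but the SDK requires a value:

```python
from openai import OpenAI

# Ollama itself listens on localhost:11434; Lens Loop proxies requests to it.
client = OpenAI(
    base_url="http://localhost:31300/openai/http/localhost:11434/v1",
    api_key="ollama",  # placeholder; most local servers do not check it
    default_headers={"X-Loop-Project": "local-experiments"},
)

completion = client.chat.completions.create(
    model="llama3.2",  # assumed model; use whatever your local server hosts
    messages=[{"role": "user", "content": "Hello world!"}],
)

print("Response:", completion.choices[0].message.content)
```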
## Configure HTTP headers

Use HTTP headers to group and correlate captured traces. These headers control how traces are organized and displayed in the Lens Loop interface.

| Header | Purpose | Required |
|---|---|---|
| `X-Loop-Project` | Groups traces by project name. Creates the project if it does not exist. | Recommended |
| `X-Loop-TraceID` | Groups multiple spans into a single trace. | Optional |
| `X-Loop-ParentID` | Defines parent-child relationships between spans. | Optional |
| `X-Loop-Custom-Label` | Adds custom labels for filtering and grouping. | Optional |

Without the `X-Loop-Project` header, traces are assigned to and displayed under the Default project. For workflows that include multiple LLM calls, use `X-Loop-TraceID` to group related calls and `X-Loop-ParentID` to define the call hierarchy, as shown in the sketch below.
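The sketch below issues two related calls that share one trace, passing per-request headers through the OpenAI Python SDK's `extra_headers` parameter. The UUID-based ID values are an assumption for illustration; use whatever ID scheme suits your workflow:

```python
import os
import uuid

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31300/openai/v1",
    api_key=os.environ.get("OPENAI_API_KEY"),
    default_headers={"X-Loop-Project": "my-project"},
)

trace_id = str(uuid.uuid4())  # shared by every call in this workflow

# First step of the workflow.
outline = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Outline a haiku about proxies."}],
    extra_headers={"X-Loop-TraceID": trace_id},
)

# Second step: the same X-Loop-TraceID groups both calls into one trace.
# An X-Loop-ParentID header could be added the same way to nest this span
# under the first one.
haiku = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": outline.choices[0].message.content}],
    extra_headers={"X-Loop-TraceID": trace_id},
)

print("Response:", haiku.choices[0].message.content)
```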
## Code examples

The following sections provide configuration examples for various SDKs, API formats, and programming languages.

### OpenAI SDK (Chat Completions API)
**Python**

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31300/openai/v1",  # (1)!
    api_key=os.environ.get("OPENAI_API_KEY"),
    default_headers={
        "X-Loop-Project": "my-project",  # (2)!
    },
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello world!"},
    ],
)

print("Response:", completion.choices[0].message.content)
```

1. Routes all requests through Lens Loop's local proxy
2. Groups traces under "my-project" in Lens Loop
**TypeScript**

```typescript
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "http://localhost:31300/openai/v1", // (1)!
  apiKey: process.env.OPENAI_API_KEY,
  defaultHeaders: {
    "X-Loop-Project": "my-project", // (2)!
  },
});

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello world!" },
  ],
});

console.log("Response:", completion.choices[0].message.content);
```

1. Routes all requests through Lens Loop's local proxy
2. Groups traces under "my-project" in Lens Loop
**C#**

```csharp
using System.ClientModel;
using System.ClientModel.Primitives;
using OpenAI;
using OpenAI.Chat;

var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");

var options = new OpenAIClientOptions
{
    Endpoint = new Uri("http://localhost:31300/openai/v1") // (1)!
};
// A pipeline policy reliably adds the header to every request.
options.AddPolicy(new LoopHeaderPolicy(), PipelinePosition.PerCall); // (2)!

var client = new ChatClient(
    model: "gpt-4o-mini",
    credential: new ApiKeyCredential(apiKey!),
    options: options);

ChatCompletion completion = client.CompleteChat(
[
    new SystemChatMessage("You are a helpful assistant."),
    new UserChatMessage("Hello world!")
]).Value;

Console.WriteLine($"Response: {completion.Content[0].Text}");

// Pipeline policy that attaches the X-Loop-Project header to each request.
class LoopHeaderPolicy : PipelinePolicy
{
    public override void Process(PipelineMessage message, IReadOnlyList<PipelinePolicy> pipeline, int currentIndex)
    {
        message.Request.Headers.Set("X-Loop-Project", "my-project");
        ProcessNext(message, pipeline, currentIndex);
    }

    public override async ValueTask ProcessAsync(PipelineMessage message, IReadOnlyList<PipelinePolicy> pipeline, int currentIndex)
    {
        message.Request.Headers.Set("X-Loop-Project", "my-project");
        await ProcessNextAsync(message, pipeline, currentIndex);
    }
}
```

1. Routes all requests through Lens Loop's local proxy
2. Groups traces under "my-project" in Lens Loop
### OpenAI SDK (Responses API)
**Python**

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31300/openai/v1",
    api_key=os.environ.get("OPENAI_API_KEY"),
    default_headers={
        "X-Loop-Project": "my-project",
    },
)

response = client.responses.create(
    model="gpt-4o-mini",
    input="Write a haiku about observability.",
)

print("Response:", response.output_text)
```
**TypeScript**

```typescript
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "http://localhost:31300/openai/v1",
  apiKey: process.env.OPENAI_API_KEY,
  defaultHeaders: {
    "X-Loop-Project": "my-project",
  },
});

const response = await openai.responses.create({
  model: "gpt-4o-mini",
  input: "Write a haiku about observability.",
});

console.log("Response:", response.output_text);
```
### Anthropic SDK (Messages API)
**Python**

```python
import os

from anthropic import Anthropic

client = Anthropic(
    base_url="http://localhost:31300/anthropic",  # (1)!
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
    default_headers={
        "X-Loop-Project": "my-project",
    },
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello world!"},
    ],
)

print("Response:", message.content[0].text)
```

1. Note: Use `/anthropic`, not `/anthropic/v1`; the SDK appends the API path automatically.
**TypeScript**

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  baseURL: "http://localhost:31300/anthropic", // (1)!
  apiKey: process.env.ANTHROPIC_API_KEY,
  defaultHeaders: {
    "X-Loop-Project": "my-project",
  },
});

const message = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [
    { role: "user", content: "Hello world!" },
  ],
});

console.log("Response:", message.content[0].text);
```

1. Note: Use `/anthropic`, not `/anthropic/v1`; the SDK appends the API path automatically.
### OpenAI SDK to Anthropic (Chat Completions)
Use the OpenAI SDK to call Anthropic models via their OpenAI-compatible endpoint:
**Python**

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31300/anthropic/v1",  # (1)!
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
    default_headers={
        "X-Loop-Project": "my-project",
    },
)

completion = client.chat.completions.create(
    model="claude-sonnet-4-20250514",
    messages=[
        {"role": "user", "content": "Hello world!"},
    ],
)

print("Response:", completion.choices[0].message.content)
```

1. Use `/anthropic/v1` when using the OpenAI SDK with Anthropic's Chat Completions endpoint.
**TypeScript**

```typescript
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "http://localhost:31300/anthropic/v1", // (1)!
  apiKey: process.env.ANTHROPIC_API_KEY,
  defaultHeaders: {
    "X-Loop-Project": "my-project",
  },
});

const completion = await openai.chat.completions.create({
  model: "claude-sonnet-4-20250514",
  messages: [
    { role: "user", content: "Hello world!" },
  ],
});

console.log("Response:", completion.choices[0].message.content);
```

1. Use `/anthropic/v1` when using the OpenAI SDK with Anthropic's Chat Completions endpoint.
### OpenAI SDK to Gemini
Use the OpenAI SDK to call Gemini models via Google's OpenAI-compatible endpoint:
**Python**

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31300/gemini/v1beta/openai",
    api_key=os.environ.get("GOOGLE_API_KEY"),
    default_headers={
        "X-Loop-Project": "my-project",
    },
)

completion = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[
        {"role": "user", "content": "Hello world!"},
    ],
)

print("Response:", completion.choices[0].message.content)
```
**TypeScript**

```typescript
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "http://localhost:31300/gemini/v1beta/openai",
  apiKey: process.env.GOOGLE_API_KEY,
  defaultHeaders: {
    "X-Loop-Project": "my-project",
  },
});

const completion = await openai.chat.completions.create({
  model: "gemini-2.0-flash",
  messages: [
    { role: "user", content: "Hello world!" },
  ],
});

console.log("Response:", completion.choices[0].message.content);
```
!!! info "See also"

    - Share traces across your team by deploying a remote Loop Server.
    - Explore the Navigator, the Traces view, and the Details panel.