
Capture Traffic#

Route your application's LLM API calls through Lens Loop to capture traces with full observability — prompts, responses, latency, token usage, and costs.


How It Works#

Lens Loop runs a local proxy on port 31300. Point your application's base URL to this proxy instead of directly to your LLM provider. Loop captures the request and response, then forwards everything to the provider transparently.

Your App  →  Lens Loop (localhost:31300)  →  LLM Provider (OpenAI, Anthropic, etc.)

No code changes required beyond configuration

Your existing code continues to work — you only change the base URL and optionally add headers for organization.
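
A minimal sketch with the OpenAI Python SDK: the only change from a direct-to-provider setup is the base_url (full, provider-specific examples appear below).

```python
from openai import OpenAI

# Before: the SDK talks to the provider directly
# client = OpenAI()  # defaults to https://api.openai.com/v1

# After: the same code, routed through the Lens Loop proxy
client = OpenAI(base_url="http://localhost:31300/openai/v1")
```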


Understanding API Formats#

Loop captures traffic based on the API format you're using, not which SDK you installed.

Most LLM providers support multiple API formats. For example:

  • The OpenAI SDK can call either the Chat Completions API or the Responses API
  • The Anthropic SDK calls Anthropic's native Messages API
  • The OpenAI SDK can also call Anthropic, Gemini, and other providers via their OpenAI-compatible Chat Completions endpoints

When configuring Loop, pick the endpoint that matches the API format your code is using.

| API Format | Description | Providers |
|---|---|---|
| Chat Completions | OpenAI's universal format, widely adopted | OpenAI, Anthropic, Gemini, OpenRouter, Azure, Local LLMs |
| Messages | Anthropic's native format | Anthropic |
| Responses | OpenAI's newer API format | OpenAI |
| OpenTelemetry | For apps with OTEL instrumentation | Any |

Quick Reference#

Use this table to find the right Loop endpoint based on what SDK and API method you're using:

| SDK | API Method | Loop Base URL |
|---|---|---|
| OpenAI | chat.completions.create() | http://localhost:31300/openai/v1 |
| OpenAI | responses.create() | http://localhost:31300/openai/v1 |
| Anthropic | messages.create() | http://localhost:31300/anthropic |
| OpenAI → Anthropic | chat.completions.create() | http://localhost:31300/anthropic/v1 |
| OpenAI → Gemini | chat.completions.create() | http://localhost:31300/gemini/v1beta/openai |
| OpenAI → OpenRouter | chat.completions.create() | http://localhost:31300/openrouter/api/v1 |
| OpenAI → Azure | chat.completions.create() | http://localhost:31300/azure/{resource} |
| OpenAI → Local LLM | chat.completions.create() | http://localhost:31300/openai/http/{host}:{port}/v1 |
| Any OTEL SDK | OTLP export | http://localhost:31300/otel/v1/traces |

Supported Endpoints by API Format#

Chat Completions API#

The most widely supported format. Use with client.chat.completions.create() in the OpenAI SDK.

| Provider | Base URL |
|---|---|
| OpenAI | http://localhost:31300/openai/v1 |
| Anthropic (OpenAI-compatible) | http://localhost:31300/anthropic/v1 |
| Google Gemini | http://localhost:31300/gemini/v1beta/openai |
| OpenRouter | http://localhost:31300/openrouter/api/v1 |
| Azure OpenAI | http://localhost:31300/azure/{resource} |

Azure OpenAI

Replace {resource} with your Azure resource name. For example: http://localhost:31300/azure/my-openai-resource
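
As a hedged sketch, pointing the OpenAI Python SDK at the Azure pattern above might look like the following; the resource name, the AZURE_OPENAI_API_KEY variable, and the model value (typically your Azure deployment name) are placeholders for your own setup.

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31300/azure/my-openai-resource",  # your Azure resource name
    api_key=os.environ.get("AZURE_OPENAI_API_KEY"),  # placeholder env var
    default_headers={"X-Loop-Project": "my-project"},
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # for Azure, typically your deployment name
    messages=[{"role": "user", "content": "Hello world!"}],
)
print("Response:", completion.choices[0].message.content)
```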

Gemini

Loop captures Gemini traffic via Google's OpenAI-compatible endpoint. Use the OpenAI SDK, not the Gemini SDK.

Messages API#

Anthropic's native format. Use with client.messages.create() in the Anthropic SDK.

| Provider | Base URL |
|---|---|
| Anthropic | http://localhost:31300/anthropic |

Responses API#

OpenAI's newer API format. Use with client.responses.create() in the OpenAI SDK.

| Provider | Base URL |
|---|---|
| OpenAI | http://localhost:31300/openai/v1 |

OpenTelemetry#

For applications already instrumented with OpenTelemetry. Send OTLP traces directly to Loop.

| Endpoint | URL |
|---|---|
| OTLP Traces | http://localhost:31300/otel/v1/traces |
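
For example, an application instrumented with the OpenTelemetry Python SDK could point its OTLP/HTTP span exporter at Loop. This sketch assumes the standard opentelemetry-sdk and opentelemetry-exporter-otlp-proto-http packages:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Send OTLP traces to Lens Loop instead of directly to your collector
exporter = OTLPSpanExporter(endpoint="http://localhost:31300/otel/v1/traces")

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```

Any spans your instrumentation emits are then exported to Loop in batches.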

Local AI Servers#

Lens Loop works with OpenAI-compatible local servers using this pattern:

```
http://localhost:31300/openai/http/{host}:{port}/v1
```

Common Configurations#

Ollama (default port: 11434)

```
http://localhost:31300/openai/http/localhost:11434/v1
```

llama.cpp (default port: 8080)

```
http://localhost:31300/openai/http/localhost:8080/v1
```

LM Studio (default port: 1234)

```
http://localhost:31300/openai/http/127.0.0.1:1234/v1
```

Replace {host}:{port} with your server's address.
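
For instance, here is a sketch pointing the OpenAI SDK at an Ollama server on its default port; the model name is an assumption and should match whatever your local server has loaded:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31300/openai/http/localhost:11434/v1",
    api_key="ollama",  # local servers typically ignore the key, but the SDK requires one
    default_headers={"X-Loop-Project": "my-project"},
)

completion = client.chat.completions.create(
    model="llama3",  # placeholder: use a model your local server serves
    messages=[{"role": "user", "content": "Hello world!"}],
)
print("Response:", completion.choices[0].message.content)
```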


Loop Headers#

Use HTTP headers to organize your captured traces:

| Header | Purpose | Required |
|---|---|---|
| X-Loop-Project | Groups traces by project name. Creates the project automatically if it doesn't exist. | Recommended |
| X-Loop-TraceID | Links multiple spans into a single trace | Optional |
| X-Loop-ParentID | Establishes parent-child relationships between spans | Optional |
| X-Loop-Custom-Label | Adds custom labels for filtering and grouping | Optional |

Using headers for multi-step workflows

When your application makes multiple LLM calls as part of a single workflow (like a chain or agent), use X-Loop-TraceID to group them together and X-Loop-ParentID to show the hierarchy.
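
Here is a sketch of that pattern with the OpenAI Python SDK, passing per-request headers through extra_headers. The trace ID is any stable string you generate; the X-Loop-ParentID value shown is purely illustrative:

```python
import os
import uuid
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31300/openai/v1",
    api_key=os.environ.get("OPENAI_API_KEY"),
    default_headers={"X-Loop-Project": "my-project"},
)

trace_id = str(uuid.uuid4())  # shared by every call in this workflow

# Step 1: first call in the workflow
outline = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Outline a blog post about tracing."}],
    extra_headers={"X-Loop-TraceID": trace_id},
)

# Step 2: second call, linked into the same trace as a child span
draft = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Expand the first outline item."}],
    extra_headers={"X-Loop-TraceID": trace_id, "X-Loop-ParentID": "outline-step"},
)
```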


Code Examples#

OpenAI SDK — Chat Completions API#

**Python**

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31300/openai/v1",  # (1)!
    api_key=os.environ.get("OPENAI_API_KEY"),
    default_headers={
        "X-Loop-Project": "my-project",  # (2)!
    }
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello world!"}
    ]
)

print("Response:", completion.choices[0].message.content)
```

  1. Routes all requests through Lens Loop's local proxy
  2. Groups traces under "my-project" in Lens Loop
**Node.js**

```javascript
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "http://localhost:31300/openai/v1", // (1)!
  apiKey: process.env.OPENAI_API_KEY,
  defaultHeaders: {
    "X-Loop-Project": "my-project", // (2)!
  },
});

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello world!" },
  ],
});

console.log("Response:", completion.choices[0].message.content);
```

  1. Routes all requests through Lens Loop's local proxy
  2. Groups traces under "my-project" in Lens Loop
**C#**

```csharp
using System.ClientModel;
using System.ClientModel.Primitives;
using OpenAI;
using OpenAI.Chat;

var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");

var options = new OpenAIClientOptions
{
    Endpoint = new Uri("http://localhost:31300/openai/v1") // (1)!
};
options.AddPolicy(new LoopHeaderPolicy(), PipelinePosition.PerCall); // (2)!

var client = new ChatClient(
    model: "gpt-4o-mini",
    credential: new ApiKeyCredential(apiKey!),
    options: options
);

ChatCompletion completion = client.CompleteChat(
    new SystemChatMessage("You are a helpful assistant."),
    new UserChatMessage("Hello world!")
);

Console.WriteLine($"Response: {completion.Content[0].Text}");

// Adds the X-Loop-Project header to every outgoing request.
class LoopHeaderPolicy : PipelinePolicy
{
    public override void Process(PipelineMessage message, IReadOnlyList<PipelinePolicy> pipeline, int currentIndex)
    {
        message.Request.Headers.Set("X-Loop-Project", "my-project");
        ProcessNext(message, pipeline, currentIndex);
    }

    public override async ValueTask ProcessAsync(PipelineMessage message, IReadOnlyList<PipelinePolicy> pipeline, int currentIndex)
    {
        message.Request.Headers.Set("X-Loop-Project", "my-project");
        await ProcessNextAsync(message, pipeline, currentIndex);
    }
}
```

  1. Routes all requests through Lens Loop's local proxy
  2. Groups traces under "my-project" in Lens Loop

OpenAI SDK — Responses API#

**Python**

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31300/openai/v1",
    api_key=os.environ.get("OPENAI_API_KEY"),
    default_headers={
        "X-Loop-Project": "my-project",
    }
)

response = client.responses.create(
    model="gpt-4o-mini",
    input="Write a haiku about observability."
)

print("Response:", response.output_text)
```
**Node.js**

```javascript
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "http://localhost:31300/openai/v1",
  apiKey: process.env.OPENAI_API_KEY,
  defaultHeaders: {
    "X-Loop-Project": "my-project",
  },
});

const response = await openai.responses.create({
  model: "gpt-4o-mini",
  input: "Write a haiku about observability.",
});

console.log("Response:", response.output_text);
```

Anthropic SDK — Messages API#

**Python**

```python
import os
from anthropic import Anthropic

client = Anthropic(
    base_url="http://localhost:31300/anthropic",  # (1)!
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
    default_headers={
        "X-Loop-Project": "my-project",
    }
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello world!"}
    ]
)

print("Response:", message.content[0].text)
```

  1. Note: Use /anthropic (not /anthropic/v1) — the SDK appends the API path automatically
**Node.js**

```javascript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({
  baseURL: "http://localhost:31300/anthropic", // (1)!
  apiKey: process.env.ANTHROPIC_API_KEY,
  defaultHeaders: {
    "X-Loop-Project": "my-project",
  },
});

const message = await client.messages.create({
  model: "claude-sonnet-4-20250514",
  max_tokens: 1024,
  messages: [
    { role: "user", content: "Hello world!" }
  ],
});

console.log("Response:", message.content[0].text);
```

  1. Note: Use /anthropic (not /anthropic/v1) — the SDK appends the API path automatically

OpenAI SDK → Anthropic (Chat Completions)#

Use the OpenAI SDK to call Anthropic models via their OpenAI-compatible endpoint:

**Python**

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31300/anthropic/v1",  # (1)!
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
    default_headers={
        "X-Loop-Project": "my-project",
    }
)

completion = client.chat.completions.create(
    model="claude-sonnet-4-20250514",
    messages=[
        {"role": "user", "content": "Hello world!"}
    ]
)

print("Response:", completion.choices[0].message.content)
```

  1. Use /anthropic/v1 when using OpenAI SDK with Anthropic's Chat Completions endpoint
**Node.js**

```javascript
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "http://localhost:31300/anthropic/v1", // (1)!
  apiKey: process.env.ANTHROPIC_API_KEY,
  defaultHeaders: {
    "X-Loop-Project": "my-project",
  },
});

const completion = await openai.chat.completions.create({
  model: "claude-sonnet-4-20250514",
  messages: [
    { role: "user", content: "Hello world!" }
  ],
});

console.log("Response:", completion.choices[0].message.content);
```

  1. Use /anthropic/v1 when using OpenAI SDK with Anthropic's Chat Completions endpoint

OpenAI SDK → Gemini#

Use the OpenAI SDK to call Gemini models via Google's OpenAI-compatible endpoint:

**Python**

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31300/gemini/v1beta/openai",
    api_key=os.environ.get("GOOGLE_API_KEY"),
    default_headers={
        "X-Loop-Project": "my-project",
    }
)

completion = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[
        {"role": "user", "content": "Hello world!"}
    ]
)

print("Response:", completion.choices[0].message.content)
```
**Node.js**

```javascript
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "http://localhost:31300/gemini/v1beta/openai",
  apiKey: process.env.GOOGLE_API_KEY,
  defaultHeaders: {
    "X-Loop-Project": "my-project",
  },
});

const completion = await openai.chat.completions.create({
  model: "gemini-2.0-flash",
  messages: [
    { role: "user", content: "Hello world!" }
  ],
});

console.log("Response:", completion.choices[0].message.content);
```

Video Walkthrough#

Watch how to capture traffic in Lens Loop:


Next Steps#

  • View Your Traces: See captured requests in the Traces view with full details. (Using Lens Loop)

  • Share with Your Team: Set up a remote environment to share traces across developers. (Add Environment)