
Quickstart#

Capture your first large language model (LLM) trace in Lens Loop in less than five minutes.

Prerequisites#

Before you begin, ensure that you have the following:

  • Access to the Lens Loop closed beta. Request access at lenshq.io
  • A Lens ID. Sign up if you have not already registered
  • An LLM API key for OpenAI, Anthropic, or another supported provider
  • An application that makes LLM API calls

Install and sign in#

  1. After requesting access, check your inbox for the invitation email.
  2. Follow the link in the email and download Lens Loop for your operating system.
  3. Complete the installation process and start Lens Loop.
  4. Sign in with your Lens ID, or create one when prompted.

Info

For platform-specific instructions, see Install Lens Loop.


Connect your application#

To capture traffic, route your LLM API requests through the Lens Loop local proxy. This requires two configuration changes in your application:

  1. Set the API base URL to http://localhost:31300/openai/v1.
  2. Add the X-Loop-Project HTTP header and specify a project name.

If the project does not exist, Lens Loop creates it automatically.

Python

import os
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31300/openai/v1",  # (1)!
    api_key=os.environ.get("OPENAI_API_KEY"),
    default_headers={
        "X-Loop-Project": "hello-loop",  # (2)!
    }
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello!"}]
)
print(response.choices[0].message.content)
  1. Routes traffic through the Lens Loop local proxy
  2. Tags this trace with your project name

Node.js

import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "http://localhost:31300/openai/v1", // (1)!
  apiKey: process.env.OPENAI_API_KEY,
  defaultHeaders: {
    "X-Loop-Project": "hello-loop", // (2)!
  },
});

const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Say hello!" }],
});
console.log(response.choices[0].message.content);
  1. Routes traffic through the Lens Loop local proxy
  2. Tags this trace with your project name
Use your AI assistant

See Set up with the AI assistant to have the assistant apply this configuration for you.
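

If your application calls the API over raw HTTP rather than through an SDK, the same two changes apply: send requests to the proxy's base URL and add the X-Loop-Project header. Below is a minimal sketch using Python's requests library; the requests dependency is an assumption, and the /chat/completions path is the standard OpenAI route appended to the base URL shown above.

import os
import requests

# Same two changes as the SDK examples: proxy base URL + project header.
resp = requests.post(
    "http://localhost:31300/openai/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "X-Loop-Project": "hello-loop",
    },
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Say hello!"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])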


View the trace#

After your application sends a request, the trace appears in Lens Loop.

  1. Open Lens Loop.
  2. In the Navigator, select your project.
  3. Click Traces to view captured requests.
  4. Select a trace to review details such as:
    • Prompt and response content
    • Token usage and estimated cost (see the sketch after this list)
    • Latency breakdown
    • Model name and request parameters
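
To sanity-check the token counts Lens Loop reports, you can read the same usage data from the SDK response itself. A minimal sketch, continuing the Python example above; it assumes response is the ChatCompletion object returned there.

# `response` is the ChatCompletion from the earlier example.
usage = response.usage  # token counts the API returns with every completion
print(f"prompt tokens:     {usage.prompt_tokens}")
print(f"completion tokens: {usage.completion_tokens}")
print(f"total tokens:      {usage.total_tokens}")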

Completed

You have captured your first LLM trace. All requests routed through Lens Loop are now observable, searchable, and available for analysis.


Next steps

  • Capture traffic


    Route LLM API calls from your application through Lens Loop to capture and observe request and response data.

  • Add a Remote Environment


    Deploy a shared Loop Server so your whole team can see traces together.

  • Using Lens Loop


    Explore the Navigator, the Traces view, and the Details panel.