# Quickstart
Capture your first large language model (LLM) trace in Lens Loop in less than five minutes.
## Prerequisites
Before you begin, ensure that you have the following:
- Access to the Lens Loop closed beta. Request access at lenshq.io.
- A Lens ID. Sign up if you have not already registered.
- An LLM API key for OpenAI, Anthropic, or another supported provider
- An application that makes LLM API calls
## Install and sign in
1. After requesting access, check your mailbox for the invitation email.
2. Follow the link in the email and download Lens Loop for your operating system.
3. Complete the installation process and start Lens Loop.
4. Sign in with your Lens ID, or create one when prompted.
> **Info:** For platform-specific instructions, see Install Lens Loop.
## Connect your application
To capture traffic, route your LLM API requests through the Lens Loop local proxy. This requires two configuration changes in your application:
- Set the API base URL to `http://localhost:31300/openai/v1`.
- Add the `X-Loop-Project` HTTP header and specify a project name. If the project does not exist, Lens Loop creates it automatically.
```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:31300/openai/v1",  # (1)!
    api_key=os.environ.get("OPENAI_API_KEY"),
    default_headers={
        "X-Loop-Project": "hello-loop",  # (2)!
    },
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello!"}],
)
print(response.choices[0].message.content)
```
1. Routes traffic through the Lens Loop local gateway.
2. Tags this trace with your project name.
```javascript
import OpenAI from "openai";

const openai = new OpenAI({
  baseURL: "http://localhost:31300/openai/v1", // (1)!
  apiKey: process.env.OPENAI_API_KEY,
  defaultHeaders: {
    "X-Loop-Project": "hello-loop", // (2)!
  },
});

const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Say hello!" }],
});
console.log(response.choices[0].message.content);
```
1. Routes traffic through the Lens Loop local gateway.
2. Tags this trace with your project name.
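Both snippets are ordinary OpenAI SDK calls with only the base URL and one header changed, so any HTTP client can be routed the same way. The following is a minimal sketch using Python's `requests` library; it assumes the gateway transparently forwards the standard OpenAI chat completions endpoint, as the SDK examples above imply.

```python
# Minimal sketch of the raw HTTP request the SDKs above make. Only the base
# URL and the X-Loop-Project header are Loop-specific; everything else is the
# standard OpenAI chat completions API.
import os

import requests  # pip install requests

response = requests.post(
    "http://localhost:31300/openai/v1/chat/completions",  # Loop gateway, not api.openai.com
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "X-Loop-Project": "hello-loop",  # the trace is filed under this project
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Say hello!"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

If this request shows up as a trace in your project, the proxy is configured correctly.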
> **Use your AI assistant:** To apply these configuration changes automatically, see Set up with the AI assistant.
## View the trace
After your application sends a request, the trace appears in Lens Loop.
1. Open Lens Loop.
2. In the Navigator, select your project.
3. Click Traces to view captured requests.
4. Select a trace to review details such as:
    - Prompt and response content
    - Token usage and estimated cost (see the sketch after this list)
    - Latency breakdown
    - Model name and request parameters
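The estimated cost is derived from the token counts the provider returns with each response. If you want to sanity-check the figure, the arithmetic is a simple rate calculation; the per-million-token prices below are hypothetical placeholders, so substitute your provider's current rates.

```python
# Illustrative cost estimate from a trace's token usage. The prices are
# hypothetical USD rates per 1M tokens, not any provider's actual pricing.
PRICE_PER_MTOK = {"input": 0.15, "output": 0.60}

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Cost = input tokens * input rate + output tokens * output rate."""
    return (
        prompt_tokens / 1_000_000 * PRICE_PER_MTOK["input"]
        + completion_tokens / 1_000_000 * PRICE_PER_MTOK["output"]
    )

# Example: a request with 120 prompt tokens and 45 completion tokens.
print(f"${estimate_cost(120, 45):.6f}")  # $0.000045
```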
> **Completed:** You have captured your first LLM trace. All requests routed through Lens Loop are now observable, searchable, and available for analysis.
## Next steps

- Route LLM API calls from your application through Lens Loop to capture and observe request and response data.
- Deploy a shared Loop Server so your whole team can see traces together.
- Explore the Navigator, the Traces view, and the Details panel.