Advanced Features
Tool Calling / Function Calling
Prysm captures tool calls from OpenAI, Anthropic, and Google Gemini models. When a model returns tool calls, they're stored in the trace and displayed in the Request Explorer.
```python
from openai import OpenAI

client = OpenAI()  # client configured so requests route through Prysm

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in London?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
)
```
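When the model decides to call the tool, the trace records a `tool_calls` entry whose arguments arrive as a JSON-encoded string, so they need one decode step before dispatch. A minimal parsing sketch (the payload below is an illustrative sample shaped like an OpenAI tool call, not a real capture):

```python
import json

# Illustrative tool call in the shape OpenAI returns and Prysm stores.
tool_call = {
    "id": "call_abc123",
    "type": "function",
    "function": {
        "name": "get_weather",
        # Arguments are a JSON string, not a dict.
        "arguments": '{"city": "London"}',
    },
}

# Decode the arguments before routing to your own get_weather handler.
args = json.loads(tool_call["function"]["arguments"])
print(tool_call["function"]["name"], args["city"])  # get_weather London
```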
Logprobs
Request logprobs from OpenAI and they'll be captured in the trace.
```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "The capital of France is"}],
    logprobs=True,
    top_logprobs=3,
)
```
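Captured logprobs are natural-log values, so exponentiating one gives the model's probability for that candidate token. A small sketch of reading a `top_logprobs` entry (the values are invented for illustration):

```python
import math

# Illustrative top_logprobs for the first generated token;
# the numbers are made up, not from a real capture.
top_logprobs = [
    {"token": " Paris", "logprob": -0.01},
    {"token": " paris", "logprob": -4.8},
    {"token": " the", "logprob": -6.2},
]

# Convert each natural-log probability into a percentage.
probs = {c["token"]: math.exp(c["logprob"]) for c in top_logprobs}
for token, prob in probs.items():
    print(f"{token!r}: {prob:.1%}")
```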
What Gets Captured
| Field | Description |
|---|---|
| Model | Which model was called |
| Provider | The upstream provider |
| Latency | Total request duration in milliseconds |
| TTFT | Time to first token (streaming only) |
| Prompt tokens | Input token count |
| Completion tokens | Output token count |
| Cost | Calculated cost based on model pricing (USD) |
| Status | `success`, `error`, or `timeout` |
| Request body | Full messages array, tools, and parameters |
| Response body | Complete model response |
| Tool calls | Function/tool calls returned by the model |
| Logprobs | Token log probabilities (if requested) |
| User ID | From header or prysm_context |
| Session ID | From header or prysm_context |
| Custom metadata | From header or prysm_context |
| Finish reason | `stop`, `length`, `tool_calls`, or `content_filter` |
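The Cost field is derived from the captured token counts and per-model pricing. The calculation can be sketched as below; the per-million-token rates are placeholders for illustration, not Prysm's actual pricing table:

```python
# Hypothetical USD prices per million tokens; real rates come from the
# provider's pricing and change over time.
PRICING = {
    "gpt-4o-mini": {"prompt": 0.15, "completion": 0.60},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate USD cost from token counts and per-million-token rates."""
    rates = PRICING[model]
    return (
        prompt_tokens * rates["prompt"]
        + completion_tokens * rates["completion"]
    ) / 1_000_000

cost = estimate_cost("gpt-4o-mini", prompt_tokens=1000, completion_tokens=500)
print(f"${cost:.6f}")  # $0.000450
```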