# Error Handling
The proxy preserves upstream error types. If the LLM provider returns an error, you get the same exception you'd get without Prysm.
```python
import openai

# Point the client at the Prysm proxy; the base URL and sk-prysm-* key
# come from your project settings.
client = openai.OpenAI(
    api_key="sk-prysm-...",
    base_url="https://...",  # Prysm proxy endpoint for your project
)

try:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "test"}],
    )
except openai.AuthenticationError:
    print("Invalid upstream API key (check project settings)")
except openai.RateLimitError:
    print("Provider rate limited")
except openai.APIError as e:
    print(f"API error: {e}")
```
## Prysm-Specific Errors
| Status | Error | Cause |
|---|---|---|
| 401 | Invalid API key | The `sk-prysm-*` key is missing, malformed, or revoked |
| 403 | Request blocked | Security scan detected a high-threat request (injection, PII, policy violation) |
| 404 | Project not found | The API key doesn't belong to any active project |
| 429 | Rate limited | Free-tier limit reached (10K requests/month) |
| 502 | Provider error | Upstream provider returned an error or is unreachable |
| 504 | Timeout | Request to the upstream provider timed out |
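Because Prysm surfaces these as plain HTTP status codes, it can help to centralize the mapping from status to an actionable message. The sketch below is illustrative only (the helper name and messages are not part of any Prysm SDK); it simply encodes the table above:

```python
# Illustrative helper: map Prysm's error statuses (from the table above)
# to actionable messages. Not part of Prysm's SDK.
PRYSM_ERRORS = {
    401: "Invalid, malformed, or revoked sk-prysm-* key",
    403: "Request blocked by security scan",
    404: "API key doesn't belong to any active project",
    429: "Free-tier rate limit reached (10K requests/month)",
    502: "Upstream provider returned an error or is unreachable",
    504: "Request to the upstream provider timed out",
}

def describe_prysm_error(status: int) -> str:
    """Return a human-readable description for a Prysm error status."""
    return PRYSM_ERRORS.get(status, f"Unexpected status {status}")
```

A dispatcher like this keeps error handling in one place when the same proxy is called from several parts of an application.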
## Troubleshooting
| Symptom | Cause | Fix |
|---|---|---|
| 401 on every request | Wrong API key format | Ensure the key starts with `sk-prysm-` |
| 502 on first request | Provider not configured | Add the provider API key in Settings |
| Traces missing in dashboard | Key mismatch | Verify the key in your code matches the dashboard |
| High latency | Cold start or provider slowness | Check the provider status page; the first request is slower |
| 403 unexpected blocks | Security too aggressive | Lower the threat threshold in Security settings |
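The 502/504 rows above are transient by nature, so retrying with backoff is usually the right fix. A minimal, generic sketch (the helper and its defaults are illustrative, not a Prysm API; in practice you would pass the SDK's connection/status exceptions as `transient`):

```python
import time

def with_retries(call, transient=(TimeoutError, ConnectionError),
                 attempts=3, base_delay=1.0):
    """Retry a callable on transient errors with exponential backoff.

    `call` is a zero-argument callable (e.g. a lambda wrapping the
    chat-completion request). Non-transient errors propagate immediately;
    the last transient error is re-raised after the final attempt.
    """
    for i in range(attempts):
        try:
            return call()
        except transient:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)  # 1s, 2s, 4s, ...
```

Keep the retry budget small: a 403 (security block) or 401 (bad key) will never succeed on retry, so only the transient exception types should be listed.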