- API Type
- REST API with endpoints like /v1/metric-collections, /v1/evaluate for running LLM evaluations on test cases, traces, spans, and threads
- Authentication
- API key authentication is required on every request (CONFIDENT_API_KEY header). Keys come in two scopes: Organization-level (manage teams, billing, projects) and Project-level (datasets, prompts, traces). SSO is available for Enterprise
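The header name and the /v1/metric-collections path come from the notes above; a minimal sketch of building such a request with only the standard library, assuming a base URL and payload shape that are purely illustrative:

```python
# Sketch: constructing an authenticated request to the REST API described above.
# The CONFIDENT_API_KEY header and /v1/metric-collections path come from the
# docs summary; the base URL and payload shape are assumptions for illustration.
import json
import urllib.request

API_KEY = "your-project-api-key"           # placeholder, not a real key
BASE_URL = "https://api.confident-ai.com"  # assumed base URL

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Attach the API key header, which the docs require on every request."""
    return urllib.request.Request(
        url=BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "CONFIDENT_API_KEY": API_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("/v1/metric-collections", {"name": "smoke-tests"})
print(req.full_url)  # https://api.confident-ai.com/v1/metric-collections
```

The request is only constructed here, not sent; in practice you would pass it to `urllib.request.urlopen` (or use an HTTP client of your choice) and handle the response.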
- Webhooks
- No webhook support mentioned in documentation
- SDKs
- Python (deepeval), TypeScript (deepeval-ts), integrates with LangChain callbacks, AI SDK, and other observability frameworks
- Documentation
- Good: comprehensive API reference with quickstart examples and an authentication guide at confident-ai.com/docs/api-reference
- Sandbox
- A free account provides full-platform access for testing, with usage tracked against quotas
- SLA
- Not specified in public documentation
- Rate Limits
- Usage is tracked against per-API-key quotas; specific limits are not detailed in the public docs
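Since the concrete limits are not published, a client can defend itself by retrying on rate-limit responses with exponential backoff. A generic sketch; the HTTP 429 handling and backoff schedule are common-practice assumptions, not Confident AI specifics:

```python
# Sketch: defensive retry-with-backoff for quota/rate-limit (HTTP 429) responses.
# The docs do not publish concrete limits, so the status code and backoff
# schedule here are generic assumptions rather than documented behavior.
import time

def with_backoff(call, max_retries: int = 4, base_delay: float = 0.5):
    """Retry `call` while it signals rate limiting; return the first other result."""
    for attempt in range(max_retries + 1):
        status, body = call()
        if status != 429:
            return status, body
        if attempt == max_retries:
            raise RuntimeError("rate limit: retries exhausted")
        time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, 4s, ...

# Example with a stub that is rate-limited twice before succeeding:
responses = iter([(429, ""), (429, ""), (200, "ok")])
status, body = with_backoff(lambda: next(responses), base_delay=0.01)
print(status, body)  # 200 ok
```

Respecting a `Retry-After` header, when the API returns one, would be a better delay source than the fixed schedule above.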
- Use Cases
- Run online evaluations, create custom datasets/prompts, human annotations, production tracing, CI/CD regression testing, experiment with prompts/models
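The CI/CD regression-testing use case above boils down to gating a build on evaluation scores. A hypothetical sketch: the metric names, thresholds, and score shape are illustrative stand-ins for whatever the evaluation endpoint or SDK actually returns:

```python
# Sketch: CI/CD regression gate over evaluation scores.
# The metric names, baseline thresholds, and the stubbed `latest` scores are
# illustrative assumptions; in a real pipeline the scores would come from the
# evaluation API or the deepeval SDK.
def regression_gate(scores: dict[str, float], thresholds: dict[str, float]) -> list[str]:
    """Return the metrics whose score fell below their baseline threshold."""
    return [m for m, s in scores.items() if s < thresholds.get(m, 0.0)]

baseline = {"answer_relevancy": 0.8, "faithfulness": 0.9}
latest = {"answer_relevancy": 0.85, "faithfulness": 0.7}  # stubbed eval output

failures = regression_gate(latest, baseline)
if failures:
    print("regression:", failures)  # a CI job would exit non-zero here
```

Wired into CI, the job fails whenever `failures` is non-empty, which prevents a prompt or model change that degrades a tracked metric from merging.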