# Monitoring & Traces
Track performance, debug issues, and optimize your AI workflows.
Monitoring & Traces gives you visibility into every workflow execution: what each run did, how long each step took, where failures occurred, and what it cost.
## Key Components
### Traces
End-to-end visibility into workflow execution:
- See every step of a workflow run
- Track timing for each node
- View inputs and outputs at each stage
- Identify bottlenecks and failures
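The span data a trace captures can be sketched as a minimal structure. Names like `NodeSpan`, `Trace`, and `run_node` are illustrative, not a real API; they show timing, input/output capture, and bottleneck lookup on a per-node basis:

```python
import time
from dataclasses import dataclass, field

@dataclass
class NodeSpan:
    # One traced step in a workflow run: timing plus captured I/O.
    name: str
    inputs: dict
    outputs: dict = field(default_factory=dict)
    duration_ms: float = 0.0

@dataclass
class Trace:
    # End-to-end record of a single workflow execution.
    run_id: str
    spans: list = field(default_factory=list)

    def run_node(self, name, fn, inputs):
        # Time the node, capture its inputs/outputs, and append a span.
        start = time.perf_counter()
        outputs = fn(**inputs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        self.spans.append(NodeSpan(name, inputs, outputs, elapsed_ms))
        return outputs

trace = Trace(run_id="run-123")
result = trace.run_node("summarize",
                        lambda text: {"summary": text[:10]},
                        {"text": "A long document body"})
slowest = max(trace.spans, key=lambda s: s.duration_ms)  # the bottleneck node
```

Sorting spans by `duration_ms` is the same operation the trace view performs when it highlights slow nodes.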
### Metrics
Quantitative data about your workflows:
| Metric | What It Shows |
|---|---|
| Execution Count | How many times workflows run |
| Success Rate | Percentage of successful completions |
| Latency | Time to complete (p50, p95, p99) |
| Token Usage | LLM tokens consumed |
| Cost | Estimated spend on AI providers |
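The p50/p95/p99 latency figures in the table are percentiles over recorded run durations. A minimal nearest-rank implementation (the sample values are made up) shows why p95 and p99 surface tail latency that an average would hide:

```python
def percentile(samples, p):
    # Nearest-rank percentile: p in (0, 100] over a list of samples.
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical per-run latencies in milliseconds; note the outlier.
latencies_ms = [120, 95, 400, 130, 110, 105, 98, 2500, 115, 125]

p50 = percentile(latencies_ms, 50)  # typical run
p95 = percentile(latencies_ms, 95)  # tail: dominated by the slow outlier
```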
### Evals
Quality assessment of AI outputs:
- Response relevance scoring
- Hallucination detection
- Consistency checks
- Custom evaluation criteria
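A custom evaluation criterion is just a function that scores an output. As a deliberately crude sketch (real relevance evals typically use an LLM judge or embedding similarity, not lexical overlap), a keyword-overlap scorer looks like this:

```python
def relevance_score(question, answer):
    # Crude lexical-overlap proxy for response relevance, in [0.0, 1.0].
    # Illustrative only: production evals use stronger semantic measures.
    q_terms = set(question.lower().split())
    a_terms = set(answer.lower().split())
    return len(q_terms & a_terms) / len(q_terms) if q_terms else 0.0

score = relevance_score("what is the refund policy",
                        "the refund policy allows returns within 30 days")
```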
## Using Traces for Debugging
When a workflow fails or produces unexpected results:
1. **Find the run.** Locate the specific execution in the run history.
2. **View the trace.** Open the full execution path for that run.
3. **Inspect nodes.** Check the inputs and outputs at each step.
4. **Identify the issue.** Pinpoint the node where things first went wrong.
5. **Fix and retest.** Update the workflow and verify with a new run.
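The inspect-and-identify steps above amount to walking the trace in execution order and stopping at the first node that recorded an error, since downstream failures are often just symptoms. A sketch, assuming spans are plain dicts with a `node` name and an optional `error`:

```python
def find_failure(spans):
    # Walk the trace in order; return the first span that recorded an error.
    for i, span in enumerate(spans):
        if span.get("error"):
            return i, span
    return None

# Hypothetical trace: the last node "failed", but the root cause is upstream.
spans = [
    {"node": "fetch_docs", "error": None},
    {"node": "summarize", "error": "context length exceeded"},
    {"node": "respond", "error": "skipped: upstream failure"},
]
idx, bad = find_failure(spans)
```

Here the root cause is `summarize`, not `respond`, which is why inspecting nodes in order matters.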
## Cost Tracking
Monitor AI spending across your workflows:
- Token usage by model
- Cost breakdown by workflow
- Trends over time
- Budget alerts
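Per-workflow cost breakdowns come from multiplying token counts by per-model rates. A sketch with entirely hypothetical prices (substitute your provider's real per-1K-token rates for the placeholder model names):

```python
# Hypothetical (input_rate, output_rate) USD prices per 1K tokens.
PRICES = {"small-model": (0.0005, 0.0015), "large-model": (0.01, 0.03)}

def run_cost(model, prompt_tokens, completion_tokens):
    # Cost = input tokens at the input rate + output tokens at the output rate.
    in_rate, out_rate = PRICES[model]
    return (prompt_tokens / 1000) * in_rate + (completion_tokens / 1000) * out_rate

# Aggregate spend across two illustrative runs.
total = sum(run_cost(m, p, c) for m, p, c in [
    ("small-model", 1200, 300),
    ("large-model", 800, 400),
])
```

Summing `run_cost` grouped by workflow or by model gives the breakdowns and trend lines described above.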
## Performance Optimization
Use monitoring data to improve workflows:
- Identify slow nodes
- Optimize prompt lengths
- Choose efficient models
- Cache repeated operations
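Caching repeated operations is the cheapest of these wins when a node is deterministic for a given input. In Python, `functools.lru_cache` memoizes by argument; the `embed` function here is a stand-in for any expensive, repeatable call:

```python
from functools import lru_cache

calls = {"count": 0}  # instrumentation to show the cache working

@lru_cache(maxsize=1024)
def embed(text):
    # Stand-in for an expensive, deterministic operation (e.g. embedding).
    calls["count"] += 1
    return hash(text)

embed("same input")
embed("same input")  # served from cache; the body runs only once
```

Caching only helps when identical inputs recur; for LLM calls with varying prompts, shortening the prompt or choosing a smaller model usually matters more.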
## Alerting
Set up notifications for:
- Workflow failures
- High error rates
- Cost thresholds
- Performance degradation
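All four alert types above reduce to comparing a metric against a threshold. A minimal rule-evaluation sketch (the rule names, metric keys, and limits are illustrative):

```python
def check_alerts(metrics, rules):
    # Compare each metric against its limit; return the names of fired alerts.
    return [name for name, (key, limit) in rules.items()
            if metrics.get(key, 0) > limit]

# Hypothetical rules: alert name -> (metric key, threshold).
rules = {
    "high_error_rate": ("error_rate", 0.05),
    "cost_budget": ("daily_cost_usd", 50.0),
    "slow_p95": ("p95_latency_ms", 2000),
}

fired = check_alerts(
    {"error_rate": 0.12, "daily_cost_usd": 12.0, "p95_latency_ms": 2500},
    rules,
)
```

A real alerting pipeline would evaluate these rules on a schedule and route fired alerts to a notification channel.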