Monitoring & Traces

Track performance, debug issues, and optimize your AI workflows.

Monitoring & Traces provides visibility into your workflow execution, helping you track performance, debug issues, and optimize costs.

Key Components

Traces

End-to-end visibility into workflow execution:

  • See every step of a workflow run
  • Track timing for each node
  • View inputs and outputs at each stage
  • Identify bottlenecks and failures
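The trace data described above can be sketched as a simple data structure. This is a minimal illustration, not the platform's actual trace format: the `Span` and `Trace` names, fields, and `record` helper are all assumptions for the example.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Span:
    """One node's execution: name, input, output, and timing."""
    node: str
    input: object
    output: object = None
    start: float = 0.0
    end: float = 0.0

    @property
    def duration(self) -> float:
        return self.end - self.start

@dataclass
class Trace:
    """An ordered list of spans covering one workflow run."""
    spans: list = field(default_factory=list)

    def record(self, node: str, fn, payload):
        # Time the node, capture its input and output, keep the span.
        span = Span(node=node, input=payload, start=time.monotonic())
        span.output = fn(payload)
        span.end = time.monotonic()
        self.spans.append(span)
        return span.output

trace = Trace()
result = trace.record("uppercase", str.upper, "hello")
# The slowest span points at the bottleneck node.
slowest = max(trace.spans, key=lambda s: s.duration)
```

With every node wrapped this way, "identify bottlenecks" reduces to sorting spans by duration.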

Metrics

Quantitative data about your workflows:

Metric           What It Shows
Execution Count  How many times workflows run
Success Rate     Percentage of successful completions
Latency          Time to complete (p50, p95, p99)
Token Usage      LLM tokens consumed
Cost             Estimated spend on AI providers
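The p50/p95/p99 latency figures above are percentiles over a window of run durations. A minimal nearest-rank sketch (the sample latencies are invented for illustration):

```python
def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile: smallest value covering p% of samples."""
    ranked = sorted(samples)
    k = max(0, round(p / 100 * len(ranked)) - 1)
    return ranked[k]

# Durations (ms) for ten hypothetical workflow runs.
latencies_ms = [120, 95, 310, 140, 88, 99, 105, 2200, 130, 97]

p50 = percentile(latencies_ms, 50)  # typical run
p95 = percentile(latencies_ms, 95)  # tail latency
```

Note how a single slow run dominates p95 while barely moving p50, which is why tail percentiles are tracked separately from the median.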

Evals

Quality assessment of AI outputs:

  • Response relevance scoring
  • Hallucination detection
  • Consistency checks
  • Custom evaluation criteria
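A custom evaluation criterion can be as simple as a function that scores an output. This keyword-coverage check is one hypothetical example of a relevance scorer, not a method the platform prescribes:

```python
def relevance_score(response: str, required_terms: list) -> float:
    """Fraction of required terms that appear in the response (0.0 to 1.0)."""
    text = response.lower()
    hits = sum(1 for term in required_terms if term.lower() in text)
    return hits / len(required_terms) if required_terms else 1.0

score = relevance_score(
    "Reset your password from the settings page.",
    ["password", "settings"],
)
```

Real evals would typically combine several such criteria (relevance, consistency, hallucination checks) into a per-output scorecard.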

Using Traces for Debugging

When a workflow fails or produces unexpected results:

  1. Find the run — Locate the specific execution
  2. View the trace — See the full execution path
  3. Inspect nodes — Check inputs/outputs at each step
  4. Identify the issue — Find where things went wrong
  5. Fix and retest — Update the workflow and verify
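Steps 1–4 amount to scanning the trace for the first node that failed. A sketch, assuming spans are plain records with a node name, status, and output (the field names are illustrative):

```python
def first_failure(spans: list):
    """Return the first span that errored, or None if the run succeeded."""
    for span in spans:
        if span["status"] == "error":
            return span
    return None

# A hypothetical failed run: the summarize node timed out,
# so the downstream reply node never executed.
spans = [
    {"node": "fetch_docs", "status": "ok", "output": ["doc1", "doc2"]},
    {"node": "summarize", "status": "error", "output": "TimeoutError"},
    {"node": "reply", "status": "skipped", "output": None},
]
bad = first_failure(spans)
```

Inspecting `bad` gives both where the run broke (`node`) and why (`output`), which is exactly the information needed before fixing and retesting.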

Cost Tracking

Monitor AI spending across your workflows:

  • Token usage by model
  • Cost breakdown by workflow
  • Trends over time
  • Budget alerts
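Cost estimates are derived from token counts and per-model pricing. A sketch with an invented model name and invented prices; real rates come from your provider's pricing page:

```python
# Hypothetical prices in dollars per 1K tokens (NOT real provider rates).
PRICES = {
    "model-a": {"input": 0.0005, "output": 0.0015},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated spend for one call: tokens scaled by per-1K-token rates."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# 12K prompt tokens + 3K completion tokens on the hypothetical model.
cost = estimate_cost("model-a", 12_000, 3_000)
```

Summing these per-call estimates by workflow gives the cost breakdown; bucketing them by day gives the trend line.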

Performance Optimization

Use monitoring data to improve workflows:

  • Identify slow nodes
  • Optimize prompt lengths
  • Choose efficient models
  • Cache repeated operations
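Caching repeated operations is the cheapest of these optimizations when workflows re-issue identical prompts. A sketch using Python's standard `functools.lru_cache`; `expensive_call` is a stand-in for a real (billed) model call:

```python
import functools

calls = {"count": 0}  # tracks how many billed calls actually happen

def expensive_call(model: str, prompt: str) -> str:
    """Stand-in for a real LLM request that costs tokens and latency."""
    calls["count"] += 1
    return f"answer to: {prompt}"

@functools.lru_cache(maxsize=1024)
def cached_call(model: str, prompt: str) -> str:
    # Identical (model, prompt) pairs are served from memory
    # instead of being re-billed.
    return expensive_call(model, prompt)

first = cached_call("model-a", "What is 2+2?")
second = cached_call("model-a", "What is 2+2?")  # cache hit, no new call
```

Exact-match caching like this only helps for literally identical prompts; prompts containing timestamps or user IDs will never hit the cache.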

Alerting

Set up notifications for:

  • Workflow failures
  • High error rates
  • Cost thresholds
  • Performance degradation
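The alert conditions above can be expressed as threshold checks over a recent window of runs. A minimal sketch with invented thresholds and record fields; a real setup would wire the returned alerts into a notification channel:

```python
def check_alerts(window: list, error_rate_max: float = 0.05,
                 p95_max_ms: float = 2000.0) -> list:
    """window: recent runs as {"ok": bool, "latency_ms": float}.
    Returns the list of alert conditions currently triggered."""
    alerts = []
    if not window:
        return alerts
    errors = sum(1 for run in window if not run["ok"])
    if errors / len(window) > error_rate_max:
        alerts.append("high error rate")
    ranked = sorted(run["latency_ms"] for run in window)
    if ranked[int(0.95 * len(ranked)) - 1] > p95_max_ms:
        alerts.append("latency degradation")
    return alerts

# Ten hypothetical runs: two failures (20% error rate), normal latency.
window = [{"ok": i % 5 != 0, "latency_ms": 100.0} for i in range(10)]
alerts = check_alerts(window)
```

Evaluating this on a sliding window (rather than per-run) keeps one-off blips from paging anyone while still catching sustained failures.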