Impact Analysis
Impact analysis compares metrics before and after an event to help you understand whether a deployment, rollback, or other change made things better or worse.
How It Works
- Event occurs — A deployment, rollback, or other change is recorded on the timeline
- Metrics are collected — Connected analytics providers continuously report metrics (error rates, response times, page views, etc.)
- Before/after comparison — OpsTrails compares metric values in a configurable window before and after the event (see the sketch after this list)
- Impact assessment — Significant changes are flagged, helping you correlate deployments with metric movements
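To make the before/after comparison concrete, here is a minimal sketch in TypeScript. The `MetricPoint` shape, the mean-based summary, and the 10% significance threshold are illustrative assumptions, not OpsTrails internals:

```typescript
// Minimal sketch of a before/after comparison (assumed types and
// threshold, not the actual OpsTrails implementation).
interface MetricPoint {
  timestamp: number; // Unix epoch milliseconds
  value: number;
}

function mean(values: number[]): number {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

function assessImpact(
  points: MetricPoint[],
  eventTime: number,
  windowMs: number,
): { before: number; after: number; changePct: number; significant: boolean } {
  // Keep only points inside the window on each side of the event.
  const before = points
    .filter((p) => p.timestamp >= eventTime - windowMs && p.timestamp < eventTime)
    .map((p) => p.value);
  const after = points
    .filter((p) => p.timestamp >= eventTime && p.timestamp <= eventTime + windowMs)
    .map((p) => p.value);

  if (before.length === 0 || after.length === 0) {
    throw new Error("not enough data points in one of the windows");
  }

  const beforeMean = mean(before);
  const afterMean = mean(after);
  const changePct = ((afterMean - beforeMean) / beforeMean) * 100;

  // Flag anything that moved more than 10% as significant
  // (an arbitrary threshold chosen for this example).
  return {
    before: beforeMean,
    after: afterMean,
    changePct,
    significant: Math.abs(changePct) > 10,
  };
}
```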
Comparison Windows
When querying impact, you can specify the comparison window size (a conversion sketch follows this list):
- 1h — Tight comparison for quick-impact changes
- 2h — Default window, good for most deployments
- 4h — Wider view for gradual impacts
- 6h, 12h, 24h — Broader windows for slow-rolling changes
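For illustration, a fixed set of window sizes is easy to model as a union type; the type and the converter below are assumptions for the sketch above, not the actual query API:

```typescript
// Illustrative only: model the supported window sizes and convert
// them to milliseconds for the filtering step shown earlier.
type ComparisonWindow = "1h" | "2h" | "4h" | "6h" | "12h" | "24h";

function windowToMs(window: ComparisonWindow = "2h"): number {
  const hours = parseInt(window, 10); // "12h" -> 12
  return hours * 60 * 60 * 1000;
}
```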
Using with AI
When connected via MCP, AI assistants automatically use the get_metrics_around_event tool to assess impact (a sketch of the underlying tool call appears after this list). You can ask:
- “Did the error rate spike after the last deployment?”
- “How did page load times change after the v2.1.0 release?”
- “Compare metrics before and after the rollback”
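Under the hood, an MCP tool invocation is a JSON-RPC `tools/call` request. The sketch below builds one for get_metrics_around_event; the argument names (`event_id`, `metric`, `window`) are assumptions about the tool's schema, not its documented interface:

```typescript
// Illustrative JSON-RPC payload for an MCP tools/call request.
// The argument names are assumed, not OpsTrails' documented schema.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_metrics_around_event",
    arguments: {
      event_id: "deploy-1234", // hypothetical event identifier
      metric: "error_rate",    // which metric to compare
      window: "2h",            // comparison window size
    },
  },
};
```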
✅ Tip
For the most accurate impact analysis, make sure your analytics provider metrics are mapped to the same subjects (environments) as your events.
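As a hypothetical illustration of that alignment, the event and its metric series should carry the same subject identifier so before/after queries read the right data; the field names below are assumptions, not OpsTrails' schema:

```typescript
// Hypothetical: event and metric series keyed by the same subject.
const event = { type: "deployment", subject: "production", time: Date.now() };
const series = {
  metric: "error_rate",
  subject: "production",
  points: [] as { timestamp: number; value: number }[],
};

// A mismatched subject (e.g. "staging" metrics against a "production"
// deploy) would make the before/after comparison meaningless.
console.assert(event.subject === series.subject);
```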
Example Metrics
Common metrics used for impact analysis:
- error_rate — Percentage of requests resulting in errors
- p50_latency — Median response time
- p99_latency — 99th percentile response time
- page_views — Total page views from analytics
- crash_free_rate — Percentage of sessions without crashes
- throughput — Requests per second
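To make the latency metrics concrete, a percentile like p50 or p99 can be computed from a sample with the nearest-rank method; this is one common choice, sketched here for illustration rather than how any particular provider computes it:

```typescript
// Nearest-rank percentile over a sample of response times (illustrative).
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const latenciesMs = [12, 15, 18, 22, 30, 45, 120, 380];
console.log(percentile(latenciesMs, 50)); // p50_latency (median): 22
console.log(percentile(latenciesMs, 99)); // p99_latency: 380

// error_rate as a percentage of failed requests (illustrative numbers).
const errorRate = (37 / 10_000) * 100; // 37 errors in 10,000 requests = 0.37%
```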