Impact analysis compares metrics before and after an event to help you understand whether a deployment, rollback, or other change made things better or worse.
When querying impact, you can specify the size of the comparison window used before and after the event; the example below uses a 2-hour window.
Here's what impact analysis looks like in practice. Suppose api-service v2.5.1 was deployed to production at 2:45 PM. OpsTrails compares metrics in a 2-hour window before and after the event:
| Metric | Before (12:45–2:45 PM) | After (2:45–4:45 PM) | Change |
|---|---|---|---|
| error_rate | 0.12% | 2.41% | +2.29% (20x increase) |
| p95_latency | 180ms | 420ms | +240ms (2.3x slower) |
| throughput | 1,200 req/s | 1,180 req/s | -1.7% (stable) |
Interpretation: The error rate and latency both jumped significantly after the deployment, while throughput remained stable. This pattern suggests the deployment introduced a bug that causes errors and slower responses, but hasn't affected overall traffic. This deployment is a strong candidate for rollback.
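The arithmetic behind the Change column is a simple delta and ratio between the before and after values. A minimal sketch (the function name is illustrative, not part of OpsTrails):

```python
def impact_summary(before: float, after: float) -> dict:
    """Summarize how a metric changed across an event window."""
    delta = after - before
    # Guard against a zero baseline when computing the ratio.
    ratio = after / before if before else float("inf")
    return {"delta": delta, "ratio": ratio}

# Error rate from the table: 0.12% before, 2.41% after
err = impact_summary(0.12, 2.41)
# delta ≈ +2.29 percentage points, ratio ≈ 20x
```

The same calculation applied to p95_latency (180ms → 420ms) yields the 2.3x slowdown shown above.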
The most useful metrics depend on what you're investigating:
- `error_rate` and `crash_free_rate` tell you whether the change introduced bugs. These are the most useful metrics for assessing deployment impact. Sentry provides crash_free_rate; Datadog and New Relic provide error_rate.
- `p50_latency`, `p95_latency`, and `p99_latency` reveal whether the change slowed things down. A spike in p99 with a stable p50 often indicates an edge case affecting a subset of requests.
- `page_views` and `throughput` show whether traffic is being served. A sudden drop in either can indicate a service outage. Google Analytics provides page_views; Datadog and New Relic provide throughput.

When an AI assistant is connected to OpsTrails via MCP, it uses the get_metrics_around_event tool to perform impact analysis automatically. This is the same tool that powers the before/after comparisons shown above. See the MCP Tools Reference for full tool documentation.
Example AI queries that trigger impact analysis include questions like "Did the api-service deployment at 2:45 PM make things worse?" The AI calls query_events to find the relevant event, then get_metrics_around_event with the event's timestamp and subject to retrieve the before/after metric comparison.
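The two-step flow can be sketched as the tool calls below. The tool names (query_events, get_metrics_around_event) come from this page; the argument field names, the timestamp, and the window format are illustrative assumptions — consult the MCP Tools Reference for the authoritative schema:

```python
# Step 1: find the relevant event.
# Field names and values are illustrative, not the documented schema.
find_event_args = {
    "tool": "query_events",
    "arguments": {"subject": "api-service", "type": "deployment"},
}

# Step 2: fetch the before/after metric comparison around that event.
impact_args = {
    "tool": "get_metrics_around_event",
    "arguments": {
        "timestamp": "2025-06-12T14:45:00Z",  # illustrative event time (2:45 PM)
        "subject": "api-service",
        "window": "2h",  # comparison window on each side of the event
    },
}
```

The response from the second call is what populates a before/after table like the one shown earlier.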
✅ Tip
Common metrics used for impact analysis:
- `error_rate` — Percentage of requests resulting in errors
- `p50_latency` — Median response time
- `p99_latency` — 99th percentile response time
- `page_views` — Total page views from analytics
- `crash_free_rate` — Percentage of sessions without crashes
- `throughput` — Requests per second