# Telemetry
When something breaks in a mobile app, the first question is always: was it the release or the flag? With separate tools for deployment and feature flags, you’re left guessing. AppDispatch owns both pipelines, so it can answer that automatically.
> **Crash spike detected**
> Runtime version: 49.0.0 · Flag: `new-checkout = true` · Channel: production · Affected devices: 4%
Instead of digging through logs to correlate a crash with a deploy, AppDispatch surfaces the exact flag variation, release version, and channel in one view. This is cross-dimensional attribution — every error is tagged with the device’s flag state and release version at the moment it occurred.
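Conceptually, cross-dimensional attribution just means stamping every error report with the device's flag snapshot and release metadata at the moment of capture. A minimal sketch of that idea (the `TaggedError` shape and field names are illustrative, not AppDispatch's actual payload format):

```typescript
// Illustrative shapes -- not AppDispatch's real payload format.
type FlagSnapshot = Record<string, string | boolean>;

interface TaggedError {
  message: string;
  occurredAt: string;     // ISO timestamp
  runtimeVersion: string; // release the device is running
  channel: string;        // production, staging, ...
  flags: FlagSnapshot;    // flag state when the error occurred
}

// Capture an error together with the device's current context, so the
// backend can correlate it without joining separate log streams.
function tagError(
  message: string,
  runtimeVersion: string,
  channel: string,
  flags: FlagSnapshot,
): TaggedError {
  return {
    message,
    occurredAt: new Date().toISOString(),
    runtimeVersion,
    channel,
    // Copy the snapshot so later flag flips don't mutate the record.
    flags: { ...flags },
  };
}
```

Because the flag state travels with each error rather than living in a separate analytics stream, no after-the-fact log correlation is needed.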
## Overview metrics
The telemetry dashboard shows four summary metrics, weighted by flag variation:
| Metric | Description |
|---|---|
| Devices tracked | Total devices weighted by active flag variations |
| Weighted error rate | Error rate across all devices, weighted by variation population size |
| Crash-free rate | Percentage of sessions without native crashes, weighted the same way |
| Active issues | Number of currently open correlated events |
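"Weighted by variation population size" can be read as a device-weighted average over per-variation slices, so that a tiny 1% rollout cannot dominate the headline number. A sketch of that interpretation (the `VariationSlice` shape is an assumption, not AppDispatch's API):

```typescript
// Illustrative slice shape -- one row per active flag variation.
interface VariationSlice {
  devices: number;   // devices on this flag variation
  errorRate: number; // fraction of sessions with errors, 0..1
  crashFree: number; // fraction of sessions without native crashes, 0..1
}

// Average a per-slice metric, weighting each slice by its device count.
function weightedAverage(
  slices: VariationSlice[],
  metric: (s: VariationSlice) => number,
): number {
  const totalDevices = slices.reduce((sum, s) => sum + s.devices, 0);
  if (totalDevices === 0) return 0;
  return (
    slices.reduce((sum, s) => sum + metric(s) * s.devices, 0) / totalDevices
  );
}
```

The weighted error rate is then `weightedAverage(slices, s => s.errorRate)`, and the crash-free rate is `weightedAverage(slices, s => s.crashFree)`.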
## Error rate over time
An area chart showing error rate trends over 7, 14, or 30 days. Spikes are visually obvious and can be cross-referenced with the correlated events below.
## Flag evaluations over time
A bar chart showing daily flag evaluation counts. Useful for spotting sudden drops (stale clients not polling) or spikes (new feature rollout driving evaluation volume).
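A sudden-drop check of this kind can be approximated by comparing each day's count against a trailing average. A rough heuristic sketch (the 7-day window and 0.5 threshold are illustrative choices, not AppDispatch's detection rules):

```typescript
// Flag the indices of days whose evaluation count falls below a fraction
// of the trailing average -- a rough proxy for "stale clients stopped
// polling". Window and threshold are illustrative defaults.
function suddenDrops(
  dailyCounts: number[],
  window = 7,
  threshold = 0.5,
): number[] {
  const drops: number[] = [];
  for (let i = window; i < dailyCounts.length; i++) {
    const trailing = dailyCounts.slice(i - window, i);
    const avg = trailing.reduce((a, b) => a + b, 0) / window;
    if (avg > 0 && dailyCounts[i] < avg * threshold) drops.push(i);
  }
  return drops;
}
```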
## Correlated events
AppDispatch automatically detects anomalies and attributes them to specific flag variations and release versions:
| Field | Description |
|---|---|
| Event type | `crash_spike`, `error_spike`, `latency_spike`, `adoption_drop` |
| Severity | `critical`, `warning`, `info` |
| Status | `incident`, `degraded`, `healthy` |
| Flag variation | Which flag + variation is correlated with the anomaly |
| Runtime version | Which release version is correlated |
Each correlated event tells you exactly what combination of code and configuration triggered the issue — so you can revert a single flag or roll back a release with confidence, not guesswork.
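The detection step can be pictured as comparing each (flag variation, runtime version) slice against a baseline error rate and emitting an event when it spikes. A sketch under assumed shapes and thresholds (the 2x/5x multipliers are illustrative, not AppDispatch's actual rules):

```typescript
// Illustrative shapes -- not AppDispatch's real data model.
interface Slice {
  flag: string;
  variation: string;
  runtimeVersion: string;
  errorRate: number; // 0..1
}

interface CorrelatedEvent {
  eventType: "error_spike";
  severity: "critical" | "warning";
  flagVariation: string;
  runtimeVersion: string;
}

// Emit an error_spike event for each slice whose error rate exceeds the
// baseline by an assumed multiplier (2x = warning, 5x = critical).
function detectErrorSpikes(
  slices: Slice[],
  baseline: number,
): CorrelatedEvent[] {
  const events: CorrelatedEvent[] = [];
  for (const s of slices) {
    if (s.errorRate >= baseline * 2) {
      events.push({
        eventType: "error_spike",
        severity: s.errorRate >= baseline * 5 ? "critical" : "warning",
        flagVariation: `${s.flag} = ${s.variation}`,
        runtimeVersion: s.runtimeVersion,
      });
    }
  }
  return events;
}
```

Because each slice already carries both the flag variation and the runtime version, the resulting event points at the exact code-plus-configuration combination, which is what makes a single-flag revert possible.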
## Flag impact matrix
A table that slices health metrics by flag variation, update version, and channel:
| Column | Description |
|---|---|
| Flag / Variation | Which flag and which variation value |
| Update | Runtime version the devices are running |
| Channel | Which channel (production, staging, etc.) |
| Devices | Number of devices in this slice |
| Error rate | Error rate with a delta badge showing change from baseline |
| Crash-free | Crash-free percentage (highlighted red if below 99%) |
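One way to picture how such rows are derived: group session stats by (flag, variation, update, channel) and compute each slice's error rate, its delta against a baseline, and whether crash-free dips below the 99% highlight threshold. All shapes and names here are illustrative, not AppDispatch's API:

```typescript
// Illustrative input: pre-aggregated stats for one matrix slice.
interface SessionStats {
  flag: string;
  variation: string;
  update: string; // runtime version
  channel: string;
  devices: number;
  sessions: number;
  errorSessions: number;
  crashes: number;
}

interface MatrixRow {
  key: string;        // flag = variation @ update @ channel
  devices: number;
  errorRate: number;
  errorDelta: number; // change vs baseline, for the delta badge
  crashFree: number;
  belowTarget: boolean; // crash-free < 99% -> highlighted red
}

// Turn raw per-slice stats into matrix rows; baselineErrorRate stands in
// for the fleet-wide rate the delta badge compares against.
function buildMatrix(
  stats: SessionStats[],
  baselineErrorRate: number,
): MatrixRow[] {
  return stats.map((s) => {
    const errorRate = s.sessions === 0 ? 0 : s.errorSessions / s.sessions;
    const crashFree = s.sessions === 0 ? 1 : 1 - s.crashes / s.sessions;
    return {
      key: `${s.flag} = ${s.variation} @ ${s.update} @ ${s.channel}`,
      devices: s.devices,
      errorRate,
      errorDelta: errorRate - baselineErrorRate,
      crashFree,
      belowTarget: crashFree < 0.99,
    };
  });
}
```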
### Filtering
Filter the matrix by:
- Flag — Isolate a specific flag to compare its variations
- Channel — Focus on production vs staging
- Time range — 7, 14, or 30 days
### Reading the matrix
The flag impact matrix answers questions like:
- “Is the `new-checkout = true` variation crashing more than `false`?” — Compare error rates across rows for the same flag
- “Did runtime version 49 introduce a regression?” — Compare rows with different runtime versions for the same flag state
- “Is the issue specific to production or also on staging?” — Filter by channel
This is the surface that makes linked flags and rollout policies actionable — you’re not just deploying progressively, you’re measuring the impact of each variation at each stage.