
Telemetry

When something breaks in a mobile app, the first question is always: was it the release or the flag? With separate tools for deployment and feature flags, you’re left guessing. AppDispatch owns both pipelines, so it can answer that automatically.

Crash spike detected

Runtime version: 49.0.0 · Flag: new-checkout = true · Channel: production · Affected devices: 4%

Instead of digging through logs to correlate a crash with a deploy, AppDispatch surfaces the exact flag variation, release version, and channel in one view. This is cross-dimensional attribution — every error is tagged with the device’s flag state and release version at the moment it occurred.
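The attribution described above can be sketched as follows. This is a minimal illustration, not AppDispatch's actual SDK API — the type and function names are hypothetical:

```typescript
// Hypothetical sketch of cross-dimensional attribution: every error carries
// the device's flag state, release version, and channel at the moment it
// occurred, so later dashboards can slice by any of those dimensions.
interface AttributedError {
  message: string;
  runtimeVersion: string;             // e.g. "49.0.0"
  channel: string;                    // e.g. "production"
  flagState: Record<string, boolean>; // flag values at the moment of the error
  occurredAt: Date;
}

function attribute(
  message: string,
  runtimeVersion: string,
  channel: string,
  flagState: Record<string, boolean>,
): AttributedError {
  // Copy the flag state so later flag changes can't rewrite history.
  return {
    message,
    runtimeVersion,
    channel,
    flagState: { ...flagState },
    occurredAt: new Date(),
  };
}

const err = attribute("checkout crash", "49.0.0", "production", {
  "new-checkout": true,
});
// err now pins the crash to new-checkout = true on 49.0.0 in production.
```

The key design point is the snapshot: the flag state is recorded per event, not looked up later, so flipping a flag after the fact doesn't change how historical errors are attributed.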

Overview metrics

The telemetry dashboard shows four summary metrics, weighted by flag variation:

| Metric | Description |
| --- | --- |
| Devices tracked | Total devices, weighted by active flag variations |
| Weighted error rate | Error rate across all devices, weighted by variation population size |
| Crash-free rate | Percentage of sessions without native crashes, weighted the same way |
| Active issues | Number of currently open correlated events |
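The population weighting can be sketched like this; the `VariationSlice` shape is illustrative, not AppDispatch's actual data model:

```typescript
// A minimal sketch of the "weighted error rate" metric, assuming each flag
// variation contributes its error rate in proportion to its device count.
interface VariationSlice {
  variation: string; // e.g. "true" or "false"
  devices: number;   // devices currently on this variation
  errors: number;    // errors reported by those devices
  requests: number;  // total requests from those devices
}

function weightedErrorRate(slices: VariationSlice[]): number {
  const totalDevices = slices.reduce((sum, s) => sum + s.devices, 0);
  if (totalDevices === 0) return 0;
  // Each variation's error rate is weighted by its share of the population,
  // so a noisy 1% rollout can't dominate the headline number.
  return slices.reduce(
    (acc, s) => acc + (s.devices / totalDevices) * (s.errors / s.requests),
    0,
  );
}

const rate = weightedErrorRate([
  { variation: "true", devices: 200, errors: 10, requests: 1000 }, // 1% rate
  { variation: "false", devices: 800, errors: 4, requests: 2000 }, // 0.2% rate
]);
// rate ≈ 0.0036, i.e. 0.2 × 1% + 0.8 × 0.2%
```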

Error rate over time

An area chart showing error rate trends over 7, 14, or 30 days. Spikes are visually obvious and can be cross-referenced with the correlated events below.

Flag evaluations over time

A bar chart showing daily flag evaluation counts. Useful for spotting sudden drops (stale clients not polling) or spikes (new feature rollout driving evaluation volume).
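A sudden drop of the kind described above can be detected mechanically. The 50% threshold and trailing-average window here are illustrative choices, not AppDispatch's actual rule:

```typescript
// Hypothetical sketch of flagging a sudden drop in daily flag-evaluation
// counts (e.g. stale clients that stopped polling).
function isSuddenDrop(dailyCounts: number[], threshold = 0.5): boolean {
  if (dailyCounts.length < 2) return false;
  const today = dailyCounts[dailyCounts.length - 1];
  const trailing = dailyCounts.slice(0, -1);
  const avg = trailing.reduce((a, b) => a + b, 0) / trailing.length;
  // "Sudden" here means today fell below threshold × the trailing average.
  return avg > 0 && today < threshold * avg;
}

isSuddenDrop([1000, 1100, 950, 400]); // true: 400 < 0.5 × ~1017
isSuddenDrop([1000, 1100, 950, 900]); // false: within normal range
```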

Correlated events

AppDispatch automatically detects anomalies and attributes them to specific flag variations and release versions:

| Field | Description |
| --- | --- |
| Event type | `crash_spike`, `error_spike`, `latency_spike`, `adoption_drop` |
| Severity | `critical`, `warning`, `info` |
| Status | `incident`, `degraded`, `healthy` |
| Flag variation | Which flag and variation the anomaly is correlated with |
| Runtime version | Which release version is correlated |

Each correlated event tells you exactly what combination of code and configuration triggered the issue — so you can revert a single flag or roll back a release with confidence, not guesswork.
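One way to picture the "revert a flag vs. roll back a release" decision is the sketch below. The record shape mirrors the fields in the table above; the decision heuristic itself is deliberately simplified and hypothetical, not AppDispatch's actual logic:

```typescript
// Event shape built from the field values documented above.
type EventType = "crash_spike" | "error_spike" | "latency_spike" | "adoption_drop";
type Severity = "critical" | "warning" | "info";
type Status = "incident" | "degraded" | "healthy";

interface CorrelatedEvent {
  type: EventType;
  severity: Severity;
  status: Status;
  flag: string;
  variation: string;      // flag variation correlated with the anomaly
  runtimeVersion: string; // release version correlated with the anomaly
}

// Illustrative heuristic: if every anomaly on a release points at a single
// flag variation, reverting that flag is the surgical fix; if multiple
// variations are affected, the release itself is the likelier culprit.
function suggestedAction(
  events: CorrelatedEvent[],
  version: string,
): "revert-flag" | "rollback-release" {
  const affected = events.filter((e) => e.runtimeVersion === version);
  const variations = new Set(affected.map((e) => `${e.flag}=${e.variation}`));
  return variations.size === 1 ? "revert-flag" : "rollback-release";
}
```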

Flag impact matrix

A table that slices health metrics by flag variation, update version, and channel:

| Column | Description |
| --- | --- |
| Flag / Variation | Which flag and which variation value |
| Update | Runtime version the devices are running |
| Channel | Which channel (production, staging, etc.) |
| Devices | Number of devices in this slice |
| Error rate | Error rate, with a delta badge showing the change from baseline |
| Crash-free | Crash-free percentage (highlighted red if below 99%) |
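The slicing behind the matrix can be sketched as a group-by over per-device samples. Field names are illustrative, not AppDispatch's actual data model, and each sample is assumed to be one device's aggregate:

```typescript
// Hypothetical sketch: build one matrix row per
// (flag, variation, runtime version, channel) slice.
interface DeviceSample {
  flag: string;
  variation: string;
  runtimeVersion: string;
  channel: string;
  errors: number;
  requests: number;
  crashFreeSessions: number;
  sessions: number;
}

interface MatrixRow {
  key: string;          // "flag=variation @ version / channel"
  devices: number;
  errorRate: number;
  crashFreePct: number;
}

function buildMatrix(samples: DeviceSample[]): MatrixRow[] {
  const groups = new Map<string, DeviceSample[]>();
  for (const s of samples) {
    const key = `${s.flag}=${s.variation} @ ${s.runtimeVersion} / ${s.channel}`;
    let group = groups.get(key);
    if (!group) {
      group = [];
      groups.set(key, group);
    }
    group.push(s);
  }
  return [...groups.entries()].map(([key, g]) => ({
    key,
    devices: g.length, // one sample per device in this sketch
    errorRate:
      g.reduce((a, s) => a + s.errors, 0) / g.reduce((a, s) => a + s.requests, 0),
    crashFreePct:
      (100 * g.reduce((a, s) => a + s.crashFreeSessions, 0)) /
      g.reduce((a, s) => a + s.sessions, 0),
  }));
}
```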

Filtering

Filter the matrix by:

  • Flag — Isolate a specific flag to compare its variations
  • Channel — Focus on production vs staging
  • Time range — 7, 14, or 30 days

Reading the matrix

The flag impact matrix answers questions like:

  • “Is the new-checkout = true variation crashing more than false?” — Compare error rates across rows for the same flag
  • “Did runtime version 49 introduce a regression?” — Compare rows with different runtime versions for the same flag state
  • “Is the issue specific to production or also on staging?” — Filter by channel
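The first of those comparisons can be expressed directly over matrix rows. The row shape here is a minimal stand-in for the real matrix:

```typescript
// Hypothetical sketch: which variation of a flag has the worst error rate?
interface Row {
  flag: string;
  variation: string;
  errorRate: number;
}

function worseVariation(rows: Row[], flag: string): string | null {
  const candidates = rows.filter((r) => r.flag === flag);
  if (candidates.length === 0) return null;
  // Pick the variation with the highest error rate for this flag.
  return candidates.reduce((a, b) => (b.errorRate > a.errorRate ? b : a)).variation;
}

worseVariation(
  [
    { flag: "new-checkout", variation: "true", errorRate: 0.031 },
    { flag: "new-checkout", variation: "false", errorRate: 0.004 },
  ],
  "new-checkout",
); // "true" — the true variation is crashing more
```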

This is the surface that makes linked flags and rollout policies actionable — you’re not just deploying progressively, you’re measuring the impact of each variation at each stage.
