Optimization

Reading AI Anomaly Explanations and Acting on Them

How to interpret Blueprint's 16 anomaly detectors, understand severity levels, and use Gemini-powered explanations to decide your next move.

Last updated: Mar 10, 2026 · 8 min read
TL;DR
  • Blueprint runs 16 parallel anomaly detectors using z-scores and moving averages against 14-day rolling baselines -- no machine learning, pure statistical analysis.
  • Severity levels (CRITICAL, HIGH, MEDIUM, LOW) are determined by z-score magnitude and directly map to color-coded badges in the UI.
  • The AI Explain feature (PRO-only, opt-in) sends insight context to Gemini 2.5 Flash, which returns structured JSON with a summary, recommendations, and per-insight explanations.
  • Use tab filters, severity sorting, and batch explanations to triage efficiently across hundreds of insights.

How Anomaly Detection Works

Blueprint's anomaly detection system is built on statistical analysis, not machine learning. Every time your ad platform data syncs, 16 parallel detectors analyze your spend, conversion, and performance data looking for deviations from expected behavior. Each detector operates independently, focusing on a specific aspect of your account health, from sudden CPA spikes to gradual Quality Score erosion.

The core mechanism behind most detectors is the z-score calculation against a 14-day rolling baseline. Blueprint computes the mean and standard deviation of each metric over the trailing 14 days, then measures how far today's value deviates from that baseline in terms of standard deviations. A z-score of 2.0 means the current value is two standard deviations from the norm -- statistically unusual enough to warrant attention. Some detectors use moving averages instead of or in addition to z-scores, particularly those that track trends over longer windows.
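The calculation described above can be sketched in a few lines of Python. This is an illustrative reconstruction of the statistical idea, not Blueprint's actual code; the function name and sample values are invented for the example.

```python
from statistics import mean, stdev

def z_score(today: float, baseline: list[float]) -> float:
    """How many standard deviations today's value sits from the trailing baseline."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        return 0.0  # a perfectly flat baseline gives no meaningful deviation
    return (today - mu) / sigma

# 14 trailing days of CPA hovering around $20
baseline = [19.5, 20.1, 20.4, 19.8, 20.0, 21.2, 19.7,
            20.3, 19.9, 20.6, 20.2, 19.6, 20.8, 20.0]
z = z_score(21.3, baseline)  # a modest bump above the baseline
```

Because the baseline here is tight, even a small absolute change produces a z-score above 2.0 -- which is why stable metrics get flagged on deviations that would be invisible in a noisier account.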

All of this analysis runs locally within Blueprint's background job queue. Your data never leaves the system for anomaly detection purposes. The detectors are deterministic -- given the same input data, they will always produce the same results. This is a deliberate design choice: statistical methods are transparent, auditable, and do not require training data or model tuning. You can always understand exactly why an anomaly was flagged by looking at the underlying numbers.

The 16 Detectors Explained

Blueprint's detectors fall into several logical groups. The metric anomaly group covers the core performance indicators: CPA spikes or drops, CTR deviations, unusual spend patterns, conversion volume changes, and ROAS fluctuations. These are the most commonly triggered detectors because they monitor the metrics you look at every day. Each one flags when the current value deviates significantly from the 14-day rolling baseline.

The pacing and budget group includes three detectors. The pacing alert detector watches your month-to-date spend trajectory against budget targets and flags when you are on track to significantly overshoot or undershoot. The budget capping detector identifies campaigns that have hit 95% or more of their daily budget for three or more consecutive days, which means you are likely losing impression share to budget constraints. The wasted campaign spend detector identifies campaigns where spend is flowing but conversions are absent or far below expected levels.
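The budget capping rule is a simple consecutive-day check. The sketch below is an assumption about how such a rule might work, with invented names and data; the 95% threshold and 3-day trigger come from the description above.

```python
def budget_cap_streak(spend_fraction_by_day: list[float], threshold: float = 0.95) -> int:
    """Length of the current run of days at or above the daily-budget cap threshold."""
    streak = 0
    for fraction in reversed(spend_fraction_by_day):  # most recent day last
        if fraction >= threshold:
            streak += 1
        else:
            break
    return streak

# Last 7 days of spend as a fraction of daily budget
days = [0.80, 0.97, 0.60, 0.96, 0.99, 1.00, 0.98]
flagged = budget_cap_streak(days) >= 3  # detector fires at 3+ consecutive capped days
```

Note that only the current unbroken run counts: the 0.97 day early in the week is ignored because the 0.60 day reset the streak.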

The quality and relevance group handles Quality Score degradation (flagging drops of 2 or more points), search term waste via n-gram analysis (identifying wasteful query patterns across your search terms), and keyword cannibalization (detecting multiple keywords competing for the same traffic with a CPA gap greater than 50%). The trend and pattern group includes the trend detector (3+ consecutive months of directional movement), the change impact detector (comparing the 7-day windows before and after a significant change), the impression share erosion detector (losing more than 10% over a period), and the day-of-week outlier detector (a specific day's CPA at 2x or more the weekly average).
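The day-of-week outlier check is the easiest of these to picture. The following is an illustrative sketch under the 2x-weekly-average rule described above; the function name and sample data are invented.

```python
def day_of_week_outliers(cpa_by_day: dict[str, float], factor: float = 2.0) -> list[str]:
    """Days whose CPA is at least `factor` times the weekly average."""
    weekly_avg = sum(cpa_by_day.values()) / len(cpa_by_day)
    return [day for day, cpa in cpa_by_day.items() if cpa >= factor * weekly_avg]

week = {"Mon": 18.0, "Tue": 20.0, "Wed": 19.0, "Thu": 21.0,
        "Fri": 22.0, "Sat": 95.0, "Sun": 24.0}
```

Here Saturday's CPA is roughly three times the weekly average, so it would be the only flagged day.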

Finally, the strategic group covers geo targeting mismatches (spend flowing to locations outside your target areas), creative fatigue (declining performance on long-running ads), bid strategy misalignment (when your conversion volume does not meet the 30 or 50 conversion thresholds that automated bidding strategies need to optimize), and brand drift (when brand and non-brand performance metrics diverge for 3 or more consecutive months).

Understanding Severity Levels

Every anomaly Blueprint surfaces is assigned one of four severity levels: CRITICAL (red), HIGH (amber), MEDIUM (blue), and LOW (gray). These levels are determined primarily by the z-score magnitude of the deviation. A z-score above 3.0 typically triggers a CRITICAL severity because it represents a value more than three standard deviations from the mean -- something that would occur by chance less than 0.3% of the time. HIGH severity generally corresponds to z-scores between 2.5 and 3.0, MEDIUM to 2.0-2.5, and LOW to values that are notable but not extreme.
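As a sketch, the z-score-to-severity mapping described above looks like this (the thresholds are taken from the article; the exact boundary handling is an assumption):

```python
def severity_from_z(z: float) -> str:
    """Map z-score magnitude to a severity badge, using the thresholds above."""
    magnitude = abs(z)  # spikes and drops are treated symmetrically
    if magnitude > 3.0:
        return "CRITICAL"
    if magnitude >= 2.5:
        return "HIGH"
    if magnitude >= 2.0:
        return "MEDIUM"
    return "LOW"
```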

Some detectors apply additional context to severity assignment. For example, the budget capping detector escalates severity based on how many consecutive days the cap has been hit, not just the percentage. A campaign hitting its budget cap for 3 days might be MEDIUM, but 7+ consecutive days pushes it to CRITICAL because the impact on overall performance compounds over time. Similarly, the Quality Score degradation detector considers the absolute score after the drop -- losing 2 points from a QS of 9 to 7 is HIGH, but losing 2 points from 5 to 3 is CRITICAL because you are now in the penalty zone where CPCs increase substantially.

The severity badge appears prominently on every insight card in the dashboard, making it easy to visually scan for the most urgent issues. Blueprint defaults to sorting insights by severity (CRITICAL first) and then by recency, so the most important issues always appear at the top of your feed.

The AI Explain Feature

While Blueprint's statistical detectors tell you what happened, the AI Explain feature helps you understand why it might have happened and what to do about it. This is a PRO-only feature that is entirely opt-in -- Blueprint never sends your data to any external AI service unless you explicitly click the Explain button. When you do, Blueprint packages the insight context (the anomaly data, relevant metrics, account context, and platform type) and sends it to Gemini 2.5 Flash for analysis.

Gemini returns a structured JSON response containing three components: a plain-language summary of the anomaly, a set of actionable recommendations tailored to the specific ad platform, and per-insight explanations when multiple related anomalies are being analyzed together. The recommendations are platform-aware, meaning a CPA spike on Google Ads will generate different suggestions than the same spike on Meta Ads, because the available optimization levers differ between platforms.
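A response with those three components might look like the following. The field names and content here are hypothetical illustrations of the described shape, not Blueprint's actual schema.

```python
import json

# Hypothetical response shape -- field names are illustrative, not Blueprint's schema.
raw = """{
  "summary": "CPA rose sharply after Quality Score declines pushed CPCs up.",
  "recommendations": [
    "Review ad relevance in the affected ad groups",
    "Test fresh headlines against the declining creative"
  ],
  "insight_explanations": {
    "cpa_spike_001": "Driven by a CPC increase rather than a conversion-rate drop."
  }
}"""

explanation = json.loads(raw)
```

Structured JSON rather than free text is what lets the UI render the summary, recommendations, and per-insight notes in separate, consistent panels.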

AI Explain requests are rate-limited to 10 per hour per workspace to manage API costs and prevent abuse. Each explanation is cached with a 7-day TTL, so if you or a teammate requests an explanation for the same insight within that window, Blueprint serves the cached version instantly without making another API call. The explanation history is accessible from the History tab in the Insights panel, giving you a record of every AI-generated analysis for your workspace.

Batch Explaining Insights

When you have multiple anomalies to investigate, explaining them one at a time is inefficient. Blueprint supports batch explanations through a multi-select interface. Check the boxes next to the insights you want to analyze, and click the "Explain Selected (N)" button that appears in the toolbar. Blueprint bundles the selected insights into a single request, and Gemini analyzes them together, which often produces better explanations because it can identify connections between related anomalies.

For example, if you select a CPA spike, a CTR drop, and a Quality Score degradation that all occurred on the same campaign within the same timeframe, the batch explanation can connect these dots and explain that a Quality Score drop likely increased your CPCs, which raised CPA, while the CTR decline suggests your ad relevance decreased -- possibly due to a competitor entering the auction or a seasonal shift in search intent. This kind of cross-anomaly analysis is far more useful than three separate, isolated explanations.

Batch explanations still count against the 10 requests per hour rate limit, but as a single request rather than one per insight. This makes batch mode not only more insightful but also more efficient with your rate limit allocation.
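Conceptually, batching bundles the selected insights into one payload. The request shape below is a hypothetical illustration, not Blueprint's actual wire format.

```python
def build_batch_request(insights: list[dict]) -> dict:
    """Bundle selected insights into one explain request -- one rate-limit slot total."""
    return {
        "mode": "batch",
        "insights": [
            {"id": i["id"], "detector": i["detector"], "severity": i["severity"]}
            for i in insights
        ],
    }

selected = [
    {"id": "a1", "detector": "cpa_spike", "severity": "CRITICAL"},
    {"id": "a2", "detector": "ctr_drop", "severity": "HIGH"},
    {"id": "a3", "detector": "qs_degradation", "severity": "HIGH"},
]
request = build_batch_request(selected)
```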

Filtering and Triaging

The Insights panel provides four primary tab filters: All Insights, Anomalies, Recommendations, and History. The Anomalies tab shows only statistically detected anomalies, while Recommendations shows actionable suggestions that Blueprint derives from anomaly patterns. History shows all previously generated AI explanations, searchable by date and content. Switching between tabs instantly filters the insight feed without a page reload.

Below the tabs, a filter bar lets you narrow results by platform (Google Ads, Microsoft Ads, Meta Ads), specific ad account, severity level, and category. Category chips correspond to the detector groups described earlier: Metric, Pacing, Quality, Trend, and Strategic. You can combine multiple filters -- for example, showing only CRITICAL and HIGH severity anomalies from a specific Google Ads account in the Quality category. The active filter combination persists across sessions, so your preferred view is preserved when you return to the dashboard.

The default sort order is severity descending, then recency descending. This means CRITICAL anomalies always appear first, with the most recent CRITICAL items at the top. You can override this to sort by recency alone if you prefer a chronological view. The count badge on each tab shows how many items match the current filters, giving you a quick sense of volume before diving in.
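The default ordering is a straightforward two-key sort. A minimal sketch, with invented field names and sample data:

```python
SEVERITY_RANK = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}

def default_sort(insights: list[dict]) -> list[dict]:
    """Severity first (CRITICAL at top), most recent first within each severity."""
    return sorted(insights, key=lambda i: (SEVERITY_RANK[i["severity"]], -i["detected_at"]))

feed = [
    {"id": "x", "severity": "HIGH", "detected_at": 300},
    {"id": "y", "severity": "CRITICAL", "detected_at": 100},
    {"id": "z", "severity": "CRITICAL", "detected_at": 200},
]
```

Sorting `feed` puts the newer CRITICAL item first, the older CRITICAL item second, and the HIGH item last -- even though the HIGH item is the most recent overall.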

Acting on Insights

The recommended workflow for acting on insights follows a triage-investigate-act pattern. Start each morning by opening the Insights panel filtered to CRITICAL and HIGH severity. Scan the list to understand the scope: are there one or two urgent issues, or a cluster of related anomalies that suggest a systemic problem? If you have PRO, use batch explain to get AI analysis on the critical items first.

For each CRITICAL insight, investigate the underlying data. Click through to the campaign or keyword view to see the full metric history. Compare the flagged period against the 14-day baseline that triggered the anomaly. Look for external factors that might explain the deviation: did you make changes to bids or budgets recently? Did a competitor launch a new campaign? Did seasonal demand shift? The AI explanation, if available, often surfaces these possibilities and saves you investigation time.

Once you understand the root cause, take action in your ad platform. Blueprint is a monitoring and analysis tool -- it surfaces the problems and explains them, but the actual changes happen in Google Ads, Microsoft Ads, or Meta Ads directly. Common actions include adjusting bids on keywords with CPA spikes, pausing campaigns with sustained wasted spend, adding negative keywords identified by the search term waste detector, and reallocating budget from capped campaigns to underspending ones. After making changes, monitor the change impact detector over the following 7 days to see whether your intervention had the desired effect.

Key Takeaways
  • 16 detectors run in parallel using z-scores and moving averages against 14-day baselines -- no ML, fully deterministic and auditable.
  • Severity levels (CRITICAL/HIGH/MEDIUM/LOW) are driven by z-score magnitude and contextual escalation rules.
  • AI Explain (PRO-only, opt-in) uses Gemini 2.5 Flash with 10 req/hour rate limit and 7-day cached history.
  • Batch explain lets you analyze multiple related anomalies in a single request for richer cross-anomaly context.
  • Follow a triage-investigate-act workflow: filter by severity, understand root causes, then make changes in your ad platform.
Related articles
  • AI Insights -- explore the full AI Insights feature with anomaly detection and explanations
  • Quality Scores -- track keyword-level Quality Scores with historical trend analysis

Ready to let Blueprint surface your blind spots?

16 detectors, one dashboard, zero guesswork. Start with the Free tier -- upgrade to PRO for AI explanations.

No credit card required · Free tier available · Free Viewer seats for clients · Cancel anytime