CRM Lead Validation Report — March 11, 2026
Status update on the automated validation pipeline for upsell CRM leads. Each lead is checked against fresh Weld API data and the marketing playbook before emails can be sent.
Pipeline Status
```mermaid
graph LR
V["Validated<br/>6 leads"] --> B["Blocked<br/>1 lead"]
B --> R["Remaining<br/>~94 leads"]
style V fill:#e8f5e9,stroke:#2e7d32,color:#1b5e20
style B fill:#fce4ec,stroke:#c62828,color:#b71c1c
style R fill:#e1f5fe,stroke:#1565c0,color:#0d47a1
```

| # | Lead | Tier | Angle | Verdict | Key Issue |
|---|---|---|---|---|---|
| 1 | Freshly Picked | A | Budget Optimization | Validated | ROAS was 4.86x, not 2.36x — tier changed from B to A, email rewritten with a congratulatory tone |
| 2 | Wondersauce | B | Budget Optimization | Validated | Tokens expired. Removed PMax-vs-Search comparison, added LTV question for Non-Brand |
| 3 | Route One | A | Budget Optimization | Validated | Removed DemGen vs Shopping comparison, corrected "Likely Agency" to Normal User |
| 4 | Long Point Digital | B | Budget Optimization | Validated | Explained WHY P-Max fails (Display/YouTube traffic), removed Brand Search comparison |
| 5 | Uncommon Insights | — | — | Blocked | No report exists, all tokens expired, email is 100% generic |
| 6 | Arab Gift Card | A | Budget Optimization | Validated | 4 playbook violations — problem-framed a 32.84x ROAS account, flagged $229 test campaign |
| 7 | ZoomLion | C | Tracking Investigation | Validated | Removed Search vs Awareness comparison (AWA campaign is intentional) |
Playbook Violations Found
The validation process caught recurring patterns of playbook non-compliance in auto-generated emails. These are the most common issues.
1. Tone Mismatch (Playbook Principle #1)
"Assess overall account health first. A 30x ROAS account doesn't need problem framing."
Arab Gift Card had 32.84x ROAS — one of the best accounts in the CRM — yet the email opened with "spotted something worth flagging." At 32.84x, the email should congratulate and then offer an interesting observation. The Demand Gen campaign (4,796 conversions at 2.14x) was flagged as a problem when it's likely an intentional volume play.
Freshly Picked had 4.86x ROAS (originally reported as 2.36x due to stale data), which changed the entire email tone from problem-framing to congratulatory.
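The tone check lends itself to automation. A minimal sketch, assuming hypothetical cutoffs (the playbook quote only fixes the high end; the helper name and the 4.0x/1.0x thresholds below are illustrative, not playbook-specified):

```python
def tone_for_roas(roas: float) -> str:
    """Map account-level ROAS to an email tone per Playbook Principle #1.

    Cutoffs are illustrative assumptions: healthy accounts get
    congratulation first, weak ones get investigative framing.
    """
    if roas >= 4.0:          # e.g. Arab Gift Card at 32.84x, Freshly Picked at 4.86x
        return "congratulatory"
    if roas >= 1.0:
        return "neutral-observational"
    return "investigative"   # e.g. ZoomLion at 0.31x (but see the lead gen caveat below)
```

Run against fresh data, this would have caught both violations in this section: Arab Gift Card maps to congratulatory, and Freshly Picked flips from investigative to congratulatory once the stale 2.36x is replaced by 4.86x.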
2. Cross-Funnel Comparisons (Playbook Principle #2)
"Understand campaign types. Don't compare across types."
This was the most common violation:
| Lead | Violation |
|---|---|
| Wondersauce | Compared PMax to Non-Brand Search (different funnel stages) |
| Route One | Compared DemGen to Shopping/Brand (awareness vs purchase intent) |
| Long Point Digital | Used Brand Search as foil for P-Max finding |
| ZoomLion | Flagged Display/Awareness campaign (literally named "AWA") for 0 conversions |
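One way to enforce the rule mechanically is to classify each campaign type into a funnel stage and reject any comparison across stages. A sketch, assuming the stage mapping below (inferred from the violations in this section, not a published playbook table):

```python
# Funnel stages inferred from the violations above; the mapping is an assumption.
FUNNEL_STAGE = {
    "Demand Gen": "top", "Display": "top", "Awareness": "top",
    "Performance Max": "mid", "Non-Brand Search": "mid",
    "Brand Search": "bottom", "Shopping": "bottom",
}

def comparison_allowed(type_a: str, type_b: str) -> bool:
    """Playbook Principle #2: don't compare campaign types across funnel stages."""
    return FUNNEL_STAGE[type_a] == FUNNEL_STAGE[type_b]
```

Under this mapping, Route One's DemGen-vs-Shopping pairing and Long Point's Brand-Search-vs-P-Max foil would both be rejected, while a Brand Search vs Shopping comparison would pass.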
```mermaid
graph TD
subgraph "Top of Funnel"
DG["Demand Gen / Display"]
AWA["Awareness (AWA)"]
end
subgraph "Mid Funnel"
PM["Performance Max"]
NB["Non-Brand Search"]
end
subgraph "Bottom of Funnel"
BR["Brand Search"]
SH["Shopping/Brand"]
end
DG -.->|"Don't compare across"| BR
AWA -.->|"Don't compare across"| SH
style DG fill:#e1f5fe,stroke:#1565c0
style AWA fill:#e1f5fe,stroke:#1565c0
style PM fill:#fff3e0,stroke:#e65100
style NB fill:#fff3e0,stroke:#e65100
style BR fill:#e8f5e9,stroke:#2e7d32
style SH fill:#e8f5e9,stroke:#2e7d32
```

3. Insignificant Hooks (Playbook Principle #3)
"Don't cite findings that affect less than 5% of total budget."
Arab Gift Card: The email flagged Sales-Shopping-March at $229 spend — just 1% of the $21.7K total budget. This is clearly test spend that the account manager already knows about.
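The materiality rule is straightforward to check. A sketch (the function name is hypothetical):

```python
def is_material(finding_spend: float, total_budget: float,
                threshold: float = 0.05) -> bool:
    """Playbook Principle #3: only cite findings affecting >= 5% of total budget."""
    return finding_spend / total_budget >= threshold

# Arab Gift Card's flagged campaign: $229 of a $21,700 budget (~1%),
# well below the 5% bar, so the hook should have been dropped.
is_material(229, 21_700)
```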
4. Linear Scaling Language (Playbook Principle #4)
"Don't assume linear scaling. Never promise specific revenue outcomes from reallocation."
Found in 4 of 7 leads:
- "budget rebalancing suggestions" (Route One)
- "specific recommendations on budget reallocation" (Long Point)
- "specific rebalancing recommendations" (Arab Gift Card)
- Reports contained explicit claims like "Shifting £1,800 could improve ROAS to 14.5-15.0x"
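A rough lint for this language in draft emails might look like the following; the pattern list is an assumption drawn from the four examples above, not an exhaustive rule set:

```python
import re

# Phrases observed in the offending leads; extend as new patterns surface.
SCALING_PATTERNS = [
    r"\brebalanc\w*",
    r"\breallocat\w*",
    r"could improve ROAS to [\d.]+",
    r"shifting [£$€][\d,]+",
]

def scaling_language_hits(email_body: str) -> list[str]:
    """Return the linear-scaling patterns found in a draft (Principle #4)."""
    return [p for p in SCALING_PATTERNS
            if re.search(p, email_body, re.IGNORECASE)]
```

All four leads listed above would produce at least one hit, including the explicit "Shifting £1,800 could improve ROAS to 14.5-15.0x" claim.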
5. Lead Gen ROAS Misinterpretation
ZoomLion showed 0.31x ROAS with 475 conversions. The auto-report treated this as broken tracking, but 475 conversions at ~$10 per conversion value is a classic lead gen pattern — the offline LTV is likely much higher. Campaign naming (CATARATA = cataracts, health/ophthalmology) confirmed this is a lead gen business.
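A hedged heuristic for this case, with thresholds that are illustrative assumptions (a real check would also need vertical and conversion-action data):

```python
def looks_like_lead_gen(roas: float, conversions: int,
                        avg_conv_value: float) -> bool:
    """Distinguish lead gen from broken tracking: many conversions at a small,
    uniform placeholder value suggest leads whose real LTV sits offline.
    Thresholds are illustrative assumptions, not playbook values.
    """
    return roas < 1.0 and conversions >= 100 and avg_conv_value <= 25

# ZoomLion: 0.31x ROAS, 475 conversions at ~$10 each -> lead gen, not broken tracking
looks_like_lead_gen(0.31, 475, 10.0)
```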
Data Accuracy Issues
Stale ROAS Data
| Lead | CRM ROAS | Fresh ROAS | Difference |
|---|---|---|---|
| Freshly Picked | 2.36x | 4.86x | +106% — tier changed from B to A |
Only 1 of 7 leads had fresh data available. The rest had expired Weld tokens (most from Oct 2025).
Token Expiration
```mermaid
pie title Weld Token Status
"Expired" : 6
"Fresh Data Available" : 1
```

All Google Ads OAuth tokens older than ~3 months return 500 from the Weld API. Meta tokens expire after 60 days. This means most CRM data is validated against report-time snapshots, not live data.
Classification Corrections
| Lead | Before | After | Reason |
|---|---|---|---|
| Route One | Likely Agency | Normal User | routeone.co.uk is the brand's own domain, Charlie Phillips is in-house |
Email Revision Patterns
Every validated email was rewritten. Common fixes applied:
- Added product feature framing — "We've been building an automated ad audit feature into GA Insights" (was missing from all original 5 leads)
- Removed cross-funnel comparisons — let findings stand on their own merits
- Removed "rebalancing" language — replaced with observations and questions
- Matched tone to account health — congratulatory for high ROAS, investigative for low ROAS
- Added "why" explanations — e.g., P-Max at $0.17 CPC means Display/YouTube traffic, not Shopping
What's Next
- ~94 leads remaining in the validation queue
- The /validate-lead skill runs every 5 minutes via cron, processing one lead per cycle
- Blocked leads (like Uncommon Insights) need fresh Weld tokens before they can proceed
- Reports with playbook violations in the HTML itself (e.g., linear scaling claims in recommendations) should be flagged for regeneration before sending
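The per-cycle flow above can be sketched roughly as follows; the field names and statuses are hypothetical simplifications, not the real CRM schema or skill internals:

```python
def validate_lead(lead: dict) -> str:
    """One /validate-lead cycle, heavily simplified: block on stale data,
    flag playbook violations for rewrite, otherwise validate for sending.
    Field names are hypothetical, not the real CRM schema.
    """
    if not lead.get("fresh_data"):
        return "blocked"        # e.g. Uncommon Insights: all tokens expired
    if lead.get("violations"):
        return "rewrite"        # e.g. Arab Gift Card: 4 playbook violations
    return "validated"

# Illustrative statuses only, not the actual per-lead data:
leads = [
    {"name": "Uncommon Insights", "fresh_data": False},
    {"name": "Arab Gift Card", "fresh_data": True, "violations": ["tone mismatch"]},
]
[validate_lead(l) for l in leads]   # ["blocked", "rewrite"]
```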