THE PROBLEM
Strategists were writing Monday-morning commentary instead of running media plans.
Every Monday, the senior bench spent the morning writing the same shape of commentary for 18 retained clients: what happened last week, why, what we're changing. ~2 senior days a week, every week.
The commentary was the most-read part of the agency's output, but it was the part the senior team had the least time to make sharp.
THE APPROACH
A narrative engine that reads platforms the way a strategist would.
Phase 1: pipe every client's ad-platform data into BigQuery on a daily cadence. Standardise the metrics that matter (effective CPM, frequency, qualified lead rate by source).
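A minimal sketch of the standardisation step, assuming illustrative field names (spend, impressions, reach, leads, qualified_leads) rather than the production BigQuery schema:

```python
# Sketch of the metric standardisation applied to each platform's daily
# rows. Field names are illustrative, not the production schema.

def effective_cpm(spend: float, impressions: int) -> float:
    """Cost per 1,000 impressions, guarded against zero-impression days."""
    return (spend / impressions) * 1000 if impressions else 0.0

def frequency(impressions: int, reach: int) -> float:
    """Average impressions per person reached."""
    return impressions / reach if reach else 0.0

def qualified_lead_rate(qualified_leads: int, leads: int) -> float:
    """Share of leads that passed qualification, per source."""
    return qualified_leads / leads if leads else 0.0

row = {"spend": 1240.0, "impressions": 310_000, "reach": 92_000,
       "leads": 85, "qualified_leads": 31}

print(effective_cpm(row["spend"], row["impressions"]))            # 4.0
print(qualified_lead_rate(row["qualified_leads"], row["leads"]))  # ~0.36
```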
Phase 2: per-client tone profiles in Notion — some clients want hedged language, others want bullet points. The model writes in the client's preferred register.
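A rough sketch of what a tone profile could look like once pulled out of Notion; the ToneProfile fields and to_prompt_rules helper are assumptions for illustration, not the real schema:

```python
# Illustrative shape of a per-client tone profile (stored in Notion in
# practice); the field names here are assumptions.
from dataclasses import dataclass, field

@dataclass
class ToneProfile:
    client: str
    register: str          # e.g. "hedged", "direct"
    format: str            # e.g. "prose", "bullets"
    banned_phrases: list[str] = field(default_factory=list)

    def to_prompt_rules(self) -> str:
        # Turn the profile into plain-language rules for the model prompt.
        rules = [f"Write in a {self.register} register.",
                 f"Format the commentary as {self.format}."]
        if self.banned_phrases:
            rules.append("Never use: " + ", ".join(self.banned_phrases))
        return "\n".join(rules)

profile = ToneProfile("Acme Retail", "hedged", "bullets",
                      banned_phrases=["game-changer"])
print(profile.to_prompt_rules())
```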
Phase 3: a 4-paragraph commentary draft per account every Monday at 7am: what changed, why, what's notable, what to do about it. Strategists open the draft, edit the wording, hit send.
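A hedged sketch of the Monday assembly step; generate_section stands in for the model call, and the headings mirror the four-paragraph structure above:

```python
# Sketch of the Monday draft assembly. generate_section is a stand-in
# for the enterprise-endpoint model call plus the client's tone rules.

SECTIONS = ["What changed", "Why", "What's notable", "What to do about it"]

def generate_section(heading: str, metrics: dict, tone_rules: str) -> str:
    # Stand-in: the real step prompts the model with last week's
    # standardised metrics and the client's tone profile.
    return f"{heading}: (draft copy grounded in {len(metrics)} metrics)"

def monday_draft(metrics: dict, tone_rules: str) -> str:
    return "\n\n".join(generate_section(h, metrics, tone_rules)
                       for h in SECTIONS)

print(monday_draft({"cpl": 31.0, "effective_cpm": 4.0}, "hedged, bullets"))
```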
For the first two Mondays the drafts sounded too confident: they would assert a cause for a CPL spike without enough evidence. We added a 'flag-if-low-signal' rule, so weeks without a clear explanation now ship as 'awkward — needs strategist call' instead of fake certainty.
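A minimal sketch of how such a gate could work; the evidence_score input and 0.7 threshold are illustrative, not the production rule:

```python
# Sketch of the 'flag-if-low-signal' rule: weak causal evidence
# downgrades the draft to a flag rather than an asserted explanation.

LOW_SIGNAL_FLAG = "awkward — needs strategist call"

def explain_or_flag(candidate_cause: str, evidence_score: float,
                    threshold: float = 0.7) -> str:
    if evidence_score < threshold:
        return LOW_SIGNAL_FLAG
    return f"CPL moved because {candidate_cause}."

print(explain_or_flag("auction CPMs rose on Meta", 0.4))
# -> awkward — needs strategist call
```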
WHAT WAS MESSY
Where the first version of the workflow failed.
Tone calibration was per-client and tedious — 18 accounts × 2 calibration rounds before strategists trusted the drafts.
The anomaly detector over-flagged in its first week (every minor fluctuation got tagged). We tuned the rules and added a 'don't flag if within the prior 4-week range' guard.
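The guard itself is simple; a sketch, with illustrative names:

```python
# Sketch of the over-flagging guard: suppress an anomaly flag when this
# week's value sits inside the min-max band of the prior four weeks.

def should_flag(current: float, prior_4_weeks: list[float]) -> bool:
    lo, hi = min(prior_4_weeks), max(prior_4_weeks)
    return not (lo <= current <= hi)

print(should_flag(5.1, [4.2, 4.8, 5.3, 4.9]))  # False: within range, no flag
print(should_flag(7.6, [4.2, 4.8, 5.3, 4.9]))  # True: outside range, flag
```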
Two clients didn't want auto-drafts at all (regulated category, ultra-conservative tone). We kept those manual and didn't push it.
THE OUTCOME
Mondays are no longer write-up days. Client-reported clarity improved.
- Weekly commentary writing hours: 16 h → 3 h (−81%)
- Accounts with weekly auto-draft: 0 → 18 (100%)
- Client-reported clarity score: +24% (n=18)
- Strategist Monday throughput: +62% (self-reported)
HOW WE MEASURED IT
Baseline, sample and method — so the numbers above are checkable.
Baseline: 4 weeks of pre-pilot commentary (18 accounts × 4 Mondays = 72 write-ups), measured from strategist timesheets.
Pilot: 4 weeks of engine-drafted commentary across the same 18 accounts.
The time-saved figure counts strategist writing time only: the 16 h baseline is pure write-up time, and the 3 h figure includes strategist review and edit time under the new workflow.
Clarity score: a 1–5 question added to the weekly client follow-up email. +24% is the absolute delta versus the 4-week baseline (n=18 accounts, response rate ~58%). Treat as directional, not a controlled study.
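The headline time delta reduces to simple arithmetic, reproduced here for checkability:

```python
# Reproducing the headline delta: 16 h of weekly write-up time down to
# 3 h of review/edit time under the new workflow.
before_h, after_h = 16, 3
print(round((after_h - before_h) / before_h * 100))  # -81 (%)
```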
WHAT WE DID NOT AUTOMATE
Where the human stayed in the loop on purpose.
Strategists own every send. The engine drafts; strategists edit and approve before anything reaches a client.
We did not train models on client data. Drafts run through enterprise model endpoints with no training feedback.
Awkward-week interpretation stays human — the engine flags low-signal weeks instead of guessing causes.
Two regulated-category accounts stayed fully manual at the client's request. The engine handled the other 16.
WHAT'S NEXT
The same engine is being extended to ad-hoc 'why did X spike' deep-dives.
Clients ask 'why did our CPL jump last week?' about four times a month. Each answer used to take an hour. The engine now drafts the explanation against the data and the strategist reviews it.
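A rough sketch of how that flow could hang together; both helpers are hypothetical stand-ins for the real BigQuery lookup and model call:

```python
# Sketch of the ad-hoc 'why did X spike' flow, with stand-in helpers.

def query_metric_window(account: str, metric: str, prior_weeks: int = 4) -> list[float]:
    # Stand-in for the BigQuery lookup of the metric's recent weekly values.
    return [42.0, 44.5, 41.8, 58.3]  # last value is the spike week

def draft_explanation(metric: str, window: list[float]) -> str:
    # Stand-in for the model call that drafts a data-grounded explanation.
    return f"{metric} rose {window[-1] - window[-2]:+.1f} week-on-week."

def deep_dive(account: str, metric: str) -> str:
    window = query_metric_window(account, metric)
    return "DRAFT (strategist review required): " + draft_explanation(metric, window)

print(deep_dive("Acme Retail", "CPL"))
```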
The agency is positioning this as 'always-on strategist commentary' in new business — without staffing the always-on part.