THE PROBLEM
Feedback was scattered, repeated, and missing the parts that mattered.
Multi-market campaigns went through 6–10 reviewers across legal, brand, market leads, and clients. Feedback lived in Frame.io timeline notes, email threads, Slack DMs, and review-call recordings.
Routine notes (typo, logo placement, missing disclaimer) were rewritten by humans every round. Critical taste feedback often got lost in the noise.
THE APPROACH
An approval workflow that handles the boring 80% and surfaces the 20% that needs a human.
Phase 1: consolidated feedback channels into a single Notion log per asset — Frame.io comments, Slack messages, and email replies all stream in, tagged by reviewer and timestamp.
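To make the ingestion step concrete, here is a minimal sketch, assuming a Notion database with Asset / Reviewer / Source / Timestamp / Note properties (illustrative names, not our production schema) and webhook payloads already parsed upstream:

```python
# A minimal sketch of the Phase 1 ingestion step. Property names and the
# database id are illustrative assumptions, not the real schema.
from notion_client import Client

notion = Client(auth="NOTION_TOKEN")   # real token supplied via env/config
FEEDBACK_DB = "feedback-database-id"   # hypothetical database id

def log_feedback(asset_id: str, reviewer: str, source: str,
                 timestamp: str, note: str) -> None:
    """Append one normalised feedback item to the per-asset log."""
    notion.pages.create(
        parent={"database_id": FEEDBACK_DB},
        properties={
            "Asset": {"title": [{"text": {"content": asset_id}}]},
            "Reviewer": {"rich_text": [{"text": {"content": reviewer}}]},
            "Source": {"select": {"name": source}},  # "frameio" | "slack" | "email"
            "Timestamp": {"date": {"start": timestamp}},
            "Note": {"rich_text": [{"text": {"content": note}}]},
        },
    )
```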
Phase 2: a compliance pass that flags brand-guideline violations, missing disclaimers, format/spec mismatches before assets reach senior reviewers. Flagging only — final legal and brand sign-off stayed human-owned.
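The compliance pass is flag-only by construction: it returns findings and never touches approval state. A simplified sketch, with illustrative check names and the vision-model checks stubbed out:

```python
# A flag-only sketch of the Phase 2 compliance pass. Check names and the
# asset/spec dict shapes are illustrative; the brand-guideline checks ran
# through a vision-model prompt and are stubbed out here.
from dataclasses import dataclass

@dataclass
class Flag:
    rule: str
    detail: str

def compliance_pass(asset: dict, spec: dict) -> list[Flag]:
    """Return flags for humans to review; never approves or rejects."""
    flags: list[Flag] = []
    if spec["disclaimer_required"] and not asset.get("has_disclaimer"):
        flags.append(Flag("disclaimer", "required disclaimer not detected"))
    if (asset["width"], asset["height"]) != (spec["width"], spec["height"]):
        flags.append(Flag("format", f"{asset['width']}x{asset['height']} "
                                    f"does not match spec {spec['width']}x{spec['height']}"))
    # brand-guideline checks (logo safe area, colour) ran via a vision prompt
    return flags
```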
Phase 3: a routing layer that decides which reviewer needs to see which version — taste calls go up, routine fixes route to the post team with a one-click approval gate.
We were worried the AI would over-flag and waste post-team time. First week it did exactly that — too many false positives on disclaimer placement. After a tuning pass it stabilised, but we still have a human spot-check on every legal flag before it goes to post.
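Putting the routing rules together (taste calls go up, legal flags get a human spot-check, routine fixes hit the one-click gate), a minimal sketch, reusing the Flag type from the compliance sketch above; queue names and the flag categories are illustrative:

```python
# A minimal sketch of the Phase 3 routing rules. The one-click gate itself
# lived in the review tool, not in this function.
def route_version(flags: list[Flag], has_taste_feedback: bool) -> str:
    if has_taste_feedback:
        return "senior-review"      # taste calls go up
    if any(f.rule in ("disclaimer", "legal") for f in flags):
        return "legal-spot-check"   # a human checks every legal flag first
    if flags:
        return "post-team-gate"     # routine fixes, one-click approval
    return "ready-for-signoff"      # nothing flagged this round
```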
WHAT WAS MESSY
Where the first version of the workflow failed.
The brand-compliance vision pass over-flagged in week one — too many false positives on logo placement. We tuned the prompt with examples from approved assets and dropped the false-positive rate by ~70%.
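The tuning pass amounted to giving the vision check calibration examples drawn from approved assets. A sketch of how a few-shot prompt like this can be assembled; the example descriptions and prompt wording are illustrative, not the production prompt:

```python
# A sketch of few-shot tuning, assuming the vision check is driven by a
# text prompt over a frame description. All strings here are illustrative.
APPROVED_EXAMPLES = [
    {"desc": "logo bottom-right, clear space respected", "verdict": "pass"},
    {"desc": "logo bottom-right, overlapping the disclaimer band", "verdict": "flag"},
]

def build_logo_prompt(frame_description: str) -> str:
    shots = "\n".join(f"- {e['desc']} -> {e['verdict']}" for e in APPROVED_EXAMPLES)
    return (
        "You check logo placement against brand guidelines.\n"
        "Flag ONLY when safe-area or clear-space rules are broken.\n"
        "Calibration examples from approved assets:\n"
        f"{shots}\n"
        f"Frame: {frame_description}\n"
        "Answer 'pass' or 'flag' with a one-line reason."
    )
```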
Some legacy assets weren't in the asset API. We added a manual upload path for those, which means about 10% of routine fixes still require a human to drop the file in.
Senior reviewers initially didn't trust the routing — they wanted to see every round. We added a 'recent decisions' digest so they could audit the routing without re-reviewing every asset.
THE OUTCOME
Faster throughput, fewer rounds, and a searchable history of every decision.
- Approval rounds per asset: 8 avg → 3 avg (−63%)
- Assets shipped per week: 12 → 29 (2.4×)
- Senior reviewer hours per campaign: −58% (n=47)
- Routine issues slipping through to final delivery: −92% (directional)
HOW WE MEASURED IT
Baseline, sample and method — so the numbers above are checkable.
Baseline: 12 multi-market assets shipped through the old process (Frame.io audit + scheduler logs across the 3 months prior).
Pilot: 47 multi-market assets shipped through the new workflow over 6 weeks.
The approval-rounds figure counts version uploads in Frame.io per final-delivered asset.
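In code terms, that count is a simple tally over the audit export, assuming one row per event (column names here are ours, not Frame.io's):

```python
# A sketch of the approval-rounds tally over a hypothetical audit export.
import csv
from collections import Counter

def rounds_per_asset(audit_csv: str) -> dict[str, int]:
    """Count version uploads per final-delivered asset."""
    uploads: Counter[str] = Counter()
    with open(audit_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["event"] == "version_upload":
                uploads[row["asset_id"]] += 1
    return dict(uploads)

# average rounds = sum of counts / number of final-delivered assets
```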
Routine-flag-rate is the share of disclaimer/format/brand issues caught before final delivery, measured against the same issues found in the baseline cohort after delivery. Small sample — treat as directional.
WHAT WE DID NOT AUTOMATE
Where the human stayed in the loop on purpose.
Legal sign-off stayed human-owned end-to-end. The system flags potential legal issues; it never approves an asset.
Final brand sign-off stayed with the brand lead. The system surfaces guideline mismatches; the brand lead decides.
Creative direction and taste calls remained fully senior-driven. The engine routes the boring stuff; humans run the actual review.
No client-facing send happened without a senior on the asset.
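That last rule is the one hard gate in the system. A sketch of the check, with illustrative field names; all three sign-offs are recorded by humans and never set by the system:

```python
# A sketch of the client-send gate. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class AssetState:
    legal_signed_off: bool
    brand_signed_off: bool
    senior_on_asset: bool

def can_send_to_client(a: AssetState) -> bool:
    return a.legal_signed_off and a.brand_signed_off and a.senior_on_asset
```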
WHAT'S NEXT
The approval log is now the agency's institutional memory.
New hires used to take 4–6 weeks to internalise 'how we review here'. With the structured log, they're shadowing decisions on day one and seeing the rationale, not just the outcome.
The studio is adding a similar layer for music and licensing clearance — same pattern, different rulebook.