Background
At CABA, the marketing team was constrained by the limited flexibility and slow turnaround of our data visualization stack. Looker Studio, our go-to data visualization tool, couldn’t support the advanced customizations our reporting increasingly required. At the same time, the development team had limited bandwidth, and repeated back-and-forth over business logic was stalling our optimization efforts.
The Problem
Our existing setup couldn’t keep up with the complexity or pace of our reporting needs, and every custom report request meant the same cycle:
- Translate business requirements to the dev team
- Wait for development (days to weeks)
- Review, find gaps, go back to step 1
- Repeat
This made it difficult to explore new attribution questions or respond quickly to performance shifts. For a fast-moving DTC brand optimizing seven-figure monthly ad spend, this turnaround time was unacceptable. We needed reports that could evolve as fast as our questions did.
The Opportunity
Polar Analytics, our multi-touch attribution platform, launched an MCP (Model Context Protocol) integration, a programmatic interface for querying raw attribution data. I saw an opportunity: what if I could bypass the dev queue entirely and build automated, self-updating dashboards myself?
I proposed taking ownership of the custom reporting pipeline. Instead of waiting on developers, I would leverage MCP to retrieve raw data, transform it to our exact specifications, and deliver production-ready dashboards that refreshed daily.
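To illustrate what that retrieval step can look like, here is a minimal sketch using the official MCP TypeScript SDK. The server URL, tool name, and arguments are hypothetical placeholders, since Polar’s actual MCP tool catalog may differ (it can be discovered at runtime via `client.listTools()`):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Hypothetical server URL -- substitute the endpoint Polar provides.
const transport = new StreamableHTTPClientTransport(
  new URL("https://mcp.polaranalytics.example/mcp"),
);
const client = new Client({ name: "reporting-pipeline", version: "1.0.0" });
await client.connect(transport);

// Hypothetical tool name and arguments; the real schema comes from listTools().
const result = await client.callTool({
  name: "query_attribution",
  arguments: { metrics: ["orders", "revenue"], lookbackDays: 30 },
});
console.log(result.content); // raw attribution rows, ready for the ETL step
```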
The Solution: A Lightweight Low-Code Reporting Pipeline
I built an end-to-end automation system using Polar MCP, n8n workflows, and Claude AI, designed to be fully owned by the marketing team:
- Rapid prototyping: MCP data samples are used to quickly scaffold initial report layouts with Claude, allowing early validation with real attribution data.
- ETL: n8n extracts data from the MCP endpoint, applies business rules (aggregation, deduplication, and grouping), and stores the transformed output.
- Low-maintenance data visualization: Once data structures are finalized, Claude is used to accelerate HTML layout generation. The resulting page is then served via an n8n endpoint and hydrated with daily-updated data.
This allowed new attribution reports to be prototyped, iterated on, and shipped in hours instead of weeks—without dev tickets or engineering dependencies.
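As a flavor of the business-rule step, here is a simplified sketch of an n8n Code node. The field names (`orderId`, `channel`, `revenue`) are assumptions about the MCP payload; `$input.all()` and the returned `{ json }` items are n8n’s Code-node conventions:

```typescript
// Runs inside an n8n Code node: deduplicate raw rows by order id,
// then group by channel and sum metrics before storage.
const seen = new Set();
const byChannel = new Map();

for (const item of $input.all()) {
  const row = item.json;
  if (seen.has(row.orderId)) continue; // drop duplicate order rows
  seen.add(row.orderId);

  const acc = byChannel.get(row.channel) ?? { channel: row.channel, orders: 0, revenue: 0 };
  acc.orders += 1;
  acc.revenue += row.revenue;
  byChannel.set(row.channel, acc);
}

// A Code node returns an array of { json } items for the next node.
return [...byChannel.values()].map((r) => ({ json: r }));
```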
Case Study: Custom Customer Journey Report
Our weekly review included a customer journey analysis: touchpoint sequences ranked by order volume. However, neither Polar’s native reports nor Looker Studio could deliver the level of insight we actually needed.
Reporting Requirements
- Last 30 days: orders, revenue, and conversion value share (% of total) per sequence
- Prior 30 days: the same metrics for comparison
- Period-over-period change: percent delta for each metric
- Sequence aggregation: collapse repeated same-channel touches (e.g., [Google Ads, Google Ads, Google Ads] → [Google Ads])
None of this was possible out of the box.
My Solution
I built a data transformation pipeline that modeled customer journeys in a way that aligned with how stakeholders reason about attribution:
- Pull raw touchpoint sequence data for both periods via MCP
- Apply aggregation logic to collapse redundant sequences
- Calculate conversion value share against total period revenue
- Compute period-over-period deltas
- Output a clean HTML table with all metrics presented side by side
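The core of that pipeline is the sequence transform. A minimal sketch, with assumed field names and the collapse rule from the requirements above:

```typescript
interface JourneyRow {
  sequence: string[]; // ordered touchpoints, e.g. ["Google Ads", "Google Ads", "Email"]
  orders: number;
  revenue: number;
}

// Collapse consecutive same-channel touches ([A, A, B] -> [A, B]),
// aggregate identical collapsed sequences, and compute revenue share.
function aggregateJourneys(rows: JourneyRow[]) {
  const totalRevenue = rows.reduce((sum, r) => sum + r.revenue, 0);
  const grouped = new Map<string, { orders: number; revenue: number }>();

  for (const r of rows) {
    const key = r.sequence
      .filter((ch, i) => ch !== r.sequence[i - 1]) // drop consecutive repeats
      .join(" → ");
    const acc = grouped.get(key) ?? { orders: 0, revenue: 0 };
    acc.orders += r.orders;
    acc.revenue += r.revenue;
    grouped.set(key, acc);
  }

  return [...grouped.entries()].map(([sequence, m]) => ({
    sequence,
    ...m,
    share: totalRevenue ? m.revenue / totalRevenue : 0, // conversion value share
  }));
}

// Period-over-period delta, applied per metric across the two 30-day windows.
const delta = (curr: number, prior: number) =>
  prior === 0 ? null : (curr - prior) / prior;
```

Running `aggregateJourneys` over each 30-day window and joining on `sequence` yields the side-by-side table, sorted by current-period order volume.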
The result was cleaner insights, clearer comparisons, and more actionable attribution discussions.
Impact
This approach cut the time to iterate on attribution analysis from weeks to hours, while increasing confidence in the data presented during leadership reviews. It also enabled faster experimentation with attribution logic and removed recurring dependencies on engineering for analytics changes.
Beyond speed, the system created a repeatable pattern for building future custom attribution reports without expanding engineering scope.