Performance Marketing Knowledge Pack Files

Browse the source files that power the Performance Marketing MCP server knowledge pack.

analytics/references/attribution-models.md

Attribution Models — Detailed Guide

Comparison of attribution approaches, implementation guidance, and decision frameworks for selecting the right model.

Model Comparison Matrix

| Model | Type | Data Needs | Accuracy | Coverage | Best For |
|---|---|---|---|---|---|
| Last click | Rule-based | Minimal | Low | Single channel | Simple funnels, search-heavy |
| First click | Rule-based | Minimal | Low | Single channel | Awareness measurement |
| Linear | Rule-based | Touch data | Medium-low | Multi-channel | Equal-weight exploration |
| Time decay | Rule-based | Touch data | Medium | Multi-channel | Long sales cycles |
| Position-based | Rule-based | Touch data | Medium | Multi-channel | Balanced view |
| Data-driven (DDA) | Algorithmic | 300+ conv/mo | Medium-high | Multi-channel | Sufficient data volume |
| Media Mix Modeling | Statistical | 2+ years data | High (directional) | All channels (inc. offline) | Budget allocation |
| Incrementality | Experimental | Test design | Highest | Per-channel | Causal measurement |

Platform Attribution Defaults

| Platform | Default Window | Click | View |
|---|---|---|---|
| Google Ads | 30-day click | Yes | Yes (GDN/YouTube) |
| Meta Ads | 7-day click, 1-day view | Yes | Yes |
| LinkedIn | 30-day click, 7-day view | Yes | Yes |
| TikTok | 7-day click, 1-day view | Yes | Yes |
| GA4 | 90-day (data-driven) | Yes | No (by default) |

Why Platforms Over-Count

Each platform sees only its own touchpoints and claims credit for any conversion within its window:

User journey: LinkedIn ad (day 1) → Google search (day 5) → Meta retarget (day 6) → Purchase (day 7)

LinkedIn reports: 1 conversion (30-day click window)
Google reports:  1 conversion (30-day click window)
Meta reports:    1 conversion (7-day click window)
Backend truth:   1 actual conversion

Result: 3 reported conversions for 1 actual purchase. This is normal, not a tracking error.

Solution: Never sum platform-reported conversions. Use backend data as the denominator and platform data to understand relative contribution.
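The reconciliation above can be sketched in a few lines. The platform counts, backend total, and the proportional-credit split are illustrative assumptions, not real data — the point is the shape of the calculation, not the numbers:

```python
# Sketch: reconcile platform-reported conversions against backend truth.
# All counts below are illustrative.

platform_reported = {"linkedin": 120, "google": 150, "meta": 90}
backend_conversions = 180  # source of truth (CRM / order database)

reported_total = sum(platform_reported.values())        # platforms over-count
overcount_ratio = reported_total / backend_conversions  # e.g. 2.0x here

# Use backend as the denominator; platform data informs relative contribution.
estimated_backend_credit = {
    p: round(n * backend_conversions / reported_total)
    for p, n in platform_reported.items()
}
```

Splitting backend conversions proportionally to platform-reported counts is the simplest reconciliation; a multi-touch model or lift test would refine the shares.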

Data-Driven Attribution (DDA)

How It Works

  • Analyzes all converting and non-converting paths
  • Uses Shapley value or Markov chain to assign fractional credit
  • Accounts for the marginal contribution of each touchpoint
  • Requires sufficient data volume to build reliable models
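The Shapley-value idea behind DDA can be shown on a toy example: average each channel's marginal contribution over every possible arrival order. The coalition conversion rates below are made-up numbers standing in for what a real model would estimate from path data:

```python
from itertools import permutations

# Toy coalition values: estimated conversion rate when only these channels run.
# These numbers are illustrative assumptions, not measured data.
v = {
    frozenset(): 0.00,
    frozenset({"search"}): 0.04,
    frozenset({"social"}): 0.02,
    frozenset({"display"}): 0.01,
    frozenset({"search", "social"}): 0.07,
    frozenset({"search", "display"}): 0.05,
    frozenset({"social", "display"}): 0.03,
    frozenset({"search", "social", "display"}): 0.08,
}
channels = ["search", "social", "display"]

def shapley_credit(channels, v):
    """Average each channel's marginal contribution over all arrival orders."""
    credit = {c: 0.0 for c in channels}
    orders = list(permutations(channels))
    for order in orders:
        seen = frozenset()
        for c in order:
            credit[c] += v[seen | {c}] - v[seen]  # marginal contribution
            seen = seen | {c}
    return {c: total / len(orders) for c, total in credit.items()}

credit = shapley_credit(channels, v)
```

By construction the credits sum to the full-coalition value, which is why Shapley-style attribution always distributes exactly 100% of measured conversions.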

Requirements

  • Google Ads DDA: 300+ conversions and 3,000+ ad interactions in 30 days
  • GA4 DDA: Active by default, quality depends on data volume
  • Meta: Not available as a formal model; uses modeled conversions

Limitations

  • Black box — you can't see the exact credit logic
  • Biased toward channels with more touchpoints (favors display over search)
  • Doesn't capture offline, word-of-mouth, or organic brand lift
  • Changes over time as user behavior changes

Media Mix Modeling (MMM)

What It Is

Statistical model that estimates the contribution of each marketing channel to an outcome (revenue, conversions) using historical aggregate data. Includes non-digital channels (TV, print, OOH) and external factors (seasonality, economy).

When to Use

  • $500K+/year ad spend across 3+ channels
  • 2+ years of historical spend and outcome data
  • Need to measure channels that aren't clickable (TV, OOH, podcast)
  • Budget allocation decisions across channels

Key Components

Revenue = Base + Channel_1_contribution + Channel_2_contribution + ... + Seasonality + External_factors + Error
  • Base: Revenue that would occur with zero marketing
  • Channel contributions: Each channel's estimated impact
  • Adstock/carryover: How long a channel's impact lasts after spend stops
  • Saturation: Diminishing returns as spend increases
  • External factors: Seasonality, promotions, economic indicators
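The adstock and saturation components above can be sketched as simple transforms. The geometric decay rate and Hill-curve parameters are illustrative placeholders; a real MMM fits them from data:

```python
# Sketch of the adstock and saturation transforms, with assumed parameters.

def adstock(spend, decay=0.5):
    """Geometric carryover: a channel's effect persists after spend stops."""
    carried, out = 0.0, []
    for s in spend:
        carried = s + decay * carried
        out.append(carried)
    return out

def hill_saturation(x, half_sat=100.0, shape=1.0):
    """Diminishing returns: response flattens as effective spend grows."""
    return x**shape / (half_sat**shape + x**shape)

weekly_spend = [100, 100, 0, 0]  # spend stops after week 2
effect = [hill_saturation(a) for a in adstock(weekly_spend)]
```

Note how the transformed effect stays above zero in weeks 3 and 4 (carryover) while doubling week-1 spend would less than double the response (saturation).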

Open-Source MMM Tools

| Tool | By | Language | Strengths |
|---|---|---|---|
| Meridian | Google | Python | Bayesian, incorporates geo-level data |
| Robyn | Meta | R | Automated hyperparameter tuning, multi-objective |
| LightweightMMM | Google | Python | Bayesian, built on NumPyro |
| PyMC-Marketing | PyMC Labs | Python | Flexible Bayesian framework |

Incrementality Testing Guide

Geo Lift Test

Setup:

  1. Select test markets (exposed to campaign) and control markets (no campaign)
  2. Markets must have similar baseline conversion rates
  3. Run for 2-4 weeks minimum
  4. Measure conversion lift: (test - control) / control
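The lift calculation in step 4 is straightforward; a minimal sketch with made-up market totals (market names and counts are illustrative only):

```python
# Geo lift read-out: lift = (test - control) / control.
# Market names and conversion counts below are illustrative.

test_conversions = {"austin": 420, "denver": 380, "portland": 310,
                    "raleigh": 290, "tucson": 260}
control_conversions = {"columbus": 390, "omaha": 350, "boise": 300,
                       "tulsa": 280, "fresno": 255}

test_total = sum(test_conversions.values())
control_total = sum(control_conversions.values())
lift = (test_total - control_total) / control_total
```

A real read-out should also compare against the pre-test baseline and check statistical significance before declaring a win.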

Design rules:

  • Minimum 5 test + 5 control markets for statistical power
  • Markets should be non-adjacent (prevent spillover)
  • Hold other marketing constant during test
  • Run pre-test period to establish baseline

Conversion Lift Test (Platform-Native)

Meta Conversion Lift:

  • Meta splits eligible audience into test (see ads) and holdout (don't see ads)
  • Measures incremental conversions caused by the campaign
  • Requires: sufficient budget, Pixel/CAPI, 1-4 week minimum
  • Limitations: only measures Meta's contribution, not cross-channel

Google Conversion Lift:

  • Similar holdout methodology for YouTube and Display
  • Available through Google Ads representative
  • Requires significant spend ($50K+ recommended)

On/Off Test

The simplest incrementality test:

  1. Pause a channel completely for 2-4 weeks
  2. Measure impact on total conversions (all channels)
  3. Restart and measure recovery

When to use: When you suspect a channel is getting attribution credit it doesn't deserve. If pausing search retargeting has zero impact on total conversions, it wasn't incremental.

Limitations: Blunt instrument. Doesn't tell you the optimal spend level, just whether the channel matters at all.
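The on/off read-out reduces to comparing weekly totals across the three phases. Weekly numbers here are illustrative; a real analysis should control for seasonality and any concurrent marketing changes:

```python
# Sketch: total conversions (all channels) before, during, and after a pause.
# Weekly totals are illustrative assumptions.

pre_pause = [950, 970, 940, 960]     # channel on
during_pause = [945, 955, 930, 950]  # channel paused
post_pause = [965, 950, 945, 955]    # channel restarted

def weekly_mean(weeks):
    return sum(weeks) / len(weeks)

drop = (weekly_mean(pre_pause) - weekly_mean(during_pause)) / weekly_mean(pre_pause)
# A drop near zero suggests the paused channel was not incremental.
```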

Attribution Decision Framework

Q: What decision are you making?

A1: Which channel gets next $10K of budget?
    → Use: Marginal CPA analysis + incrementality test

A2: How should we split annual budget across channels?
    → Use: MMM (if data exists) or historical ROAS + incrementality

A3: Which campaign within a channel should I scale?
    → Use: Platform DDA + backend conversion reconciliation

A4: Is this new channel worth testing?
    → Use: Geo lift test or on/off test after initial pilot

A5: Should we cut this underperforming channel?
    → Use: On/off test to measure true incremental impact before cutting

Measurement Maturity Model

| Level | Description | Typical Setup |
|---|---|---|
| 1 — Basic | Platform-reported metrics only | Each platform tells its own story, no reconciliation |
| 2 — Tracking | Centralized analytics + UTMs | GA4 as source of truth, consistent UTM taxonomy |
| 3 — Attribution | Multi-touch attribution, backend reconciliation | MTA tool or DDA, platform data reconciled with CRM |
| 4 — Incrementality | Regular lift tests, experimental measurement | Quarterly incrementality tests per major channel |
| 5 — Modeling | MMM + incrementality + platform data, triangulated | All three lenses informing budget allocation |

Most accounts should target Level 3 and run periodic Level 4 tests. Level 5 requires dedicated data science resources and $1M+ annual spend to justify the investment.

UTM Taxonomy

Consistent UTM parameters are the foundation of cross-channel measurement.

utm_source   = platform name          (google, meta, linkedin, tiktok)
utm_medium   = traffic type           (cpc, cpm, paid-social, email, organic)
utm_campaign = campaign name          (must match campaign naming convention)
utm_content  = ad set / ad group      (audience or creative identifier)
utm_term     = keyword or targeting   (search term or audience segment)

Rules:

  • Lowercase only. Google and google count as different sources in analytics.
  • No spaces. Use hyphens or underscores.
  • Be consistent across all channels and all team members.
  • Use a UTM builder template, not manual entry.
  • Validate UTMs fire correctly before scaling spend.
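The rules above lend themselves to a small helper that enforces them programmatically. This is a hypothetical sketch (the function name, parameters, and example URL are not from any existing tool), but it shows the idea of a UTM builder template instead of manual entry:

```python
from urllib.parse import urlencode

# Hypothetical UTM builder enforcing the rules above: lowercase, no spaces.

def build_utm_url(base_url, source, medium, campaign, content=None, term=None):
    params = {
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
        "utm_content": content,
        "utm_term": term,
    }
    # Drop empty parameters, then normalize: lowercase, hyphens for spaces.
    cleaned = {
        k: v.strip().lower().replace(" ", "-")
        for k, v in params.items() if v
    }
    return f"{base_url}?{urlencode(cleaned)}"

url = build_utm_url("https://example.com/lp", "Google", "cpc",
                    "Spring Sale 2025", content="audience-a")
```

Inputs like "Google" and "Spring Sale 2025" come out as `google` and `spring-sale-2025`, so every link matches the taxonomy regardless of who builds it.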