
Why your attribution model is lying to you

Last-click, first-click, linear, data-driven — none of them tell you what is actually working. Here is a more honest approach.

Marketing attribution is the closest thing our industry has to astrology. The models look sophisticated, the dashboards look convincing, and the conclusions sound precise. They are also, almost always, wrong.

The core problem

Every attribution model tries to answer a question that is fundamentally unanswerable with click data alone: "What would have happened if this user had not seen that ad?" The only honest answer is "We do not know, because we cannot observe the counterfactual." Attribution models paper over this with assumptions — and those assumptions quietly dictate your conclusions.

Last-click says the final touchpoint deserves all the credit. First-click says the first. Linear says everyone gets equal credit. Data-driven models use ML to assign weights, which sounds objective but is really "weights that correlate with conversion in our training window." None of these tells you anything about causation.

What to measure instead

The companies we see doing this well stopped asking "what channel gets the credit?" and started asking three different questions:

  1. Incrementality. Run holdouts. Randomly exclude 10% of your audience from a campaign and compare their behaviour. This tells you what the campaign actually caused, not what correlates with it.
  2. Marketing mix modelling at the strategic level. Monthly, not daily. Use it to decide channel budgets, not to optimise individual campaigns. MMM is a blunt instrument for strategic questions and a disaster for tactical ones.
  3. Survey-based attribution. "How did you hear about us?" in the post-purchase survey, asked consistently over years, is directionally more honest than any tracking pixel. It has its own biases but they are less destructive than pixel-based ones.
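The holdout idea in point 1 is simple to operationalise. One common approach (sketched below with a hypothetical `in_holdout` helper; the hashing scheme is an assumption, not a prescribed method) is to hash the user and campaign together so that assignment is stable across sessions and independent between campaigns:

```python
import hashlib

def in_holdout(user_id: str, campaign_id: str, holdout_pct: float = 0.10) -> bool:
    """Deterministically assign ~holdout_pct of users to the holdout group.

    Hashing (campaign_id, user_id) together means the same user is always
    in the same group for a given campaign, but assignments are effectively
    independent across campaigns.
    """
    digest = hashlib.sha256(f"{campaign_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return bucket < holdout_pct
```

Users for whom `in_holdout(...)` returns `True` are simply never served the campaign; everyone else is exposed as normal, and you compare conversion rates between the two groups afterwards.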

The organisational problem

The deeper issue is organisational. Attribution models exist because someone needs to defend a budget. Channel owners want their channel to look good; CMOs want aggregate ROI to look good; finance wants a number. A model gets adopted because it produces politically acceptable numbers, not because it is accurate.

The way out is to decouple measurement from compensation. Judge teams on the strategic decisions they make with incomplete data, not on the accuracy of an attribution model they had no hand in building. Make incrementality testing a quarterly discipline, not a one-off experiment. Treat the attribution dashboard as a rough compass, not a ruler.

A practical starting point

This quarter, pick your biggest-budget channel and run one clean holdout test. If the holdout group converts at 80% of the exposed group's rate, only 20% of the exposed group's conversions are incremental — not the 100% your last-click model implies. That single number will reshape your thinking faster than any model overhaul.
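The arithmetic is one line. As a sketch (the 5% and 4% conversion rates below are made-up numbers chosen to match the 80% example in the text):

```python
def incremental_lift(exposed_rate: float, holdout_rate: float) -> float:
    """Fraction of the exposed group's conversions actually caused by the campaign."""
    return (exposed_rate - holdout_rate) / exposed_rate

# Holdout converts at 80% of the exposed rate, e.g. 4% vs 5%:
print(round(incremental_lift(exposed_rate=0.05, holdout_rate=0.04), 3))  # → 0.2
```

Everything the holdout group does anyway would have happened without the spend; only the gap between the two rates is the campaign's doing.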

Stop optimising your attribution logic. Start measuring what actually moves.

