How To Combine Attribution Data With Media Mix Modeling

Marketing measurement is changing: privacy policies are pushing companies away from deterministic last-touch measurement, and a variety of alternatives (SKAN, Google Privacy Sandbox, modeled SAN outputs) are emerging. Beyond the complexity these changes introduce, I’ve noticed some companies taking advantage of the shift by combining traditionally separate measurement methods. Media mix modeling (MMM) has been around since the 1960s but has historically remained distinct from modern, deterministic digital measurement. Now that marketing teams are forced to reconcile multiple sources of measurement to decide what to do, there is an opportunity to pair MMM with more granular attribution methods. I recently sat down with Michael Kaminsky, co-CEO and co-founder of Recast, to chat about how companies are combining holistic, statistically driven models like MMM with attribution data.

— Adam Landis, Head of Growth at Branch


Michael, thanks for joining us today. To provide some context, can you give us some background of what you folks are up to at Recast?

Absolutely! Recast is a modern media mix modeling (MMM) platform focused on speed and verifiability. We combine best-in-class statistical modeling and modern machine learning methods so companies can develop MMMs faster, more flexibly, and more accurately. This is a new paradigm that pulls MMMs out of PowerPoint decks and puts powerful tools into the hands of marketers, who can plan, optimize budgets, and verify model accuracy on an ongoing basis. In some ways, we think of what we’re doing as bridging the gap between last-generation MMM analyses, which only happened a few times a year, and always-on digital attribution methods.

For those who don’t know, what is MMM?

MMM generally stands for “media mix modeling,” or sometimes “marketing mix modeling.” The idea is to construct a top-down statistical or econometric model that links marketing activity to business outcomes (normally something like sales or conversions). MMMs work by looking at a time series of historical marketing data and identifying patterns in it. The model might try to answer a question like, “When we spend relatively more on Facebook ads, controlling for our other marketing activity, how many additional sales do we drive?” Good media mix models need to account for all of the complexity of marketing in practice (e.g., time shifts or adstocks, seasonality, diminishing marginal returns), so good models tend to be quite complex.
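
To make those transforms concrete, here is a minimal sketch in Python. It is illustrative only, not Recast’s model: the decay rates, half-saturation points, and toy data are all invented, and plain least squares stands in for the richer Bayesian machinery a production MMM would use.

```python
import numpy as np

def adstock(spend, decay=0.5):
    """Geometric adstock: part of each period's effect carries into later periods."""
    out = np.zeros(len(spend))
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

def saturate(x, half_sat=100.0):
    """Simple saturation curve: diminishing marginal returns as exposure grows."""
    return x / (x + half_sat)

# Toy weekly data: two channels plus a yearly seasonality term.
rng = np.random.default_rng(0)
weeks = 104
facebook = rng.uniform(50, 200, weeks)
tv = rng.uniform(0, 500, weeks)
season = np.sin(2 * np.pi * np.arange(weeks) / 52)

# Design matrix: intercept, seasonality, and the transformed spend series.
X = np.column_stack([
    np.ones(weeks),
    season,
    saturate(adstock(facebook, decay=0.3)),
    saturate(adstock(tv, decay=0.6), half_sat=250.0),
])
true_betas = np.array([100.0, 10.0, 80.0, 150.0])  # made-up ground truth
sales = X @ true_betas + rng.normal(0, 5, weeks)

# Fit and recover per-channel effects (OLS as a stand-in for a full MMM).
betas, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(dict(zip(["intercept", "season", "facebook", "tv"], betas.round(1))))
```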

Check out our introductory blog post on MMM for more details.

This might be a dumb question, but I hear incrementality often combined with MMM. How does incrementality relate to MMM outputs?

A properly tuned MMM should provide marketers with a measure of incrementality. The word “incrementality” is just marketer speak for “causality.” When we talk about incrementality, we are really talking about the true causal impact of our marketing activity. If we refer to the incrementality of “branded search” advertising, what we really mean is, “How many conversions would not happen if not for our branded search advertising?” The idea is that some customers who click on a branded search ad were going to purchase anyway, so the spend on that click isn’t “incremental,” since the conversion wasn’t caused by the search click. We elaborate more on the topic of branded search incrementality in this blog post.

Incrementality is the most important concept in marketing measurement because incrementality tells us the true return on investment of our marketing activity and is how we can actually optimize our marketing budgets. Media mix models, when built correctly, should provide results that are estimates of incrementality. These can be used to drive budget optimization and should be consistent with experimental results that also attempt to measure incrementality.
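
As a back-of-the-envelope illustration (every number below is hypothetical), here is how an incrementality estimate changes the apparent economics of a channel like branded search:

```python
# Hypothetical numbers, for illustration only.
platform_conversions = 1_000  # conversions the ad platform claims
spend = 5_000.0

# Suppose a geo-holdout test shows 800 of those buyers convert anyway
# when the ads are paused.
baseline_conversions = 800
incremental_conversions = platform_conversions - baseline_conversions

incrementality = incremental_conversions / platform_conversions  # 20%
platform_cac = spend / platform_conversions        # $5.00, looks cheap
incremental_cac = spend / incremental_conversions  # $25.00, the true cost

print(f"incrementality:  {incrementality:.0%}")
print(f"platform CAC:    ${platform_cac:.2f}")
print(f"incremental CAC: ${incremental_cac:.2f}")
```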

That’s interesting. I didn’t know the statistical outcomes of an MMM would provide estimated lift for an individual channel. I know this will probably be a huge “it depends,” but can you talk about accuracy: both how accurate these models are and the factors that go into the variability?

You are right that it’s a huge “it depends.” When done well, MMMs can be highly accurate, and their lift estimates can be corroborated by other experimental and quasi-experimental results (e.g., geo-holdout tests, randomized controlled trials, go-dark tests, interrupted time series). However, when done poorly, these models can be so inaccurate and misleading that it’s better not to use an MMM at all.

The accuracy of an MMM depends on exactly how the model is specified statistically and how well that specification matches a particular business. Simpler specifications will tend to be more biased (i.e., less accurate) than more complex ones. We like to say that running an MMM is trivially easy, but running a good MMM, one that is actually accurate and can be validated, is incredibly difficult.

Does an MMM need to be holistic to be valuable? Or can you apply MMM to a specific channel?

In general, media mix models only work if you can include all marketing activity in the model. If you attempt to build an MMM with only a subset of marketing channels, you risk over-crediting the channels you did include. For example, TV ads might drive a lot of your sales, but if you don’t include TV in your MMM, the conversions driven by TV could end up being credited to Facebook.

There are other types of models you can build to look at the impact of a change in marketing activity for a single marketing channel (e.g., via an interrupted time series design), but in general, an MMM needs to see all of your marketing activity in order to work.
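
The over-crediting risk is easy to see in a toy simulation. In this sketch (all data and coefficients invented), Facebook spend is correlated with TV spend, and a model that omits TV inflates Facebook’s apparent effect several-fold:

```python
import numpy as np

rng = np.random.default_rng(1)
weeks = 200

# TV and Facebook spend move together (say, a shared promo calendar).
tv = rng.uniform(0, 100, weeks)
facebook = 0.5 * tv + rng.uniform(0, 20, weeks)

# Ground truth: TV drives most sales; Facebook has a modest effect.
sales = 50 + 2.0 * tv + 0.5 * facebook + rng.normal(0, 5, weeks)

def ols(X, y):
    """Least-squares fit with an intercept; returns the coefficients."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

full = ols(np.column_stack([tv, facebook]), sales)
partial = ols(facebook.reshape(-1, 1), sales)

print("Facebook effect, TV included:", round(full[2], 2))    # close to 0.5
print("Facebook effect, TV omitted: ", round(partial[1], 2))  # wildly inflated
```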

How is MMM employed by a brand versus a more traditional, channel-specific attribution output like last-touch? 

The most sophisticated marketers know that they should use the right tool for the job, and that means using different measurement methods in different contexts. Digital tracking methods like MTA or first- and last-touch attribution are valuable but are mostly used for day-to-day (or hour-to-hour) channel management. They’re the tools that individual channel managers use to track, manage, and optimize their channel day in and day out.

However, those same digital tracking methodologies are not generally able to measure incrementality and can’t be relied upon for forecasting or cross-channel budget allocation. So that’s where a tool like MMM comes in: It helps marketing leaders measure marketing impacts more holistically and allows them to allocate budget across different marketing channels because it can compare those channels on an apples-to-apples basis.

So while channel-specific attribution is used operationally, MMM is generally used from a macroscopic impact perspective. Can the output of channel-specific or deterministic attribution help refine MMM models?

Many people want to build hybrid models that combine data from digital tracking methods with a top-down MMM model. I generally think this is a bad idea, or at the very least you want to be very, very careful with how you implement it.

It’s a bad idea because digital tracking methods tend to be inherently biased toward the channels that are (1) at the bottom of the funnel and (2) easiest to track. One of the main reasons to use an MMM is to get an (ideally) unbiased view of how your marketing channels are performing, so you don’t want to pollute your model with biased results from your digital tracking system!

So biased inputs will reduce the model’s effectiveness. What about the reverse? Attribution is shifting with multiple sources of data. Can the MMM output help with attribution data?

Yes! Sophisticated marketing teams do have a process that “triangulates” the results from different measurement methods. For example, they might build a spreadsheet that lines up the results from the MMM with last-touch and first-touch attribution as well as platform metrics in order to get a sense for where the different methods agree (and disagree), which they can then use for operational decision-making.

One easy example of this is creating incrementality “coefficients” that channel managers can use operationally. That might look something like, “We know we need to multiply the in-platform CAC for Facebook by 1.25 to hit our true incremental CAC-LTV payback targets.”
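
In code, that calibration is just a ratio applied to the platform’s numbers. A minimal sketch, with every figure hypothetical:

```python
# The MMM says Facebook's true incremental CAC runs 25% above what the
# ads platform reports (both values assumed for illustration).
mmm_incremental_cac = 31.25  # from the media mix model
platform_cac = 25.00         # from the ads platform UI

coefficient = mmm_incremental_cac / platform_cac  # 1.25

# A channel manager applies the coefficient to day-to-day platform numbers.
todays_platform_cac = 27.40
calibrated_cac = todays_platform_cac * coefficient
print(f"coefficient:    {coefficient:.2f}")
print(f"calibrated CAC: ${calibrated_cac:.2f}")  # $34.25
```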

I’ve seen this too. A macro view from unbiased, holistic reporting can be used to create coefficients that calibrate the granular, more continuously available reporting. Essentially, “We know Facebook reports conversions 25% higher than reality, so we can divide reported ROAS on those campaigns by 1.25 until we test again.”

Thank you for your time, Michael. This was a very enlightening look at how brands are combining these traditionally separate measurement methodologies. We’ll definitely have to do a follow-up webinar. In the meantime, how can folks get in touch with you?

You can follow me on LinkedIn for my writing on marketing effectiveness, and make sure to check out Recast if you want a closer look at a modern MMM platform.

To learn more about Branch’s attribution solutions, request a demo with our team.