Lift tests are a great way to calibrate your MMM – but they have to be done well. You want your tests to give the MMM more information about the ground truth, helping the model settle on the set of plausible parameters that are consistent with the results you got from the test.

This can be tricky because the vast majority of MMMs assume that marketing performance doesn't change over time. If you have two lift tests with inconsistent results from one channel – which is very common, because channel performance does change over time – it's not clear which of those tests you should use when calibrating your MMM.

What we do at Recast: since we have a Bayesian time-series model, we're estimating the incrementality of every marketing channel for every day. That lines up really well with the way lift tests work, because we can incorporate those results directly into the Bayesian statistical model by putting priors on the performance of that channel, but only at the time when the lift test was run. We're treating the evidence correctly by considering it a snapshot in time, not applying it to how that channel has performed over all of history.

MMM and conversion lift studies don't operate in silos, and we highly recommend using them together to get a clearer picture.
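As a rough sketch of how a lift test can act as a snapshot-in-time constraint (an illustration, not Recast's actual model), here is a minimal PyMC example: a channel's daily coefficient follows a random walk, and the lift-test result constrains the model-implied incremental conversions only during the window when the test ran. The toy data, priors, and variable names are all assumptions.

```python
# A minimal sketch (not Recast's actual model) of calibrating a Bayesian MMM
# with a lift test. Toy data, priors, and variable names are illustrative.
import numpy as np
import pymc as pm
import pytensor.tensor as pt

n_days = 200
rng = np.random.default_rng(0)
spend = rng.gamma(2.0, 50.0, size=n_days)                 # daily spend for one channel
y = 300 + 0.8 * spend + rng.normal(0, 20, n_days)          # observed daily conversions

# Lift test ran on days 120-150 and measured ~1,200 incremental conversions (SE ~200).
test_days = slice(120, 150)
lift_estimate, lift_se = 1200.0, 200.0

with pm.Model() as mmm:
    intercept = pm.Normal("intercept", mu=300, sigma=100)

    # Time-varying incrementality: a random walk lets channel performance drift,
    # so there is a separate effectiveness estimate for every day.
    innovations = pm.Normal("innovations", mu=0.0, sigma=0.05, shape=n_days)
    beta = pm.Deterministic("beta", 0.5 + pt.cumsum(innovations))

    mu = intercept + beta * spend
    sigma = pm.HalfNormal("sigma", sigma=25)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=y)

    # Calibration: the model-implied incremental conversions during the test
    # window should agree with what the lift test measured -- and only during
    # that window, so the evidence stays a snapshot in time.
    implied_lift = pt.sum(beta[test_days] * spend[test_days])
    pm.Potential("lift_test",
                 pm.logp(pm.Normal.dist(mu=lift_estimate, sigma=lift_se), implied_lift))

    idata = pm.sample()
```

Outside the test window the coefficient is free to drift, so older and newer periods aren't forced to match a result that only applies to one moment in time.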
-
What if the noise in your Marketing Mix Model held the most valuable insights? Dive into how Bayesian models can unlock the secrets hidden in long tails, revealing actionable data beyond averages. Javier Marin explores the untapped potential. #DataScience #DataAnalysis
-
This is an excellent post about model fixing. Results are often presented as "this is what the model recommended." In reality, however, the data scientist was forced to find a model that supports a predetermined conclusion. This is why a multi-model approach is necessary. Don't just present one model's results. Instead, present a leaderboard of models and focus on their predictive accuracy. #mmm #doMMMright
We use marketing mix models (MMM) or multi-touch attribution (MTA) to get valuable guidance for our media planning and budgeting. But when are we really 'data-driven' versus just creating the results we want to see?

Many market solutions, particularly for MMM, only 'work' if we fix the model from the beginning. This happens through the use of strong priors (in so-called Bayesian models) or other constrained optimisation. In other words, our data-driven solution is not really data-driven anymore.

Sometimes, small fixes can be useful and sensible. We can learn from existing theory and past studies, particularly when data is limited. But everything needs to be in balance. In other words:
1️⃣ How useful is a model that may 'go off track' unless there is a lot of manual adjustment?
2️⃣ If the model can only move in one direction, is the outcome already predetermined?

Always ask your MMM provider, whether a vendor or in-house team, how much they intervened and set certain values to arrive at the results. Request more than one model (varying the number of 'fixed' parameters) to see whether the results hold. And if you're completely uncertain about a finding, test and confirm it with an experiment.
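One way to act on the "more than one model" request, sketched below under simplified assumptions: fit a few variants that differ in which parameters are fixed versus estimated, then rank them on holdout accuracy rather than on how agreeable their story is. The toy data and the plain least-squares models are stand-ins for a real MMM.

```python
# A hedged sketch of a model "leaderboard": several variants differing in how
# much is fixed up front, ranked by out-of-sample error. Toy data only.
import numpy as np

rng = np.random.default_rng(1)
n = 365
spend = rng.gamma(2.0, 40.0, size=(n, 2))                       # two channels
y = 200 + spend @ np.array([0.6, 0.2]) + rng.normal(0, 15, n)   # daily revenue

def fit_free(X, y):
    """Estimate all coefficients from the data (ordinary least squares)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return lambda Xnew: np.column_stack([np.ones(len(Xnew)), Xnew]) @ coef

def fit_fixed_channel2(X, y, fixed_beta2=1.0):
    """Pretend we 'know' channel 2's coefficient and only fit the rest."""
    resid = y - fixed_beta2 * X[:, 1]
    X1 = np.column_stack([np.ones(len(X)), X[:, 0]])
    coef, *_ = np.linalg.lstsq(X1, resid, rcond=None)
    return lambda Xnew: (np.column_stack([np.ones(len(Xnew)), Xnew[:, 0]]) @ coef
                         + fixed_beta2 * Xnew[:, 1])

def holdout_mape(fit, X, y, holdout=90):
    """Train on everything except the last `holdout` days, score on those days."""
    model = fit(X[:-holdout], y[:-holdout])
    pred = model(X[-holdout:])
    return float(np.mean(np.abs((y[-holdout:] - pred) / y[-holdout:])))

leaderboard = sorted({
    "all_parameters_free": holdout_mape(fit_free, spend, y),
    "channel_2_fixed":     holdout_mape(fit_fixed_channel2, spend, y),
}.items(), key=lambda kv: kv[1])

for name, err in leaderboard:
    print(f"{name:22s} holdout MAPE = {err:.1%}")
```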
-
It's interesting how marketers understand how saturation curves work in the real world, but sometimes media mix model builders really do not. And if you don't get them right, your model just won't work and it will vastly misguide how you allocate your budget.

One thing that's different about the Recast approach is that we don't just assume that there is a curve and that it has some specific shape. Some MMMs require that an analyst chooses a curve for every marketing channel, assumes it's true, transforms the data, runs it through a linear regression, and gets results on the other side. The problem is that those results are super sensitive to the exact curve chosen. It's an easy way for the analyst to put their thumb on the scale and make certain channels look better than others just by "assuming" the channel's saturation curve.

Recast works differently. Under the hood, we're a fully Bayesian statistical model. That means the model starts with ranges for all the different shapes the curve can take, and then the model itself is effectively a simulation engine – Recast runs millions of simulations to figure out which curves best fit the brand's data instead of just using intuition or "benchmarks". The result of that simulation process is what you see in the Recast platform. You'll see a range of curves, some with less uncertainty (depending on how tightly correlated the data is), and others with more.

Once you have these saturation curves, it's relatively straightforward to start thinking about how to adjust your marketing spend for optimal performance.
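To make the contrast concrete, here is a small illustrative sketch (not Recast's code) using the Hill function, a common saturation form in MMM: the "fixed curve" approach commits to one pair of shape parameters up front, while the Bayesian approach starts from prior ranges over those parameters and lets fitting narrow the resulting family of curves. All numbers here are assumed for illustration.

```python
# Fixed saturation curve vs. a prior family of curves, as a toy illustration.
import numpy as np

def hill(spend, half_saturation, shape):
    """Hill saturation curve: response rises toward 1 as spend grows."""
    return spend**shape / (half_saturation**shape + spend**shape)

spend_grid = np.linspace(0, 10_000, 200)

# "Fixed curve" approach: the analyst picks one (half_saturation, shape) pair,
# transforms the spend data with it, and every downstream result inherits that choice.
fixed_curve = hill(spend_grid, half_saturation=3_000, shape=1.5)

# Bayesian approach: put ranges (priors) on the shape parameters and let the
# model weigh every curve in that range against the data. Here we just draw
# from the priors to show the family of curves the model starts from; fitting
# would then concentrate these draws around the curves that explain the data.
rng = np.random.default_rng(7)
prior_half_sat = rng.lognormal(mean=np.log(3_000), sigma=0.5, size=500)
prior_shape = rng.lognormal(mean=np.log(1.5), sigma=0.3, size=500)
curve_draws = np.stack([hill(spend_grid, k, s)
                        for k, s in zip(prior_half_sat, prior_shape)])

# Uncertainty band over curves: wide before fitting, narrower after, depending
# on how informative the brand's spend variation is.
low, high = np.percentile(curve_draws, [5, 95], axis=0)
print(f"at ~$5k spend the prior 90% band on saturation is [{low[100]:.2f}, {high[100]:.2f}]")
```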
-
This assumes that there is a "truth" to discover... I wrote a little about that here: https://lnkd.in/gqzVDxsA The problem with a lot of these methods is that they are parameterising, ever more precisely, a model (i.e., a linear equation) that bears no relation to the underlying process. Just because you can get the maths to work doesn't make the model correct. And there's no reference, so you can't be sure anyway. I agree that companies should run tests. But also bear in mind that the underlying process (the real world) is constantly changing, so a test will never give you exactly what you predict, nor will the result of the test perfectly predict what will happen next time you do it. Being a bit more open about the uncertainty and the constant change would actually be liberating, because it returns the objective to being just a bit better, with a bit more understanding, rather than obsessing over the third decimal place of a number that's not "right" and that has already changed when you weren't looking.
-
Media mix modeling (MMM) should ideally be both a source of answers and questions. 1) As a source of answers: MMM should provide your marketing team with insights into what's likely to happen in the future based on all the marketing data you have, including current spend patterns, performance data from various channels, and the results of past experiments. 2) As a source of questions: MMM should drive you to think about how you can improve your understanding of marketing performance. If the model shows uncertainty about a particular channel, it should suggest you run an experiment to validate your assumptions and gain more certainty. The most sophisticated organizations see MMM and experimentation as a whole unified system, not as two separate things.
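As a toy example of the "source of questions" side, the snippet below ranks channels by the width of their posterior ROI intervals and flags the widest ones as candidates for an experiment. The interval values are made-up placeholders, not output from any real model.

```python
# Turn posterior uncertainty into an experiment queue: the wider a channel's
# ROI interval, the more a lift test would teach the model. Placeholder numbers.
channel_roi_posteriors = {           # (5th percentile, 95th percentile) of ROI
    "paid_search": (1.8, 2.4),
    "paid_social": (0.4, 3.1),
    "tv":          (0.2, 4.5),
    "email":       (2.9, 3.4),
}

def interval_width(bounds):
    low, high = bounds
    return high - low

to_test = sorted(channel_roi_posteriors.items(),
                 key=lambda kv: interval_width(kv[1]), reverse=True)

for channel, (low, high) in to_test:
    flag = "<- candidate for a lift test" if high - low > 2.0 else ""
    print(f"{channel:12s} ROI 90% interval [{low:.1f}, {high:.1f}] {flag}")
```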
-
What gives media mix modelers a constant headache? Handling carry-over effects. Here's how Recast handles it:

When you build an MMM, you have to choose a start date (it doesn't matter how many years you go back, you always have to start somewhere!). This means there will always be marketing activity from some period before the start date impacting your results after the start date.

What happens if this isn't addressed? The beginning of your data set will have an omitted-variable problem, and the parameters that lead to a good fit in the rest of your data will cause a bad fit at the beginning. This can mislead your model.

At Recast, we handle this with a technique we call "burn in". Technically, this is implemented by excluding the first 60 days from the Bayesian likelihood while still allowing marketing activity from those first 60 days to impact results after that period. So, although the model is able to see the spend and the revenue from this period, the parameter estimates aren't shaped by the fit during those first 60 days. However, conversions after those first 60 days are still impacted by spend during the first 60 days, so it's not like we've just moved the problem forward. This stops the model from over-fitting to conversions or revenue at the beginning of the time period, which is influenced by things it can't observe.

If you're considering working with an MMM vendor, it's critical to ask them how they handle carry-over effects. If you want to learn more about Recast, check us out here: https://lnkd.in/e7BKrBf4
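A simplified sketch of the burn-in idea under toy assumptions (this is not Recast's implementation): a geometric adstock is computed over the full history, but only days after the first 60 enter the likelihood, so spend from the burn-in window still carries over into later days without the early fit shaping the parameters.

```python
# Burn-in sketch: carry-over is computed for every day, but the first 60 days
# are excluded from the likelihood. Toy data and a geometric adstock assumed.
import numpy as np
import pymc as pm
import pytensor.tensor as pt

n_days, burn_in, max_lag = 365, 60, 30
rng = np.random.default_rng(3)
spend = rng.gamma(2.0, 50.0, size=n_days)
y = 250 + 0.7 * spend + rng.normal(0, 20, n_days)      # toy daily conversions

# Lag matrix: row t holds spend on days t, t-1, ..., t-(max_lag-1), so that
# carry-over from before any cutoff still reaches later days.
padded = np.concatenate([np.zeros(max_lag - 1), spend])
lags = np.stack([padded[t:t + max_lag][::-1] for t in range(n_days)])

with pm.Model() as mmm:
    decay = pm.Beta("decay", 2, 2)                      # geometric adstock rate
    weights = decay ** pt.arange(max_lag)
    adstocked = pt.dot(lags, weights)                   # computed for ALL days

    intercept = pm.Normal("intercept", 250, 100)
    beta = pm.HalfNormal("beta", 1.0)
    sigma = pm.HalfNormal("sigma", 25)
    mu = intercept + beta * adstocked

    # Burn-in: only days after the first 60 enter the likelihood, but their
    # adstock terms still include spend from the excluded days.
    pm.Normal("obs", mu=mu[burn_in:], sigma=sigma, observed=y[burn_in:])

    idata = pm.sample()
```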
-
"We've built a marketing mix model but we have no idea if we can trust it or not." I hear this more often than you'd expect. It's a scary problem, but it’s a question that everyone using MMM should ask themselves. So, how do you know if you can trust the model’s results? One of the biggest problems in marketing mix modeling is overfitting. This happens because MMM models are so powerful that they can capture the noise along with the signal in the historical data it's trained on. It looks like it's picked up true causality on the training data, but it doesn't work when you feed it new, unseen data. Here's one of the ways we test for this: With backtesting, we train the model using historical data up to a point, and then we challenge it to predict the next three months, a period it hasn’t encountered. If the model predicts the performance in the holdout period, we have more evidence that the model has identified the underlying causal relationships. And that's how we can test if the model can accurately predict the future or if it might be overfitting. PS. This is a super high-level post. If you want more tactical details on this, I made a YouTube video where I go more in-depth — the link is in the comments. And if you want to learn more about Recast's model, check us out here: https://lnkd.in/e7BKrBf4
-
💡 Dive into our latest white paper on the 𝟯 𝗲𝘀𝘀𝗲𝗻𝘁𝗶𝗮𝗹 𝘁𝗿𝗲𝗻𝗱𝘀 in Marketing Mix Modeling that are making it an indispensable tool for marketers in 2024!

🌐 𝗛𝗼𝗹𝗶𝘀𝘁𝗶𝗰 𝗠𝗠𝗠 – As marketing channels diversify and complexities increase, understanding the full spectrum of marketing activities is crucial. Holistic MMM offers a comprehensive view, enabling marketers to optimize strategies across all touchpoints.

🔍 𝗕𝗮𝘆𝗲𝘀𝗶𝗮𝗻 𝗠𝗠𝗠 – Embrace the power of Bayesian MMM, which excels over traditional machine learning models by integrating prior knowledge and uncertainty, providing more accurate predictions and robust results, especially with limited data.

🛠️ 𝗦𝗲𝗹𝗳-𝗦𝗲𝗿𝘃𝗶𝗰𝗲 𝗠𝗠𝗠 – Discover the convenience and cost-effectiveness of self-service MMM platforms. These tools empower marketers to conduct their analyses, facilitating real-time decision-making and enhancing internal data intelligence.

Want to discover more about this topic? Contact us to receive the full white paper! 📝
-
I often get asked "how do you know marketing is working?" My response is "are people looking at you?" There are two types of statistics I look at when evaluating a marketing campaign. Descriptive statistics help to present raw data in a form that is easy to understand, providing a clear picture of the dataset. Inferential statistics, on the other hand, goes beyond mere data description. It involves making inferences for hypothesis testing, making predictions, and generalizing findings. The question of whether descriptive or inferential statistics is better is not straightforward. It depends on the context and your goals. In regard to your marketing, both branches are crucial and complement each other. As a marketer (and broadcaster in a previous life), I can spin the stats, but there is always the most important question "Are people looking at you?"