Adstock is often treated as a technical setting—the “carryover” of advertising from this week into the next.
In practice, it’s a business claim. It can change whether a channel clears your ROI hurdle rate—or fails it—without any change in actual performance.
It encodes a timing story: how long influence persists, whether impact builds before it fades, and where the peak response is allowed to show up. An MMM can look validated simply because that story fits the past.
The risk is that the story can fit—and still be wrong for your business.
Adstock is a story about timing
If you run a high-profile TV spot—say a Super Bowl creative—almost nobody expects the full effect to show up instantly. Some people notice it, some remember it later, some start searching in the following days, and some purchase weeks after exposure.
Adstock is your model’s way of deciding when impact is allowed to appear.
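The mechanics can be sketched in a few lines. This is the simplest common form, geometric decay, where a fixed fraction of each week's effect carries into the next week (the function name and `decay` parameter are illustrative, not from any specific library):

```python
import numpy as np

def geometric_adstock(spend, decay):
    """Carry a fraction `decay` of each week's effect into the next week."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

# One burst of spend in week 0; decay=0.5 means half the effect persists each week.
spend = np.array([100.0, 0.0, 0.0, 0.0])
print(geometric_adstock(spend, 0.5))  # effect fades 100 -> 50 -> 25 -> 12.5
```

Note what this shape assumes: the peak is always in the exposure week, and the effect can only fade afterward. That is itself a timing claim.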
That timing depends on the business.
The purchase cycle changes what shapes you should allow
For expensive, high-consideration products, the peak effect often isn’t immediate.
If you sell vehicles, you’re not selling a $60,000 truck the moment someone sees a cool commercial. The near-term outcome is usually attention, consideration, search behavior, dealership visits—then purchase later.
In practice, it’s common to see peak impact six to eight weeks after a campaign launches, not the same week. For businesses like that, a delayed-peak adstock isn’t a modeling flourish. It’s closer to how the decision process works.
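A delayed-peak shape can be sketched as a set of lag weights that build before they fade. This is one common parameterization (similar in spirit to the delayed-geometric form used in some Bayesian MMM work); the parameter names and values are illustrative:

```python
import numpy as np

def delayed_adstock_weights(decay, theta, max_lag):
    """Lag weights that peak `theta` weeks after exposure, then fade.

    One common parameterization: decay ** (lag - theta)**2. Others exist.
    """
    lags = np.arange(max_lag + 1)
    w = decay ** ((lags - theta) ** 2)
    return w / w.sum()  # normalize so the weights sum to 1

# Vehicle-like channel: peak response roughly six weeks after the exposure week.
w = delayed_adstock_weights(decay=0.9, theta=6, max_lag=12)
print(np.argmax(w))  # 6: the largest weight sits at lag 6, not lag 0
```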
Now compare that to something like pizza.
If a pizza ad works, it tends to work now. People see it, they order, maybe they use a promotion the same day. Weeks later, it’s far less likely someone says, “I remember that pizza ad from three weeks ago—let’s order tonight.”
The broader point isn’t vehicles versus pizza. It’s this:
The more expensive, risky, or complex the product, the more time you should expect between exposure and purchase.
If your adstock assumptions don’t reflect that, the model can still fit. But it will be fitting a timing story your business doesn’t have.
Channel intent changes what “carryover” means
The same product can have different adstock dynamics depending on the channel.
Some channels are designed to work late in the journey. Point-of-purchase display is one example: the impression shows up at or near the decision moment, often on a retail site or in an environment where the user is already shopping. In that context, most of the response should happen immediately or within a short window.
Other channels are used earlier in the journey. TV and radio are often used for brand awareness, demand creation, or shifting perception. In that role, exposure does not translate into immediate purchase. The effect may show up later—through search, store visits, or conversions weeks after the creative ran.
This isn’t a critique of any channel. It’s a reminder that impact includes when it shows up—not just whether it exists.
A common failure mode: measuring the wrong quarter
Adstock matters because ROI is often calculated on the wrong timeline.
I’ve seen a case where a CMO wanted to demonstrate a simple claim: more advertising leads to more sales. Spend increased in Q4. The initial read looked disappointing—because the analysis stayed inside the same quarter and ignored carryover into the next one.
But a meaningful share of the response arrived in Q1, driven by the lag between exposure and purchase.
When carryover was included, the conclusion flipped. The measurement window—not the campaign—was the problem.
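With illustrative numbers (not from the actual case), the flip is easy to reproduce:

```python
# Illustrative numbers only: how the measurement window changes the verdict.
spend_q4 = 1_000_000
incremental_sales_q4 = 900_000   # response landing inside Q4
incremental_sales_q1 = 500_000   # lagged response landing in Q1

roi_same_quarter = incremental_sales_q4 / spend_q4
roi_with_carryover = (incremental_sales_q4 + incremental_sales_q1) / spend_q4

hurdle = 1.0
print(roi_same_quarter >= hurdle)    # False: "wasteful" on the Q4-only read
print(roi_with_carryover >= hurdle)  # True: profitable once carryover counts
```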
Adstock isn’t a nice-to-have. It can change whether spend is labeled profitable or wasteful.
For finance, this is a governance issue. If ROI depends on assumed carryover timing, then a model can meet a hurdle rate by assumption rather than by evidence. The risk isn’t simply “the model is wrong.” The risk is that budgets are reallocated with confidence the data did not actually earn.
Another failure mode: “fancier” adstock inflates ROI
The opposite problem can happen too.
Teams sometimes reach for delayed-peak adstock because it feels more sophisticated. It can look realistic. It can even improve fit.
But for channels built around immediate capture—like point-of-purchase display—a delayed carryover story is often not warranted. These impressions are reaching people at the decision moment, not weeks earlier. Most of the response should occur immediately or within a very short window.
If you apply a delayed-peak structure anyway, you can manufacture a longer tail of attributed impact. The model is then allowed to “find” conversions weeks later and assign them back to point-of-purchase display.
The practical effect is simple: ROI gets inflated without a defensible behavioral story.
And because the curve looks nuanced, it often escapes scrutiny.
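A quick sketch of how much tail each shape manufactures, using illustrative weight curves (an immediate geometric decay versus a delayed peak, both normalized):

```python
import numpy as np

lags = np.arange(13)

# Immediate decay: most of the response lands in the exposure week.
w_immediate = 0.3 ** lags
w_immediate /= w_immediate.sum()

# A delayed peak at week 4, applied to the same channel.
w_delayed = 0.8 ** ((lags - 4) ** 2)
w_delayed /= w_delayed.sum()

def tail_share(w):
    """Share of attributed impact arriving in week 2 or later."""
    return w[2:].sum()

print(round(tail_share(w_immediate), 2))  # 0.09: almost everything is near-term
print(round(tail_share(w_delayed), 2))    # 0.96: nearly all impact pushed weeks out
```

Same spend, same sales data; the shape alone decides how much credit gets assigned to later weeks.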
The bigger failure mode: letting the model choose adstock by fit
Here is the most important point:
If you give a flexible MMM the freedom to choose among many adstock shapes, peaks, and half-lives, it will pick the one that best reconstructs history.
That sounds empirical. But it’s not the same as matching customer behavior.
In most MMM contexts there isn’t enough clean variation in the data to reliably learn the carryover shape. You have a time series with confounding, marketing simultaneity, seasonality, and business changes.
So when you “try a bunch of adstocks and keep the one with the best fit,” you’re not selecting truth. You’re selecting the timing story most compatible with everything else you allowed the model to do.
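A sketch of that selection process on synthetic data, assuming the simple grid-search-by-fit workflow described above (the data-generating parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def adstock(spend, decay):
    out, carry = np.zeros_like(spend), 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

# Two years of weekly spend; sales generated with a "true" decay of 0.3 plus noise.
spend = rng.uniform(0, 100, size=104)
sales = 2.0 * adstock(spend, 0.3) + rng.normal(0, 60, size=104)

# "Try a bunch of adstocks and keep the one with the best fit."
results = {}
for decay in (0.1, 0.3, 0.5, 0.7):
    x = adstock(spend, decay)
    beta = (x @ sales) / (x @ x)           # one-parameter least squares
    results[decay] = ((sales - beta * x) ** 2).sum()
    print(decay, round(results[decay]))
```

On noisy data like this, residual error often separates the candidates only weakly, which is exactly why the "winning" decay can drift from refit to refit.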
That is why adstock assumptions can drift across refits—and why a model can look stable while being structurally wrong.
What to demand before you trust the ROI
Adstock shouldn’t be outsourced to the model as a convenience.
It should be treated as a claim you are willing to defend.
A safer posture is to demand that each channel’s adstock can be justified in plain business terms:
- How long could influence plausibly persist in this category?
- Should the peak be immediate or delayed given the purchase cycle?
- What is the channel actually used for in this organization?
- If the model implies a long tail, what would have to be true operationally for that to make sense?
You can encode those beliefs as informed priors or constrained ranges—tight enough to stop the model from inventing an implausible timing story just because it improves fit.
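One lightweight way to write those beliefs down — the channel names, bounds, and function are all hypothetical — is an agreed table of per-channel ranges that fitted values are checked against:

```python
# Hypothetical per-channel bounds, stated in business terms before fitting.
adstock_bounds = {
    #               (min_decay, max_decay, min_peak_wk, max_peak_wk)
    "tv":           (0.4, 0.8, 0, 4),  # brand channel: carryover allowed, peak within a month
    "pop_display":  (0.0, 0.2, 0, 0),  # point-of-purchase: immediate, no delayed peak
    "paid_search":  (0.0, 0.3, 0, 1),
}

def check_fitted(channel, decay, peak_week):
    """Flag fitted adstock parameters that leave the agreed range."""
    lo_d, hi_d, lo_p, hi_p = adstock_bounds[channel]
    ok = lo_d <= decay <= hi_d and lo_p <= peak_week <= hi_p
    if not ok:
        print(f"{channel}: fitted adstock outside agreed range; investigate")
    return ok

print(check_fitted("pop_display", decay=0.6, peak_week=3))  # False
```

In a Bayesian MMM the same table would become priors or hard parameter bounds; the point is that the ranges are written down and defended before the model sees the data.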
And if the model pushes hard against your assumptions, that’s not an automatic win. It’s a diagnostic. Either the assumptions are wrong, or the model is using adstock to compensate for something else it can’t explain.
Experiments constrain what can be true
If you have experimental evidence—especially geo tests—use it to bound the carryover story.
When an MMM’s implied persistence conflicts with what experiments support, that isn’t a technical disagreement. It’s decision risk.
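A minimal sanity check, with hypothetical numbers: compare the lift the MMM implies inside the experiment's readout window against what the geo test measured.

```python
import numpy as np

# Hypothetical: a geo test measured lift over an 8-week readout window.
geo_lift = 100_000                       # incremental sales from the experiment
model_weekly_lift = np.array([40_000, 30_000, 20_000, 15_000, 10_000,
                              8_000, 6_000, 5_000, 4_000, 3_000, 2_500, 2_000])

in_window = model_weekly_lift[:8].sum()  # what the MMM implies inside the window
ratio = in_window / geo_lift
print(round(ratio, 2))  # 1.34: the model claims more in-window lift than the test supports
```

A ratio well above 1 means the model's timing story is loading impact into a window where the experiment says it didn't arrive — a concrete signal to tighten the adstock assumptions.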
The uncomfortable ending
Adstock is one of the easiest places for MMM to become persuasive without being trustworthy.
Because adstock shapes are flexible, and historical fit is a weak referee, the model can “explain” the past using timing assumptions that don’t match how your customers buy.
So the question isn’t whether the curve fits.
The question is whether the timing story it implies is something your business could plausibly have.
The decision happens on schedule. The validation arrives later, if it arrives at all.