
Allow user to express prior belief about total contribution of a given media channel


In MMMs we have many parameters. Here, let's focus on the parameters associated with the saturation function and channel weighting. Consider a generic saturation function: the contribution of channel $c$ at time $t$ is given by

$$ \text{contribution}_{c,t} = \beta_c \cdot f(x_{c,t}, \theta_c) $$

where $\beta_c$ is the channel weighting and $\theta_c$ is a parameter (or vector of parameters) of the saturation function $f$.
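For concreteness, here is a minimal sketch of that parameterisation for a single channel, using one common logistic saturation form. The data and variable names are purely illustrative and are not pymc-marketing's actual internals:

```python
import numpy as np
import pymc as pm

x = np.linspace(0, 1, 52)  # weekly spend for one channel (illustrative data)

with pm.Model():
    beta = pm.HalfNormal("beta", sigma=2)    # channel weighting beta_c
    lam = pm.Gamma("lam", alpha=3, beta=1)   # saturation parameter theta_c
    # f(x, lam) = (1 - exp(-lam * x)) / (1 + exp(-lam * x))
    saturation = (1 - pm.math.exp(-lam * x)) / (1 + pm.math.exp(-lam * x))
    pm.Deterministic("contribution", beta * saturation)
```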

We might have business intelligence that tells us about a plausible range of a channel's contribution, on average, over a period of time. However, at the moment this business knowledge can only be encoded by placing priors over $\beta$ and $\theta$, and in many cases it is not easy to express business knowledge in terms of these parameters.

So my proposal is to allow the user to express prior knowledge over the total contribution of a channel.

We could implement this by adding another likelihood term as follows:

```python
# Calculate the total contribution of the channel over the modelled period
contribution = pm.math.sum(beta * logistic(x, lam))  # `lam` avoids the reserved word `lambda`
# Encode the prior belief about the total contribution as an additional likelihood term
pm.Normal("contribution_belief", mu=0.2, sigma=0.1, observed=contribution)
```

where mu=0.2 is your prior belief about that channel's total contribution to sales, and sigma=0.1 expresses how confident you are in that belief.
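Note that in current PyMC, `observed` values cannot depend on other random variables, so in practice this kind of soft constraint would probably be written with `pm.Potential` instead. The sketch below assumes that route; the data, the logistic saturation, and the choice to express the belief as the channel's share of total sales are all illustrative assumptions, not part of the original proposal:

```python
import numpy as np
import pymc as pm

x = np.linspace(0, 1, 104)                  # spend for one channel (illustrative)
y = np.random.default_rng(0).random(104)    # scaled target, e.g. sales (illustrative)

with pm.Model() as model:
    # Relatively uninformative priors in parameter space
    beta = pm.HalfNormal("beta", sigma=2)
    lam = pm.Gamma("lam", alpha=3, beta=1)

    # Channel contribution via a logistic saturation
    saturation = (1 - pm.math.exp(-lam * x)) / (1 + pm.math.exp(-lam * x))
    contribution = pm.Deterministic("contribution", beta * saturation)

    # Soft constraint: prior belief that the channel accounts for ~20% of total sales
    share = pm.math.sum(contribution) / pm.math.sum(y)
    pm.Potential(
        "contribution_belief",
        pm.logp(pm.Normal.dist(mu=0.2, sigma=0.1), share),
    )

    # Usual observation likelihood for the target
    sigma_y = pm.HalfNormal("sigma_y", sigma=1)
    pm.Normal("y_obs", mu=contribution, sigma=sigma_y, observed=y)

    idata = pm.sample()
```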

The idea is that the user could still provide prior knowledge on $\beta$ and $\theta$, but if they really couldn't express their knowledge in that parameter space, then these could be left as relatively uninformative priors, at least at the start of an iterative process.

I don't yet have a suggested API, but I think this could be a really nice addition. Allowing users to express priors that are closer to business intelligence could be a big win.


After I proposed this issue, it was pointed out that it is similar to the more loosely specified #939, and also to #1038.

drbenvincent · Sep 23 '24