
Logp inference fails for exponentiated distributions

Open jessegrabowski opened this issue 7 months ago • 7 comments

Description

Suppose I want to make a log-normal "by hand":

import pymc as pm
import pytensor.tensor as pt

def d(mu, sigma, size=None):
    # build a LogNormal "by hand" by exponentiating a Normal
    return pt.exp(pm.Normal.dist(mu=mu, sigma=sigma, size=size))

with pm.Model() as m:
    y = pm.CustomDist('y', 0, 1, dist=d, observed=[0., 1.])

m.logp().eval() # array(nan)

This should return -inf, since 0 is outside the support of the exponentiated distribution (exp of a Normal is strictly positive). The same is true for any other support-constraining transformation.
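For comparison, here is a minimal sketch (the model name `ref` is mine) of how the built-in LogNormal handles the same observation:

import pymc as pm

with pm.Model() as ref:
    y = pm.LogNormal('y', mu=0, sigma=1, observed=[0., 1.])

ref.logp().eval()  # -inf, as expected: 0 is outside the support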

A non-trivial use case is a mixture model with LogNormal and DiracDelta(0) components. That works fine, but if I want to explore a more fat-tailed distribution for the nonzero component (like a "LogStudentT" built by exponentiating a StudentT), it fails. A sketch of that setup follows.
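The following is a hedged sketch of that mixture; the `log_student_t` helper and all names are illustrative assumptions, not part of PyMC's API:

import numpy as np
import pymc as pm
import pytensor.tensor as pt

def log_student_t(nu, mu, sigma, size=None):
    # "LogStudentT" by hand: exponentiate a StudentT, analogous to d() above
    return pt.exp(pm.StudentT.dist(nu=nu, mu=mu, sigma=sigma, size=size))

with pm.Model() as mix:
    w = pm.Dirichlet('w', a=np.ones(2))
    components = [
        pm.DiracDelta.dist(0.),                           # point mass at zero
        pm.CustomDist.dist(5, 0, 1, dist=log_student_t),  # fat-tailed positive part
    ]
    # Per the issue, logp inference for the exponentiated CustomDist component
    # fails here, while a plain pm.LogNormal component works.
    y = pm.Mixture('y', w=w, comp_dists=components, observed=[0., 1., 2.5])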

jessegrabowski · Mar 10 '25