Ricardo Vieira
### Description This adds a lot of complexity for little user benefit (most don't know about this functionality in the first place)
### Description https://pytensor.readthedocs.io/en/latest/tutorial/adding.html I think we should leave the types discussion for later and emphasize the laziness until compilation instead. Could also mention the general `tensor` class.
### Description A couple of switches should do the job; there is no need to implement a new Op. https://numpy.org/doc/stable/reference/generated/numpy.nan_to_num.html
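The switch-based approach can be sketched in NumPy, using `np.where` as a stand-in for `pytensor.tensor.switch`; the helper name and its defaults here mirror `numpy.nan_to_num` and are illustrative, not the eventual PyTensor API:

```python
import numpy as np

def nan_to_num(x, nan=0.0, posinf=None, neginf=None):
    """Replace NaN/inf via elementwise switches (np.where ~ pt.switch)."""
    x = np.asarray(x, dtype=float)
    # Defaults follow numpy.nan_to_num: clamp infinities to the dtype's
    # largest/smallest finite value unless explicit replacements are given.
    info = np.finfo(x.dtype)
    posinf = info.max if posinf is None else posinf
    neginf = info.min if neginf is None else neginf
    x = np.where(np.isnan(x), nan, x)      # switch 1: NaN  -> nan
    x = np.where(x == np.inf, posinf, x)   # switch 2: +inf -> posinf
    x = np.where(x == -np.inf, neginf, x)  # switch 3: -inf -> neginf
    return x
```

Since `pt.switch`, `pt.isnan`, and `pt.isinf` already exist, the PyTensor version would be the same three switches applied to symbolic tensors.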
### Description Besides being way way faster, it would allow us to get rid of `setup.cfg` which AFAICT exists only because flake8 does not support `pyproject.toml`: https://github.com/PyCQA/flake8/issues/234. See #295 https://github.com/pymc-devs/pytensor/blob/main/setup.cfg
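If the flake8 rules move to ruff, the linting section of `setup.cfg` could collapse into `pyproject.toml` along these lines; a hedged sketch, since the actual rule codes carried over would come from the existing `setup.cfg`:

```toml
[tool.ruff]
line-length = 88

[tool.ruff.lint]
select = ["C", "E", "F", "W"]
ignore = ["E501"]  # example only: carry over whichever codes setup.cfg ignores
```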
### Description The CI dependencies are completely dissociated from the conda pytensor-dev environment specified by `environment.yml`. This led to needing a separate commit in #448 that should have gone into...
### Description This blogpost walks through the logic for 3 different examples: https://www.pymc-labs.com/blog-posts/jax-functions-in-pymc-3-quick-examples/ and shows that the logic is always the same: 1. Wrap the jitted forward pass in an Op 2. Wrap...
### Description This Composite `Op` computes both `Max` and `Argmax` and is returned by default when you call `at.max(...)`. This makes graphs unnecessarily complex from the get-go (see...
I wonder whether it would be possible to rewrite the logp graphs to marginalize over finite discrete variables, indicated by the user (not necessarily all that are in the graph)....
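For a single finite discrete variable, the rewrite would amount to replacing its logp contribution with a logsumexp over its support. A minimal pure-Python sketch for a two-component Bernoulli mixture over normals (all names and numbers here are illustrative, not the proposed API):

```python
import math

def logsumexp(vals):
    """Numerically stable log(sum(exp(v) for v in vals))."""
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def normal_logpdf(y, mu, sigma=1.0):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (y - mu) ** 2 / (2 * sigma**2)

def marginal_logp(y, p=0.3, mus=(0.0, 5.0)):
    # Marginalize z ~ Bernoulli(p) out of the joint:
    #   logp(y) = logsumexp_z [ logp(z) + logp(y | z) ]
    terms = [
        math.log(1 - p) + normal_logpdf(y, mus[0]),  # z = 0 branch
        math.log(p) + normal_logpdf(y, mus[1]),      # z = 1 branch
    ]
    return logsumexp(terms)
```

A graph rewrite would do the same mechanically: enumerate the support of each user-indicated discrete variable, clone the downstream logp graph once per value, and combine the clones with a logsumexp.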
```python
import aesara
import aesara.tensor as at
import aeppl
from aeppl.transforms import TransformValuesRewrite, LogTransform

x_rv = at.random.exponential()
opt = TransformValuesRewrite({x_rv: LogTransform()})
logp, (x_vv,) = aeppl.joint_logprob(x_rv, extra_rewrites=opt)
logp_fn = aesara.function([x_vv], logp, mode="FAST_COMPILE")
aesara.dprint(logp_fn)
```
...