Jesse Grabowski

Results: 42 issues by Jesse Grabowski

This PR provides numba overloads for functions in the scipy linalg library, including `linalg.schur`, `linalg.qz`, `linalg.ordqz`, `linalg.solve_discrete_lyapunov`, and `linalg.solve_continuous_lyapunov`. The implementations are modeled on the `numba.np.linalg` implementations, although without special cython...

Following our discussions in #1015 and #1011, this PR adds Ops to `aesara.tensor.slinalg` that wrap `scipy.linalg.solve_discrete_lyapunov` and `scipy.linalg.solve_continuous_lyapunov`, as well as compute the reverse-mode gradients for each. One note, I...
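For context, `solve_discrete_lyapunov` finds the `x` satisfying `a @ x @ a.T - x + q = 0`. A minimal NumPy sketch of the direct Kronecker-vectorization method, assuming a stable `a` (spectral radius below 1) so the solution is unique — not the Op added by this PR, just the underlying linear algebra:

```python
import numpy as np

def solve_discrete_lyapunov_vec(a, q):
    """Solve a @ x @ a.T - x + q = 0 via the vectorization trick.

    Using vec(a @ x @ a.T) = (a kron a) vec(x), the equation becomes the
    linear system (I - a kron a) vec(x) = vec(q).
    """
    n = a.shape[0]
    lhs = np.eye(n * n) - np.kron(a, a)
    x = np.linalg.solve(lhs, q.flatten())
    return x.reshape(n, n)

# A stable transition matrix guarantees a unique solution
a = np.array([[0.5, 0.1], [0.0, 0.3]])
q = np.array([[1.0, 0.2], [0.2, 2.0]])
x = solve_discrete_lyapunov_vec(a, q)
# Residual of the discrete Lyapunov equation should vanish
assert np.allclose(a @ x @ a.T - x + q, 0.0)
```

This is the same direct method scipy falls back to for small systems; for large `n` the `n^2 x n^2` Kronecker system becomes impractical and bilinear/Schur-based methods are preferred.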

SciPy compatibility
tensor algebra
Op implementation

Hello, I am interested in adapting the refitting wrappers to implement LFO-CV, as described by [Bürkner, Gabry, and Vehtari (2020)](https://www.tandfonline.com/doi/pdf/10.1080/00949655.2020.1783262), with the goal of cleaning everything up, writing unit tests,...
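For reference, LFO-CV scores a model on 1-step-ahead predictions, refitting on expanding windows of past data. A hypothetical sketch of the split bookkeeping (names are illustrative, not the refitting wrappers' actual API):

```python
def lfo_splits(n_obs, min_obs):
    """Expanding-window splits for 1-step-ahead leave-future-out CV.

    Fit on observations [0, t) and hold out observation t, for
    t = min_obs, ..., n_obs - 1 (min_obs points reserved for the first fit).
    Names and signature are illustrative only.
    """
    return [(list(range(t)), t) for t in range(min_obs, n_obs)]

splits = lfo_splits(n_obs=6, min_obs=3)
# Fit indices grow one step at a time; the held-out point is always the next one
assert splits[0] == ([0, 1, 2], 3)
assert splits[-1] == ([0, 1, 2, 3, 4], 5)
```

In the approximate scheme of Bürkner, Gabry, and Vehtari, a full refit only happens when the Pareto-k diagnostic of the importance-sampling approximation exceeds a threshold; the sketch above shows only the exact (refit-every-step) splits.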

### Before
```python
def scaled_mv_normal(mu, cov, R):
    return pm.MvNormal.dist(mu=R @ mu, cov=R @ cov @ R.T)
```
### After
```python
def scaled_mv_normal(mu, cov, R):
    return R @ pm.MvNormal.dist(mu=mu, cov=cov)
```
...
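The rewrite rests on the affine-transform identity for Gaussians: if `x ~ N(mu, cov)`, then `R @ x ~ N(R @ mu, R @ cov @ R.T)`. A quick NumPy-only check of the identity (illustrative; no PyMC involved):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.5], [0.5, 1.0]])
R = np.array([[1.0, 1.0], [0.0, 2.0]])

# Sample x ~ N(mu, cov) and transform each draw by R
x = rng.multivariate_normal(mu, cov, size=200_000)
y = x @ R.T

# Empirical moments of R @ x match the analytic N(R mu, R cov R^T)
assert np.allclose(y.mean(axis=0), R @ mu, atol=0.05)
assert np.allclose(np.cov(y.T), R @ cov @ R.T, atol=0.05)
```

This is why the "After" form can defer the `R @ mu` / `R @ cov @ R.T` computation to the logprob machinery instead of baking it into the distribution's parameters.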

feature request
logprob

**What is this PR about?** Closes #7041 This is a draft to add convergence checks to the JAX samplers. Right now I'm just calling `run_convergence_checks` after sampling. It might be...

### Description Our canonical change point example model now emits a warning about invalid casting: ```python import pandas as pd import pymc as pm disaster_data = pd.Series( [4, 5, 4,...

request discussion
maintenance
model

### Describe the issue: Shapes aren't being correctly set on variables when using `coords` in JAX. I guess this is a consequence of coords being mutable by default, and could...

jax
samplers

### Describe the issue: `pm.sample_posterior_predictive` silently fails when `extend_inferencedata=True` and a `posterior_predictive` group already exists ### Reproducible code example: ```python import pymc as pm import numpy as np y =...

bug
help wanted
trace-backend

I'm quite interested in the results of [this paper](https://arxiv.org/pdf/2303.16846.pdf). The authors derive closed-form gradients for backprop through Kalman filters (specifically, equations 28-31) and report a 38x speedup over autodiff gradients...
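For orientation, below is the standard forward recursion the paper differentiates in closed form — a minimal NumPy sketch of one predict/update step, not the paper's gradient equations:

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update step of a linear-Gaussian Kalman filter.

    x, P : prior state mean and covariance
    y    : new observation
    A, C : transition and observation matrices
    Q, R : process and observation noise covariances
    """
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

Autodiff has to unroll this recursion over every time step, which is where the reported speedup from hand-derived gradients of the whole filter comes from.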

enhancements
feature request
statespace

#257 added support for marginalization of order-1 HMMs, but it would be nice to be able to unmarginalize as well. I think we want something like the Viterbi algorithm?
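Unmarginalizing here would mean recovering the most probable hidden state path given the observations, which is what Viterbi computes. A minimal log-space sketch under the usual discrete-HMM setup (illustrative, not a proposed API):

```python
import numpy as np

def viterbi(log_pi, log_T, log_emit):
    """Most likely hidden state path of a discrete HMM (log-space Viterbi).

    log_pi   : (S,)   log initial-state probabilities
    log_T    : (S, S) log transitions, log_T[i, j] = log p(z_t = j | z_{t-1} = i)
    log_emit : (N, S) log-likelihood of each observation under each state
    """
    n, s = log_emit.shape
    delta = log_pi + log_emit[0]          # best log-score ending in each state
    back = np.zeros((n, s), dtype=int)    # backpointers
    for t in range(1, n):
        scores = delta[:, None] + log_T   # (from, to)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    # Backtrack from the best final state
    path = np.empty(n, dtype=int)
    path[-1] = delta.argmax()
    for t in range(n - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path
```

For MAP decoding this dynamic program is exact for order-1 chains; sampling hidden paths from the posterior would instead use a forward-filter backward-sample pass with the same structure.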