
PyTensor allows you to define, optimize, and efficiently evaluate mathematical expressions involving multi-dimensional arrays.

Results: 395 pytensor issues

### Description
Functions like `scipy.special.psi` now always upcast to float64, even if the input is a low-precision integer like `int8`. We need to handle these types, but: 1) float64...

SciPy compatibility
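The upcasting behaviour described above can be observed directly with SciPy. This is a minimal sketch assuming SciPy is installed; the exact output dtype can depend on the SciPy version (per the issue, recent releases always give float64):

```python
import numpy as np
import scipy.special

# A low-precision integer input: int8.
x = np.arange(1, 4, dtype=np.int8)

# scipy.special.psi (the digamma function) does not preserve small
# dtypes; it returns a floating-point result for integer input.
out = scipy.special.psi(x)
print(x.dtype, "->", out.dtype)
```

PyTensor would need to decide whether to mirror this promotion or keep its own dtype rules.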

### Description
Brought up in #846

```python
import pytensor
import pytensor.tensor as pt

x = pt.vector("x", dtype="int64")
out = pt.special.softmax(x)  # Doesn't seem right
out.dprint(print_type=True)
# Softmax{axis=None} [id A]
#...
```

bug
Op implementation
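For reference, a plain-NumPy softmax (a hand-written sketch, not PyTensor's implementation) shows why an integer input should still produce a floating-point output:

```python
import numpy as np

def softmax(x):
    # Numerically stable reference softmax: subtract the max before exp.
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Even with an int64 input, np.exp promotes to float64, so the
# output type of a softmax on integers should be floating point.
out = softmax(np.array([1, 2, 3], dtype=np.int64))
print(out.dtype)
```

This is what makes an int64-typed `Softmax` output in the dprint above look suspect.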

### Description
There are two issues with the code generated by this snippet:

```python
def update(x):
    return pt.exp(x) - 5

x_init = pt.vector("x_init", shape=(7,))
x_init_tangent = pt.vector("x_init_tangent", shape=(7,))
seq, updates...
```

graph rewriting
scan
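As a NumPy analogue of the snippet above (a hand-written sketch, not the code PyTensor generates), the pushforward of `update` can be checked against finite differences:

```python
import numpy as np

def update(x):
    return np.exp(x) - 5

def update_jvp(x, tangent):
    # Hand-derived Jacobian-vector product: d/dx (exp(x) - 5) = exp(x),
    # so the tangent is scaled elementwise by exp(x).
    return np.exp(x) * tangent

rng = np.random.default_rng(0)
x_init = rng.normal(size=7)
x_init_tangent = rng.normal(size=7)

jvp = update_jvp(x_init, x_init_tangent)

# Finite-difference check of the pushforward.
eps = 1e-6
fd = (update(x_init + eps * x_init_tangent) - update(x_init)) / eps
print(np.max(np.abs(jvp - fd)))
```

This is the invariant the generated scan code should satisfy, whatever the graph-level bugs are.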

## Description
Try to include most of the functions from slinalg and nlinalg in the documentation

## Related Issue
- [ ] Closes #854

## Checklist
- [ ] Checked...

### Description
It's rather hard to find what functions are available in PyTensor (e.g., do we have anything for linalg?). We should get Sphinx to auto-populate most things with the...

docs
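One way to do this (a sketch; the exact module paths are assumptions) is Sphinx's `automodule` directive, which pulls every public member of a module into the rendered docs:

```rst
.. automodule:: pytensor.tensor.slinalg
   :members:

.. automodule:: pytensor.tensor.nlinalg
   :members:
```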

### Description
Verify and update support for Numba Ops with the latest versions of Numba. There are some Numba issues mentioned in comments that are now closed (as shown below)....

https://github.com/pymc-devs/pytensor/blob/d3bd1f15a497c05a979a8e3e8be40883f669a0b6/pytensor/link/jax/dispatch/elemwise.py#L72-L89

The JAX docs of `lax.reshape` (which `np.reshape` uses) suggest this may be better for further optimizations: https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.reshape.html#jax.lax.reshape

Relevant part:

> For inserting/removing dimensions of size 1, prefer using lax.squeeze...

beginner friendly
jax
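The equivalence the JAX docs point at can be sketched in plain NumPy (used here as a stand-in; the same semantics apply to `lax.squeeze` / `lax.expand_dims`): inserting or removing size-1 dimensions is a special case of reshape.

```python
import numpy as np

x = np.ones((3, 1, 4))

# Removing a size-1 dimension: squeeze and reshape give the same result.
squeezed = np.squeeze(x, axis=1)
reshaped = x.reshape(3, 4)

# Inserting a size-1 dimension: expand_dims undoes the squeeze.
expanded = np.expand_dims(squeezed, axis=1)
print(squeezed.shape, expanded.shape)
```

Dispatching these cases to the dedicated primitives instead of a general reshape is what the JAX docs recommend for downstream optimizations.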

### Description
With static types it is possible to detect cases where we know at compile time that a reshape is useless. We should remove it when that's the case....

graph rewriting
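A hypothetical predicate for such a rewrite (names invented for illustration; this is not PyTensor's API) could compare fully known static shapes, treating `None` as an unknown dimension:

```python
def reshape_is_useless(in_shape, out_shape):
    # A reshape is provably a no-op only when every dimension is
    # statically known (no None entries) and the shapes are identical.
    if any(d is None for d in in_shape) or any(d is None for d in out_shape):
        return False
    return tuple(in_shape) == tuple(out_shape)

print(reshape_is_useless((7,), (7,)))      # True: provably a no-op
print(reshape_is_useless((None,), (7,)))   # False: unknown at compile time
print(reshape_is_useless((2, 3), (3, 2)))  # False: shapes differ
```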

### Describe the issue:
The `Max` op, which is a subclass of `CAReduce`, fails for 64-bit unsigned integers. This is also evident in [PR 731](https://github.com/pymc-devs/pytensor/pull/731#discussion_r1582687763) and its [test](https://github.com/pymc-devs/pytensor/actions/runs/9067607709/job/24913191657?pr=731#step:6:9016)....

bug
C-backend
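As a reference point, NumPy's own `max` handles the full `uint64` range. This is a minimal reproduction of the values involved, not the C-backend failure itself (the exact failure mode in the generated C code is not shown in the snippet above):

```python
import numpy as np

# Values near the top of the unsigned 64-bit range, where a reduction
# that goes through a signed intermediate would lose information.
x = np.array([2**64 - 1, 5, 17], dtype=np.uint64)

result = x.max()
print(result)  # 18446744073709551615
```

A correct `Max` over `uint64` must reproduce this result exactly.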

### Description
Or have rewrites that handle this case. Otherwise it's hard/impossible to use Sparse variables in Blockwise/RandomVariables that have other batched inputs but would otherwise work fine...

sparse variables