
PyTensor allows you to define, optimize, and efficiently evaluate mathematical expressions involving multi-dimensional arrays.

Results: 395 pytensor issues

### Describe the issue: In our production code we have a pytensor graph that has cache misses after an initial run with identical code. This causes process launches to be...

bug

This PR addresses the documentation of the `inplace_on_inputs` method as requested in the issue. The method allows `Op` classes to create inplace versions of themselves for specified inputs, enabling memory...
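The memory benefit that in-place Ops provide can be illustrated with plain NumPy (a hedged sketch of the concept only, not PyTensor's actual `inplace_on_inputs` API; the function names below are hypothetical):

```python
import numpy as np

def add_outofplace(x, y):
    # allocates a fresh output buffer on every call
    return x + y

def add_inplace(x, y):
    # reuses x's buffer as the output, saving one allocation
    np.add(x, y, out=x)
    return x

x = np.ones(3)
y = np.full(3, 2.0)
out = add_inplace(x, y)  # out aliases x; x's contents are overwritten
```

An in-place version is only valid when the graph no longer needs the original value of `x`, which is why such substitutions are done as a late rewriting step.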

First, why is this restricted to Sum/Prod instead of all CAReduce? https://github.com/pymc-devs/pytensor/blob/79444a3110a5e17ac88006c71dae5360666c4487/pytensor/tensor/rewriting/math.py#L1813-L1827 Second, I'm not sure about using `None` as canonical. If we don't allow `None` at the Op level we...

graph rewriting
Op implementation

### Description When doing cumsum/cumprod on an axis that has a static shape of 1, the operation is a no-op. We should rewrite it away. This can be registered as a canonicalization...

beginner friendly
graph rewriting
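The no-op can be checked with NumPy, whose semantics PyTensor follows here (a minimal illustration of the identity, not the rewrite itself):

```python
import numpy as np

x = np.arange(6.0).reshape(3, 1, 2)
# cumsum/cumprod along an axis of static length 1 leave the array
# unchanged, so a rewrite can replace them with the input itself
same_sum = np.cumsum(x, axis=1)
same_prod = np.cumprod(x, axis=1)
```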

### Description If axis = None, numba will try to iterate over None and error (TODO: add MWE)

bug
help wanted
beginner friendly
numba
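The failure mode is the same as in plain Python, where `None` is not iterable (a minimal reproduction of the error class, not of the numba dispatch itself):

```python
# Iterating over None raises TypeError; a compiled function that
# receives axis=None and runs `for i in axis` hits the same error.
try:
    for _ in None:
        pass
    caught = None
except TypeError as exc:
    caught = exc
```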

### Description We have some limited support for PyTorch and, coming up, MLX, which are at a stage where they don't yet integrate well enough with PyTensor in general. I...

release
backend compatibility
refactor

### Description `pixi.toml` isn't actually used for anything, so it's only for people who deliberately want to use pixi, but I did add a CI check to ensure that everything's...

### Description This special behavior of `None` together with advanced indexing is very rarely used in practice and increases the complexity of our rewrites as we need to always check...

beginner friendly
Op implementation
indexing
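The special behavior in question mirrors NumPy, where `None` (`np.newaxis`) inserts a length-1 axis and can be mixed with advanced integer indices in the same indexing expression (an illustrative sketch of the semantics the rewrites currently have to handle):

```python
import numpy as np

x = np.arange(12).reshape(3, 4)
idx = np.array([0, 2])
# None inserts a new leading axis of length 1, while idx performs
# advanced indexing on axis 0 of x
out = x[None, idx]
```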

### Description RVs broadcast batch inputs by default. We are not avoiding materializing the broadcast (which in PyTensor is always dense), similar to how we're failing to do it with #1561...

graph rewriting
random variables
vectorization
memory_optimization
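NumPy shows how a broadcast can be expressed without materializing the dense array: `np.broadcast_to` returns a zero-stride view over the original data (a conceptual sketch of what such a rewrite could exploit, not PyTensor code):

```python
import numpy as np

x = np.ones(3)
# A (1_000_000, 3) "copy" of x that allocates no new data: the leading
# axis has stride 0, so every row aliases the same three floats
b = np.broadcast_to(x, (1_000_000, 3))
```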

### Description Advanced indexing broadcasts indices implicitly, so in the following case there's no reason to allocate several ones: ```python import pytensor import pytensor.tensor as pt x = pt.matrix("x") out...

graph rewriting
vectorization
indexing
memory_optimization
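The implicit broadcasting of advanced indices follows NumPy semantics, so the broadcast index arrays never need to be allocated explicitly (a minimal NumPy illustration; the truncated PyTensor snippet above presumably builds the analogous graph):

```python
import numpy as np

x = np.arange(12.0).reshape(3, 4)
rows = np.array([[0], [2]])  # shape (2, 1)
cols = np.array([1, 3])      # shape (2,)
# rows and cols broadcast together to shape (2, 2); NumPy indexes
# with the broadcast shape without materializing the index arrays
out = x[rows, cols]
```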