Scans are never constant-folded
Description
import pytensor
import pytensor.tensor as pt

x0 = pt.constant(0.0, name="x0")
xs, _ = pytensor.scan(lambda x: x + 1, outputs_info=[x0], n_steps=4)
fn = pytensor.function([], xs)
fn.dprint()  # Scan still in the graph, even though every input is constant
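For contrast, the same arithmetic written without Scan does get folded under the default rewrites (a quick sanity check; the exact dprint output may vary slightly):

```python
import pytensor
import pytensor.tensor as pt

x0 = pt.constant(0.0, name="x0")
ys = x0 + pt.constant([1.0, 2.0, 3.0, 4.0])  # what the 4-step scan computes
fn2 = pytensor.function([], ys)
fn2.dprint()  # folded: only a constant (wrapped in a DeepCopyOp) remains
```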
The Scan survives because its output buffer is an Alloc/AllocEmpty that is then filled with a SetSubtensor, and Alloc never constant-folds when one of its clients is a SetSubtensor: most of the time we want to write into that buffer in place, and we can't write in place into a constant. But when the whole chain could ultimately be constant-folded (as here), refusing to fold is wasteful.
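The same interaction can be reproduced without Scan; a minimal sketch of an all-constant Alloc whose only client is a set_subtensor:

```python
import pytensor
import pytensor.tensor as pt

buf = pt.alloc(0.0, 3)               # all-constant Alloc
out = pt.set_subtensor(buf[0], 1.0)  # ...but it gets written into afterwards
fn = pytensor.function([], out)
fn.dprint()  # expected: the Alloc and the SetSubtensor survive instead of folding
```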
The decision of whether to constant-fold based on the surrounding graph should be the responsibility of the constant-folding rewrite, not of the Op. Right now that logic lives in the Ops themselves.
AllocEmpty never constant-folds:
https://github.com/pymc-devs/pytensor/blob/17c675a2d908661f9d1f84ae56f4cb6f5c4fa7c8/pytensor/tensor/basic.py#L4397-L4398
And Alloc embeds this graph-dependent logic:
https://github.com/pymc-devs/pytensor/blob/17c675a2d908661f9d1f84ae56f4cb6f5c4fa7c8/pytensor/tensor/basic.py#L1780-L1818
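For reference, the generic constant-folding rewrite already asks the Op for permission via `do_constant_folding`; here is a simplified sketch (not the actual PyTensor implementation) of that decision, which is where the client inspection linked above could instead live:

```python
from pytensor.graph.basic import Constant


def should_constant_fold(fgraph, node):
    # Simplified sketch of the decision the constant-folding rewrite makes today.
    # All inputs must already be constants...
    if not all(isinstance(inp, Constant) for inp in node.inputs):
        return False
    # ...and the Op then gets the final say. Alloc/AllocEmpty use this hook to
    # inspect their clients in fgraph and opt out, which is exactly the
    # graph-dependent logic that could move into the rewrite itself.
    return node.op.do_constant_folding(fgraph, node)
```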