WIP: experiment with first class dim objects
Named Dimensions Refactor: Objects Instead of Strings
I'm still working on this, but thought it might be helpful to share what I have so far...
The Key Change
In this version of named-dims, we use objects to represent dimensions instead of plain strings. This allows us to ensure that array axes with shared dimensions are always compatible, eliminating shape errors as long as you stay in dim-land.
The Two Parts of a Dimension
We can think of a dimension as having two components:
- Size (or length) - This might be known statically or only at runtime.
- Identity - Just because two tensors happen to have the same length doesn't mean they're compatible. The identity decides if two tensors can be combined. Crucially, if two tensors share the same identity, they must always have the same length.
This is similar to vector spaces in math: you can't add a 3D velocity vector to a 3D position vector, even though both are 3D. The mathematical operations care about the meaning of the dimensions, not just their size.
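To make the split concrete, here is a toy sketch (not the actual PyTensor implementation) where compatibility is decided by dimension identity rather than by matching lengths:

```python
# Toy sketch only: identity objects decide compatibility, not lengths.
import numpy as np


class DimIdentity:
    """Each instance is a distinct identity; equality is object identity."""

    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return f"DimIdentity({self.name!r})"


def add(x, x_dims, y, y_dims):
    # Matching lengths is not enough; the identities must also match.
    if x_dims != y_dims:
        raise ValueError(f"incompatible dims: {x_dims} vs {y_dims}")
    return x + y


space = DimIdentity("space")
other = DimIdentity("space")  # same name, different identity

add(np.zeros(3), (space,), np.ones(3), (space,))    # fine
# add(np.zeros(3), (space,), np.ones(3), (other,))  # would raise ValueError
```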
Implementation: Types and Variables
We implement this split using PyTensor's type system:
- Each dimension has a unique PyTensor Type (an instance of DimType) for its identity.
- When we need to work with the dimension (like creating tensors), we also need its size, represented as a DimVariable.
# Create a new dimension
>>> foo = px.dim("foo")
>>> foo.type
BasicDim(foo, uuid=?)
The object foo itself is a DimVariable - at runtime, this represents the size of dimension foo.
Creating Tensors with Dimensions
>>> x = px.basic.zeros(foo, name="x")
>>> x.type
XTensorType(float64, shape=(None,), dims=(BasicDim(foo, uuid=?),))
The tensor x remembers the identity of dimension foo in its type. It doesn't need to store the DimVariable separately because it can recreate one from the tensor itself when needed:
>>> x.dims[0].dprint();
FromTensor{dim_type=BasicDim(foo, uuid=?)} [id A] 'foo'
└─ XTensorFromTensor [id B] 'x'
├─ Alloc [id C]
│ ├─ 0.0 [id D]
│ └─ TensorFromScalar [id E]
│ └─ Length [id F]
│ └─ foo [id G]
└─ foo [id G]
Ensuring Dimension Uniqueness
To prevent shape errors, we need to avoid having two unrelated DimVariables with the same type. Every call to px.dim() creates a truly unique dimension:
>>> foo = px.dim("foo")
>>> foo2 = px.dim("foo") # Same name, different dimension!
>>> foo.type == foo2.type
False
We use random UUIDs in the type to guarantee uniqueness.
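For intuition, here is a minimal sketch of that uniqueness mechanism (simplified; the real DimType carries more than this): equality includes a randomly generated UUID, so two types built from the same name still compare unequal.

```python
# Simplified sketch of UUID-based identity; not the actual DimType class.
import uuid
from dataclasses import dataclass, field


@dataclass(frozen=True)
class BasicDimSketch:
    name: str
    uuid: uuid.UUID = field(default_factory=uuid.uuid4)


# Same name, different UUID, so the types are not equal:
assert BasicDimSketch("foo") != BasicDimSketch("foo")
```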
The size Invariant
For consistent graphs, we maintain this invariant: during function execution, if two DimVariables have the same type, their runtime values are also the same.
This works because DimVariables can only be created in three ways:
- Root variables - px.dim() creates a new unique type, so it can't share its type with anything else.
- From tensors - We must have had an existing DimVariable to create the tensor (so the length is consistent), or the tensor was user provided. In that case we must add a consistency check on the user input.
- Derived from other DimVariables - If the inputs are consistent, the outputs are too.
The main challenge is user input validation - we need to verify that input tensors match their declared dimensions before execution.
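As a rough illustration of what that validation could look like (a hypothetical helper, not part of this PR), one pass over the user-provided inputs is enough to check both rank and length consistency:

```python
# Hypothetical sketch of input validation; names and structure are assumed.
def check_input_dims(inputs):
    """inputs: iterable of (array, dims) pairs, where dims are dim identities."""
    seen = {}  # dim identity -> length observed so far
    for array, dims in inputs:
        if array.ndim != len(dims):
            raise ValueError("rank does not match the number of declared dims")
        for length, dim in zip(array.shape, dims):
            if seen.setdefault(dim, length) != length:
                raise ValueError(
                    f"dim {dim} seen with lengths {seen[dim]} and {length}"
                )
    return seen  # resolved length for every dim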
Small sidenote:
Unfortunately there is a way users can create two unrelated DimVariable objects with the same type:
foo = px.dim("foo")
foo2 = foo.type()
But if we assume that foo.type() is a private function (or maybe we can override the call method to make that clearer), that shouldn't be too much of a problem. We just have to make sure we don't do it ourselves when we add new Ops...
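One way to make that clearer, sketched here as a hypothetical (DimTypeSketch and the _internal flag are made up for illustration), is to override __call__ so that direct construction is explicitly internal-only:

```python
# Hypothetical sketch: flag direct DimVariable construction as internal-only.
class DimTypeSketch:
    def __init__(self, name):
        self.name = name

    def __call__(self, *args, _internal=False, **kwargs):
        if not _internal:
            raise RuntimeError(
                "Constructing a DimVariable directly from a DimType is "
                "internal; use px.dim() or derive from an existing DimVariable."
            )
        # ...the actual variable construction would go here...
```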
Derived Dimensions
I think we can do a lot of cool things with derived dimensions, but I'm still working on those.
One simple example that already works is a ClonedDim. We don't allow duplicate dimensions in one tensor, to simplify indexing and xarray compatibility, but in many cases a user might still need essentially the same dim in a tensor twice (for instance for a covariance matrix). We can use a cloned dimension for that. A cloned dimension always has the same length as its base dimension, but it has a new identity. So for instance:
>>> foo = px.dim("foo")
>>> # This fails
>>> px.xtensor("x", dims=[foo, foo])
ValueError...
>>> foo2 = foo.clone_dim()
>>> x = px.xtensor("x", dims=[foo, foo2])
@OriolAbril @ricardoV94
📚 Documentation preview 📚: https://pytensor--1517.org.readthedocs.build/en/1517/
>>> foo = px.dim("foo")
>>> foo2 = px.dim("foo") # Same name, different dimension!
>>> foo.type == foo2.type
False
Is this still true? I would think that when a user is working they may specify dims by label, so they say x.sum("city"), and under the hood this would work fine because we can convert the user string into a BasicDim that matches in equality anyway?
Also thinking a user can do x.rename({SliceDim("time"): "time*"}) without having to worry about exactly how the SliceDim("time") is created or where to retrieve it from.
In the current code that is still true. I think we can get away with allowing those to be equal (if we do some extra input validation, you shouldn't be allowed to pass two different values for the length of the same dimension). I'm not sure we should want them to be equal, however. In PyTensor, we also don't assume that two tensor variables are the same just because they happen to have the same name.
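For comparison, the analogous PyTensor behavior (standard API, shown here only as a quick illustration):

```python
import pytensor.tensor as pt

a = pt.vector("x")
b = pt.vector("x")
# Two tensor variables that share a name are still distinct variables:
assert a is not b
```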
We can treat dims as symbols (not sure if that's the term), since in an xarray Dataset you can't have duplicate dims with different meanings either?
But it's a choice not a requirement
Looking good. Do you already have any op that generates its own dims working?
I'm currently working on shape.py with stack, unstack etc. Should be coming soon. That and indexing are the majority of tests that are still failing.