
Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)

60 einops issues

In pytorch, we have 'expand_as', which checks dims before expanding. I'm aware of the 'repeat' layer as a replacement for 'expand', but could you add a 'repeat_as' counterpart to 'expand_as'?...

Like torch.tensor.expand

feature suggestion
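This would be new API; a minimal sketch of what the request amounts to, emulated with einops' existing `parse_shape` and `repeat` (the `repeat_as` name itself is hypothetical and not part of einops):

```python
import torch
from einops import parse_shape, repeat

x = torch.randn(16)        # shape: (c,)
ref = torch.randn(8, 16)   # shape: (b, c)

# torch.Tensor.expand_as validates dims, then broadcasts without copying:
expanded = x.expand_as(ref)                                  # (8, 16), a view

# The requested repeat_as does not exist in einops; it can be emulated
# with parse_shape + repeat, which also validates the shared axes:
repeated = repeat(x, 'c -> b c', **parse_shape(ref, 'b c'))  # (8, 16), a copy
assert repeated.shape == ref.shape
```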

My sense is that in many cases the size of a new axis should match the size of an existing axis on a different tensor. I wonder if a helper function...

feature suggestion
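Part of this already exists: `parse_shape` reads axis sizes from a reference tensor, and those sizes can be reused when adding an axis to another tensor. A minimal sketch:

```python
import numpy as np
from einops import parse_shape, repeat

image = np.zeros([32, 32, 3])   # axes: h w c
mask = np.zeros([32, 32])       # axes: h w

# Read the reference tensor's axis sizes, then reuse them so the new
# `c` axis is guaranteed to match the image's channel axis:
sizes = parse_shape(image, 'h w c')                 # {'h': 32, 'w': 32, 'c': 3}
mask3 = repeat(mask, 'h w -> h w c', c=sizes['c'])  # (32, 32, 3)
assert mask3.shape == image.shape
```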

Hello, I'm just throwing out an idea; I'm not sure it fits in the scope of Einops, and it would probably require a lot of work, but I think it would...

feature suggestion

opt_einsum: https://optimized-einsum.readthedocs.io/en/latest/ Not sure what integration would look like. Maybe with a module flag for an "einsum optimizer" (`EINSUM_OPT in ['opt_einsum', None]`). Since the einsum part should work the same for...
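Not the proposed integration, just a sketch of the underlying library call: `opt_einsum.contract` is a drop-in replacement for `np.einsum` that optimizes the contraction order of multi-operand expressions.

```python
import numpy as np
import opt_einsum

a = np.random.rand(32, 64)
b = np.random.rand(64, 16)
c = np.random.rand(16, 8)

# contract() parses the same notation as np.einsum but searches for a
# cheaper pairwise contraction order:
out = opt_einsum.contract('ij,jk,kl->il', a, b, c)
assert out.shape == (32, 8)
```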

Need to investigate whether backend packages make strides available for analysis (or at least as_contiguous). This may help with optimizations.

question
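For reference, the two most common backends do expose strides and contiguity, though with different conventions (numpy reports byte strides, torch element strides); a quick check:

```python
import numpy as np
import torch

x = np.zeros((4, 5), dtype=np.float32)
print(x.strides)                  # (20, 4): byte strides per axis
print(x.T.flags['C_CONTIGUOUS'])  # False: the transpose is a strided view
y = np.ascontiguousarray(x.T)     # forces a contiguous copy

t = torch.zeros(4, 5)
print(t.stride())                 # (5, 1): element strides per axis
print(t.t().is_contiguous())      # False
u = t.t().contiguous()            # forces a contiguous copy
```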

It would be nice to have it, but there are problems with backends:
- numpy.logaddexp.reduce is available (scipy.special.logsumexp is better, but I can't use it)
- tf.reduce_logsumexp is available
- ...

enhancement
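The backend primitives named above agree numerically; a small self-contained comparison (TensorFlow's tf.reduce_logsumexp is omitted but behaves the same way):

```python
import numpy as np
import torch
from scipy.special import logsumexp

x = np.random.rand(3, 4)

a = np.logaddexp.reduce(x, axis=1)               # numpy ufunc reduction
b = logsumexp(x, axis=1)                         # scipy version
c = torch.logsumexp(torch.from_numpy(x), dim=1)  # torch counterpart

np.testing.assert_allclose(a, b)
np.testing.assert_allclose(b, c.numpy())
```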

- normally, it is an `mxnet` issue
- seems it has been like this for ages (see code around `MXNET_SPECIAL_MAX_NDIM`)

After digging into mxnet:
- neighboring reduced axes (and non-reduced axes)...

backend bug
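The merging trick hinted at above is backend-independent: neighboring axes that are all reduced can be collapsed into a single axis before reducing, which lowers ndim (relevant where a backend caps the number of dimensions, as old mxnet did around `MXNET_SPECIAL_MAX_NDIM`). A numpy sketch:

```python
import numpy as np

x = np.random.rand(2, 3, 4, 5)  # axes a b c d; reduce over b and c

direct = x.sum(axis=(1, 2))                   # reduce two axes at once
merged = x.reshape(2, 3 * 4, 5).sum(axis=1)   # collapse b,c into one axis first
np.testing.assert_allclose(direct, merged)
```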

Thanks for making einops, it's a really helpful package! **Describe the bug** When I try to use `reduce()` on a [`torch.bfloat16` tensor](https://pytorch.org/docs/stable/tensor_attributes.html#torch-dtype), I get an error: NotImplementedError: reduce_mean is...

bug
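Until the backend implements it, a common workaround (not an official einops fix) is to upcast before the reduction and cast back afterwards:

```python
import torch
from einops import reduce

x = torch.randn(8, 16, dtype=torch.bfloat16)

# reduce_mean may be unimplemented for bfloat16 on some torch builds;
# upcast to float32 for the reduction, then cast the result back:
y = reduce(x.float(), 'b c -> b', 'mean').to(torch.bfloat16)
assert y.dtype == torch.bfloat16 and y.shape == (8,)
```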