Misleading `Tensor::matmul` documentation
First off, great job on this project!
I noticed that GEMM (at least for CUDA, from what I've seen) is not supported for tensors with more than 4 dimensions; however, the docs describe `matmul` as working on tensors of arbitrary rank: https://docs.rs/candle-core/latest/candle_core/struct.Tensor.html#method.matmul. It would be helpful to document this limitation.
This also seems to be a problem on the CPU backend. Calling `matmul` on tensors of rank > 4 results in a `MatMulUnexpectedStriding` error with the message "non-contiguous lhs", irrespective of whether the tensors involved are actually contiguous.
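For context, here is a minimal sketch of the failing case on the CPU backend (the shapes are arbitrary rank-5 examples I picked for illustration; assumes a recent candle-core):

```rust
use candle_core::{DType, Device, Result, Tensor};

fn main() -> Result<()> {
    let device = Device::Cpu;
    // Rank-5 tensors: three leading batch dims, then the (m, k) x (k, n) matmul dims.
    let lhs = Tensor::zeros((2, 3, 4, 5, 6), DType::F32, &device)?;
    let rhs = Tensor::zeros((2, 3, 4, 6, 7), DType::F32, &device)?;
    // Both tensors are freshly allocated and contiguous, yet this errors with
    // MatMulUnexpectedStriding ("non-contiguous lhs") instead of producing
    // the expected (2, 3, 4, 5, 7) result.
    let out = lhs.matmul(&rhs)?;
    println!("{:?}", out.shape());
    Ok(())
}
```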