Add Permute Operation Compatible with PyTorch's Permute
Summary
Adding a permute operation to Burn would let users rearrange the dimensions of a tensor in the same way as PyTorch's permute. This operation is common in deep learning applications, for example when reordering input layouts or preparing tensors for operations that expect a specific dimension order.
Background
Permute is a widely used operation in deep learning libraries like PyTorch. The function takes a list of integers that defines the desired ordering of the dimensions and returns a new tensor with the dimensions rearranged accordingly. Here's an example of how it works in PyTorch:
import torch
x = torch.rand(2, 3, 4)
y = x.permute(2, 0, 1)
# y's shape is now [4, 2, 3]
Proposal
We should implement a similar permute operation in the Burn deep learning framework. The function signature and behavior would closely resemble that of PyTorch, ensuring that users transitioning from or working with both libraries have a consistent experience.
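To make the intended behavior concrete, here is a sketch of what the call site could look like in Burn, mirroring the PyTorch example above. The permute signature shown is an assumption for illustration, not a finalized API:

// Hypothetical Burn usage; assumes a `permute` method on Tensor that
// takes the desired dimension order, with PyTorch-like semantics.
let x: Tensor<B, 3> = Tensor::zeros([2, 3, 4], &device);
let y = x.permute([2, 0, 1]);
// y's shape would now be [4, 2, 3]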
Potential Challenges
- Error handling for incorrect dimension indices (see the validation sketch after this list).
- Performance optimization for large tensors.
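As a sketch of the first point, the dimension order could be validated before dispatching to the backend. This is a minimal, hypothetical helper, not existing Burn code; it checks that every axis appears exactly once and is in range:

fn check_permutation(axes: &[usize], rank: usize) -> Result<(), String> {
    // The order must name every dimension exactly once.
    if axes.len() != rank {
        return Err(format!("expected {rank} axes, got {}", axes.len()));
    }
    let mut seen = vec![false; rank];
    for &axis in axes {
        if axis >= rank {
            return Err(format!("axis {axis} is out of range for rank {rank}"));
        }
        if seen[axis] {
            return Err(format!("axis {axis} appears more than once"));
        }
        seen[axis] = true;
    }
    Ok(())
}

// check_permutation(&[2, 0, 1], 3) == Ok(())
// check_permutation(&[2, 0, 0], 3) reports the repeated axis 0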
Alternatives Considered
- Implementing a more generalized transpose operation that could handle permute as a special case.
Additional Context
This feature would be particularly beneficial for users who are working on projects like speech recognition, where rearranging tensor dimensions is a common task.
Permute is similar to fn swap_dims(self, dim1: usize, dim2: usize), though we could improve on it by taking the full dimension order as input instead of just the two dims to be swapped.
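For illustration, any full dimension order can be reduced to a sequence of pairwise swaps of the kind swap_dims already performs. A minimal sketch of that reduction, operating on a plain shape vector rather than a real tensor (hypothetical helper, not Burn code):

// `order[i]` names the source dimension that should end up at position i,
// so order = [2, 0, 1] turns a [2, 3, 4] shape into [4, 2, 3].
fn permute_via_swaps(mut shape: Vec<usize>, order: &[usize]) -> Vec<usize> {
    // Track where each original dimension currently sits.
    let mut current: Vec<usize> = (0..shape.len()).collect();
    for i in 0..order.len() {
        let src = current.iter().position(|&d| d == order[i]).unwrap();
        shape.swap(i, src); // one swap_dims(i, src) on a real tensor
        current.swap(i, src);
    }
    shape
}

// permute_via_swaps(vec![2, 3, 4], &[2, 0, 1]) == vec![4, 2, 3]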
I'm not sure if we should expose contiguous since it's the backend's responsibility to optimize the data layout. Can you show an example where contiguous is necessary?
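For context on why contiguous exists at all: a permuted view only reorders strides, so the elements are no longer laid out consecutively in memory, and materializing them in the new order is what a contiguous call does. A small stride computation illustrates this (plain Rust, assumes a row-major layout; not Burn code):

fn contiguous_strides(shape: &[usize]) -> Vec<usize> {
    // Row-major: the last dimension is densest; each earlier stride
    // is the product of all later dimension sizes.
    let mut strides = vec![1; shape.len()];
    for i in (0..shape.len().saturating_sub(1)).rev() {
        strides[i] = strides[i + 1] * shape[i + 1];
    }
    strides
}

// For shape [2, 3] the strides are [3, 1]. Permuting the dims to [1, 0]
// gives a view with shape [3, 2] and strides [1, 3], but a contiguous
// [3, 2] layout would have strides [2, 1], so the view cannot be
// reinterpreted in place; it has to be copied to be linearized.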
Converted the task to add permute
Working on it now ...
This is implemented by #1410