# torchTT

Tensor-Train decomposition in pytorch
Tensor-Train decomposition package written in Python on top of pytorch. It supports GPU acceleration and automatic differentiation. It also contains routines for solving linear systems in the TT format and performing adaptive cross approximation (the AMEN solver/cross interpolation is inspired by the MATLAB TT-Toolbox). Some routines are implemented in C++ for increased execution speed.
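For intuition, the TT format stores a d-dimensional tensor as a chain of 3D cores. A minimal TT-SVD sketch in plain numpy (illustrative only, not the torchtt API) could look like:

```python
import numpy as np

def tt_svd(x, eps=1e-12):
    """Decompose a full ndarray into a list of 3D TT cores via successive SVDs."""
    shape = x.shape
    d = len(shape)
    cores = []
    r = 1
    c = np.asarray(x, dtype=float)
    for k in range(d - 1):
        c = c.reshape(r * shape[k], -1)
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        # keep singular values above a relative tolerance -> TT rank r_new
        r_new = max(1, int(np.sum(s > eps * s[0])))
        cores.append(u[:, :r_new].reshape(r, shape[k], r_new))
        c = s[:r_new, None] * vt[:r_new]
        r = r_new
    cores.append(c.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract the TT cores back into the full tensor."""
    t = cores[0]
    for c in cores[1:]:
        t = np.tensordot(t, c, axes=([t.ndim - 1], [0]))
    return t[0, ..., 0]
```

A low-rank tensor round-trips through `tt_svd`/`tt_full` up to the truncation tolerance; torchtt performs this factorization with torch tensors instead, which is what enables GPU execution and autograd.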
## Installation

### Requirements
The following packages are required:

- `python>=3.6`
- `torch>=1.7.0`
- `numpy>=1.18`
- `opt_einsum`
Installing the GPU version of pytorch (if a GPU is available) is recommended. Read the official installation guide for further info.
### Using pip

You can install the package using the `pip` command:

```
pip install git+https://github.com/ion-g-ion/torchTT
```
One can also clone the repository and install the package manually:

```
git clone https://github.com/ion-g-ion/torchTT
cd torchTT
python setup.py install
```
### Using conda
TODO
## Components
The main modules/submodules that can be accessed after importing `torchtt` are briefly described in the following table. A detailed description can be found here.
| Component | Description |
|---|---|
| `torchtt` | Basic TT class and basic linear algebra functions. |
| `torchtt.solvers` | Implementation of the AMEN solver. |
| `torchtt.grad` | Wrapper for automatic differentiation. |
| `torchtt.manifold` | Riemannian gradient and projection onto manifolds of tensors with fixed TT rank. |
| `torchtt.nn` | Basic TT neural network layer. |
| `torchtt.interpolate` | Cross approximation routines. |
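As a taste of the basic linear algebra the TT format allows, elementwise addition of two tensors in TT form just stacks their cores block-diagonally (the TT ranks add up). A plain-numpy sketch, again illustrative rather than the torchtt API:

```python
import numpy as np

def tt_add(a_cores, b_cores):
    """Sum of two tensors given as TT cores: ranks add, cores stack block-diagonally."""
    d = len(a_cores)
    out = []
    for k, (a, b) in enumerate(zip(a_cores, b_cores)):
        if k == 0:
            out.append(np.concatenate([a, b], axis=2))   # first core: stack right ranks
        elif k == d - 1:
            out.append(np.concatenate([a, b], axis=0))   # last core: stack left ranks
        else:
            ra1, n, ra2 = a.shape
            rb1, _, rb2 = b.shape
            c = np.zeros((ra1 + rb1, n, ra2 + rb2))
            c[:ra1, :, :ra2] = a                         # block-diagonal middle core
            c[ra1:, :, ra2:] = b
            out.append(c)
    return out

def tt_full(cores):
    """Contract TT cores into the full tensor."""
    t = cores[0]
    for c in cores[1:]:
        t = np.tensordot(t, c, axes=([t.ndim - 1], [0]))
    return t[0, ..., 0]
```

Since the ranks of a sum are the sums of the ranks, rank rounding after arithmetic matters in practice; this is one of the operations the package provides on top of the basic TT class.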
## Tests
The directory `tests/` from the root folder contains all the unittests. To run them use the command:

```
pytest tests/
```
## Documentation and examples
The documentation can be found here. The following example scripts (as well as python notebooks) are also provided as part of the documentation:
- basic_tutorial.py / basic_tutorial.ipynb: This contains a basic tutorial on decomposing full tensors in the TT format as well as performing rank rounding and slicing (Try on Google Colab).
- basic_linalg.py / basic_linalg.ipynb: This tutorial presents all the algebra operations that can be performed in the TT format (Try on Google Colab).
- efficient_linalg.py / efficient_linalg.ipynb: Contains the DMRG for fast matvecs and AMEN for elementwise inversion in the TT format (Try on Google Colab).
- automatic_differentiation.py / automatic_differentiation.ipynb: Basic tutorial on AD in torchtt (Try on Google Colab).
- cross_interpolation.py / cross_interpolation.ipynb: In this script, the cross interpolation method is exemplified (Try on Google Colab).
- system_solvers.py / system_solvers.ipynb: This contains the basic usage of the multilinear solvers (Try on Google Colab).
- cuda.py / cuda.ipynb: This provides an example on how to use the GPU acceleration (Try on Google Colab).
- basic_nn.py / basic_nn.ipynb: This provides an example on how to use the TT neural network layers (Try on Google Colab).
- mnist_nn.py / mnist_nn.ipynb: Example of TT layers used for image classification (Try on Google Colab).
The documentation is generated using sphinx with:

```
make html
```

after installing the packages:

```
pip install sphinx sphinx_rtd_theme
```
## Author
Ion Gabriel Ion, e-mail: [email protected]