heat
Distributed tensors and Machine Learning framework with GPU and MPI acceleration in Python
**Feature functionality** Design and implement a distributed sparse DNDarray class. Come up with one or more reasonable storage formats (e.g. COO, CRS, CSC, ...).
**Additional context** PyTorch implements COO only...
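As a point of reference for the COO option, the sketch below builds a process-local COO block with `torch.sparse_coo_tensor`. The helper `local_coo_block` and the choice of keeping indices in global coordinates are illustrative assumptions, not part of HeAT's API.

```python
import torch

def local_coo_block(rows, cols, values, global_shape):
    # Hypothetical helper: store this process's non-zeros of a row-split
    # (split=0) matrix in COO format, keeping indices in global coordinates.
    indices = torch.stack([torch.as_tensor(rows), torch.as_tensor(cols)])
    return torch.sparse_coo_tensor(indices, torch.as_tensor(values), size=global_shape)

# Example: this process owns the non-zeros of rows 0-1 of a 4x4 matrix.
block = local_coo_block([0, 1, 1], [2, 0, 3], [1.0, 2.0, 3.0], (4, 4))
print(block.to_dense())
```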
The Dockerfile provides some flexibility in selecting which version of HeAT should be inside the Docker image; one can also choose whether to install from source or from PyPI. ...
**Related** Related to advanced indexing in `setitem`.
**Feature functionality** [See the torch definition.](https://pytorch.org/docs/stable/generated/torch.Tensor.index_put_.html) Essentially, it's the same as advanced indexing, but with the option to deal with duplicate indices by...
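For reference, a minimal example of the `accumulate` option of `torch.Tensor.index_put_` that the linked documentation describes; the tensors and values are arbitrary.

```python
import torch

idx = (torch.tensor([0, 2, 2]),)          # index 2 appears twice
vals = torch.tensor([1.0, 3.0, 4.0])

a = torch.zeros(5)
a.index_put_(idx, vals)                   # duplicates overwrite each other
print(a)                                  # e.g. tensor([1., 0., 4., 0., 0.])

b = torch.zeros(5)
b.index_put_(idx, vals, accumulate=True)  # duplicates are summed
print(b)                                  # tensor([1., 0., 7., 0., 0.])
```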
Currently branches off `main`.
**Expected behavior** Should branch off `releases/1.2.x`.
## Description
When reshaping distributed DNDarrays:
- if `new_split` is the same as the original split, and
- if the distribution (lshapes) allows,
then reshape locally via PyTorch and stitch the `local_reshaped` tensors...
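A rough per-process sketch of this fast path for `split=0`, under the assumption that a local reshape is valid whenever each process's chunk holds a whole number of rows of the new shape; `local_reshape_possible` is a hypothetical helper, not HeAT's actual implementation.

```python
import math
import torch

def local_reshape_possible(lshape, new_shape, split=0):
    # Hypothetical check: with split=0, a purely local reshape works if the
    # locally held elements form a whole number of "rows" of the new shape,
    # so no elements need to cross process boundaries.
    row_elems = math.prod(new_shape[1:])
    return split == 0 and math.prod(lshape) % row_elems == 0

# Example: local chunk of shape (2, 6) from a global (4, 6) array, split=0,
# reshaped to global (4, 2, 3) -- each process reshapes its own block.
local = torch.arange(12).reshape(2, 6)
new_shape = (4, 2, 3)
if local_reshape_possible(tuple(local.shape), new_shape):
    local_reshaped = local.reshape(-1, *new_shape[1:])  # shape (2, 2, 3)
```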
**Related** #870
**Feature functionality** As per #945, it is possible to pass a condition to `_operations.__binary_op` via the kwarg `where`. This feature is available in `ht.divide`. Extend the feature to all...
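For context, this is NumPy's `where` keyword for binary ufuncs, which the feature appears to follow: the operation is applied only where the condition is True, and the remaining entries of `out` are left untouched. The example below uses NumPy purely to illustrate those semantics; per the issue, `ht.divide` already accepts the same kwarg.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.0, 0.0, 3.0, 0.0])

# Divide only where b is non-zero; other entries of `out` keep their
# pre-filled value (zeros here).
out = np.zeros_like(a)
np.divide(a, b, out=out, where=(b != 0))
print(out)  # [0.5 0.  1.  0. ]
```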