heat

Distributed tensors and Machine Learning framework with GPU and MPI acceleration in Python

245 issues, sorted by recently updated

A consortium is defining an Array API: https://data-apis.org/array-api/latest/index.html It seems desirable for HeAT to be as compliant as possible. The following features are missing in HeAT (others are probably incomplete):...

GSoC
array API
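
For orientation, a tiny compliance probe one could run against heat's namespace. This is only a sketch: the function list below is a small illustrative subset of the Array API standard chosen for this example, not the specification itself.

```python
import heat as ht

# Illustrative subset of functions the Array API standard requires;
# the real standard is much larger (see the link above).
REQUIRED = ["asarray", "astype", "broadcast_to", "matmul", "tensordot",
            "unique_values", "sort", "argsort"]

def missing_functions(namespace):
    """Return the names from REQUIRED that `namespace` does not provide."""
    return [name for name in REQUIRED if not hasattr(namespace, name)]

print(missing_functions(ht))
```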

Does heat have (or plan to have) an implementation of linear operators like the one in scipy, https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.LinearOperator.html#scipy.sparse.linalg.LinearOperator? I think cupy now has something similar. For large sparse matrices like...

GSoC22
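
For reference, the scipy interface in question: a matrix-free operator defined by a `matvec` callable, usable with iterative solvers without ever materializing the matrix. A heat analogue would presumably mirror this interface; the sketch below only shows the scipy side.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 1000
diag = np.arange(1, n + 1, dtype=float)

# Matrix-free operator: y = A @ x is defined by a callable instead of
# an explicit (possibly huge) matrix. Here A is a diagonal SPD matrix.
A = LinearOperator((n, n), matvec=lambda x: diag * x, dtype=float)

b = np.ones(n)
x, info = cg(A, b)   # conjugate gradient never materializes A
assert info == 0
```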

**Description** The edge case of performing `ht.diff(array)` along an axis of size 1 is handled differently when array...

bug
:exclamation:
arithmetics
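
For comparison, numpy's behavior on this edge case is well defined: differencing along an axis of size 1 yields size 0 along that axis. A minimal reproduction sketch; the heat call mirrors the report, and its actual output is what the issue is about.

```python
import numpy as np
import heat as ht

a_np = np.ones((3, 1))
print(np.diff(a_np, axis=1).shape)   # (3, 0): one fewer element along axis 1

a_ht = ht.ones((3, 1))
# Per the report, this size-1-axis edge case is handled differently in
# heat; for numpy compatibility the result should also have shape (3, 0).
print(ht.diff(a_ht, axis=1).shape)
```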

Walk-through of using data-parallel neural networks (DPNNs) with or without DASO. Needs a dataloader, an optimizer, a network, a loss function, a training loop, and a validation loop (a skeleton is sketched below).

documentation
Data-parallel NNs / DASO
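
A skeleton of the pieces such a walk-through needs, sketched in plain PyTorch. The heat-specific wrapping (e.g. a DASO or data-parallel optimizer wrapper around `opt` and `net`) is deliberately left as a comment, since the exact heat API is not shown in the excerpt.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Dataloaders (synthetic data, just for the sketch).
train_dl = DataLoader(TensorDataset(torch.randn(256, 8), torch.randn(256, 1)),
                      batch_size=32, shuffle=True)
val_dl = DataLoader(TensorDataset(torch.randn(64, 8), torch.randn(64, 1)),
                    batch_size=32)

net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # network
loss_fn = nn.MSELoss()                                              # loss
opt = torch.optim.SGD(net.parameters(), lr=0.01)                    # optimizer
# With DASO, `net` and `opt` would be wrapped by heat's data-parallel
# machinery at this point; that wrapper is not part of this sketch.

for epoch in range(2):
    net.train()                      # training loop
    for x, y in train_dl:
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()
    net.eval()                       # validation loop
    with torch.no_grad():
        val_loss = sum(loss_fn(net(x), y).item() for x, y in val_dl)
    print(f"epoch {epoch}: val loss {val_loss:.4f}")
```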

**Feature functionality** The current dataloaders in heat could be greatly improved by increasing their compatibility with torch's dataloader methods. For example, loading data from folders would be very useful.

enhancement
heat-dev week
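
For reference, the torch-side pattern the request points at: torchvision's `ImageFolder` builds a dataset from a directory tree and `DataLoader` handles batching; a heat-compatible loader would presumably mirror this. The sketch assumes a local `data/` directory laid out with one subdirectory per class.

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# ImageFolder infers labels from subdirectory names: data/<class>/<img>.
dataset = datasets.ImageFolder(
    "data/",
    transform=transforms.Compose([transforms.Resize((224, 224)),
                                  transforms.ToTensor()]),
)
loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=2)

for images, labels in loader:
    print(images.shape, labels.shape)  # e.g. torch.Size([16, 3, 224, 224])
    break
```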

**Feature functionality** Since torch 1.7, backends other than CPU and CUDA are supported. Among them is OpenCL. HeAT should be able to use it. Pay close attention to whether torch's OpenCL...

enhancement
interoperability
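
A minimal availability probe, offered as a sketch under one assumption: `"opencl"` parses as a torch device string, but whether tensors can actually be allocated there depends on the build, so the check is wrapped in a try/except.

```python
import torch

def opencl_available() -> bool:
    """Best-effort probe: try to allocate a tensor on the 'opencl' device.

    torch.device("opencl") parses as a device string, but allocation only
    succeeds if the torch build actually ships an OpenCL backend.
    """
    try:
        torch.empty(1, device=torch.device("opencl"))
        return True
    except (RuntimeError, NotImplementedError):
        return False

device = torch.device("opencl") if opencl_available() else torch.device("cpu")
print(f"using device: {device}")
```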

**Feature functionality** Since torch 1.7, backends other than CPU and CUDA are supported. Among them is HIP, AMD's counterpart to CUDA. HeAT should be able to use it. Moreover,...

enhancement
good first issue
API
GSoC22
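
One fact relevant here: PyTorch's ROCm builds expose HIP through the existing `torch.cuda` namespace, so HIP support largely means not hard-coding the assumption that a `"cuda"` device implies an NVIDIA GPU. A small detection sketch:

```python
import torch

def gpu_backend() -> str:
    """Report which GPU backend this torch build targets.

    ROCm builds reuse the `cuda` device namespace, so torch.cuda.*
    works on AMD GPUs; torch.version.hip distinguishes the two.
    """
    if torch.version.hip is not None:
        return "hip (ROCm build; accessed via the 'cuda' device)"
    if torch.version.cuda is not None:
        return "cuda (NVIDIA build)"
    return "cpu-only build"

print(gpu_backend())
print("GPU available:", torch.cuda.is_available())
```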

[...] generally, however, I would obviously like to have something like an `ht.view` of the data, in particular a silent return of it. I am not 100% sure whether we can actually...

API
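
For intuition, the torch semantics the request points at: a view shares storage with its base tensor, so writes through either are visible in both and no copy is made. Whether this can be offered safely for a distributed DNDarray is exactly the open question in the issue.

```python
import torch

base = torch.zeros(6)
v = base.view(2, 3)   # no copy: v shares storage with base

v[0, 0] = 42.0        # a write through the view...
print(base[0])        # ...is visible in the base tensor: tensor(42.)
print(v.data_ptr() == base.data_ptr())  # True: same underlying buffer
```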

The way we do it right now is, e.g., `ht.zeros(5).larray + torch.randn(5)`, and then wrap the result into a (distributed) DNDarray again. It would be less frustrating if this step were...

enhancement
student project
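
The workaround described, spelled out, plus a hedged sketch of the kind of helper the issue asks for; `add_torch` is a hypothetical name for illustration, not a heat API.

```python
import heat as ht
import torch

# Current workaround: drop to the process-local torch tensor (.larray),
# do the torch arithmetic, then re-wrap the result by hand.
local = ht.zeros(5).larray + torch.randn(5)
result = ht.array(local)

# Hypothetical helper (not a heat API): do the local operation and
# re-wrap in one step, keeping the original split axis.
def add_torch(dnd, t):
    return ht.array(dnd.larray + t, split=dnd.split)

print(add_torch(ht.zeros(5), torch.randn(5)))
```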