Daniel Coquelin
## Description
Optimizations for `pow`.

Issue/s resolved: #789 (although "resolved" is a strong word... more work required)

## Changes proposed:
- avoid calling `_binary_op` unless absolutely necessary and use a simpler...
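A rough sketch of the kind of shortcut this describes, operating on the process-local torch tensor for scalar exponents instead of the general binary-operation path; the helper name and cutoff are illustrative, not the actual patch:

```python
from typing import Optional, Union

import torch


def _pow_local_fast_path(local_tensor: torch.Tensor,
                         exponent: Union[int, float]) -> Optional[torch.Tensor]:
    """Hypothetical fast path: for scalar exponents, work directly on the
    process-local tensor instead of building the full broadcasted binary
    operation. Returns None when the fast path does not apply."""
    if isinstance(exponent, (int, float)):
        if exponent == 2:
            return local_tensor * local_tensor          # cheaper than pow for squares
        if exponent == 3:
            return local_tensor * local_tensor * local_tensor
        return torch.pow(local_tensor, exponent)         # still purely local
    return None  # non-scalar exponent: fall back to the general _binary_op path
```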
## Description
Jitting for skew and kurtosis. Possibly more changes to come here; this is the branch to be used during benchmarking and testing of the moment functions.

## Changes...
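For illustration, a minimal TorchScript-jitted local kernel for a fourth standardized moment (Fisher kurtosis); the real heat functions also have to combine per-process moments over MPI, which is omitted here, and the function name is an assumption:

```python
import torch


@torch.jit.script
def _kurtosis_local(x: torch.Tensor) -> torch.Tensor:
    # Jitted, purely local computation of the excess kurtosis of a 1-D tensor.
    n = x.numel()
    mu = torch.mean(x)
    diff = x - mu
    m2 = torch.sum(diff * diff) / n       # second central moment
    m4 = torch.sum(diff.pow(4)) / n       # fourth central moment
    return m4 / (m2 * m2) - 3.0           # Fisher definition (normal -> 0)
```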
## Description
Implement the `__partitioned__` attribute on the DNDarray for compatibility with daal4py (https://github.com/IntelPython/DPPY-Spec/issues/3). At the moment, this is not used by heat internally. However, there are some ideas about...
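Roughly, such a protocol exposes a metadata dictionary describing the global shape and the per-rank chunks. The sketch below is illustrative only; the actual key names and layout are defined by the DPPY spec linked above, not here:

```python
import torch

# This rank's slice of a (20, 10) array split row-wise over 4 ranks.
local_chunk = torch.arange(50).reshape(5, 10)

partitioned_sketch = {
    "shape": (20, 10),                 # global shape of the distributed array
    "partitions": {
        (0, 0): {                      # one entry per partition
            "start": (0, 0),           # global offset of this partition
            "shape": (5, 10),          # local shape on the owning rank
            "data": local_chunk,       # local handle (only set on the owner)
            "location": [0],           # rank(s) holding the partition
        },
        # ... entries for the partitions owned by the other ranks
    },
}
```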
**Feature functionality**
See title.

**Motivation**
When using ht.allclose with float32, we expect to have lower precision. But the current limits on `ht.allclose` are derived from numpy (https://numpy.org/doc/stable/reference/generated/numpy.allclose.html), which uses 64...
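One way to make the tolerances dtype-aware is to derive them from the machine epsilon of the input dtype rather than hard-coding float64-oriented defaults. A minimal sketch, assuming a hypothetical helper and an arbitrary scaling (not heat's actual choice):

```python
import torch


def dtype_aware_tolerances(dtype: torch.dtype):
    """Hypothetical helper: pick rtol/atol defaults from the dtype's epsilon
    instead of numpy's float64-oriented 1e-5 / 1e-8."""
    eps = torch.finfo(dtype).eps
    return 100.0 * eps, 10.0 * eps     # (rtol, atol), illustrative scaling


a = torch.randn(1000, dtype=torch.float32)
b = a + 1e-6
rtol, atol = dtype_aware_tolerances(a.dtype)
print(torch.allclose(a, b, rtol=rtol, atol=atol))
```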
**Feature functionality**
In HPC data analytics we often encounter the problem that there is not one large .h5 file to be processed, but instead many single files (e.g., csv or...
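The per-rank distribution such a loader could do looks roughly like the sketch below, assuming mpi4py and pandas are available; the directory path and the round-robin assignment are illustrative, not an existing heat API:

```python
import glob

import pandas as pd
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.rank, comm.size

files = sorted(glob.glob("data/*.csv"))
my_files = files[rank::size]                      # round-robin split over ranks

frames = [pd.read_csv(f) for f in my_files]       # each rank reads its own share
local_block = pd.concat(frames) if frames else pd.DataFrame()
print(f"rank {rank}: {len(local_block)} rows from {len(my_files)} files")
```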
**Feature functionality**
Modify DASO to only use torch's distributed package, to make full use of torch's `Future` class in the distributed methods, and to take advantage of NCCL's GPU...
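A minimal sketch of the pattern this enables, assuming a process group has already been initialized (e.g. with the NCCL backend on GPUs); this is not DASO's actual code path:

```python
import torch
import torch.distributed as dist


def async_average_gradients(param: torch.nn.Parameter, world_size: int):
    # Launch a non-blocking all-reduce; a Work handle is returned immediately.
    work = dist.all_reduce(param.grad, op=dist.ReduceOp.SUM, async_op=True)
    # Work.get_future() exposes a torch.futures.Future, so computation can
    # overlap with communication and post-processing can be chained onto it.
    fut = work.get_future()
    return fut.then(lambda _: param.grad.div_(world_size))
```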
**Description**
Simple operations are showing a massive slowdown vs numpy/torch when run on a single MPI process.

**To Reproduce**
Steps to reproduce the behavior:
1. Which module/class/function is affected? -...
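An illustrative timing harness of the kind used to observe the gap, run with a single process (e.g. `mpirun -np 1 python bench.py`); the size and the choice of operation are arbitrary:

```python
import time

import torch
import heat as ht

n = 10_000_000
xt = torch.randn(n)
xh = ht.array(xt)          # same data, wrapped as a DNDarray

t0 = time.perf_counter()
for _ in range(100):
    _ = xt + xt
torch_s = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(100):
    _ = xh + xh
heat_s = time.perf_counter() - t0

print(f"torch: {torch_s:.3f}s  heat: {heat_s:.3f}s  ratio: {heat_s / torch_s:.1f}x")
```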
Walkthrough of using DPNNs with or without DASO. Needs a dataloader, optimizer, network, loss, train loop, and val loop.
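A bare-bones skeleton of those pieces in plain PyTorch, as a starting point; wrapping the model and optimizer for data-parallel training with or without DASO would slot in where noted and is omitted here, since that is what the walkthrough itself has to cover:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# network, loss, optimizer
net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
# <- the model/optimizer would be wrapped here for heat's data parallelism

# dataloaders (random data stands in for a real dataset)
train_ds = TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))
val_ds = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)
val_loader = DataLoader(val_ds, batch_size=32)

for epoch in range(3):
    # train loop
    net.train()
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(net(xb), yb)
        loss.backward()
        optimizer.step()

    # val loop
    net.eval()
    correct = total = 0
    with torch.no_grad():
        for xb, yb in val_loader:
            correct += (net(xb).argmax(dim=1) == yb).sum().item()
            total += yb.numel()
    print(f"epoch {epoch}: val acc {correct / total:.2f}")
```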
**Feature functionality**
The current dataloaders in heat can be greatly improved by increasing their compatibility with torch's dataloader methods. For example, loading data from folders would be very useful.
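The kind of folder-based loading this refers to, in plain torchvision terms; the directory layout (`data/train/<class>/<image>`) and transform choices are illustrative, and heat's dataloaders would need a distributed-aware equivalent of this pattern:

```python
import torchvision
from torchvision import transforms
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = torchvision.datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)

for images, labels in loader:
    print(images.shape, labels.shape)   # e.g. torch.Size([64, 3, 224, 224])
    break
```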
**Feature functionality**
Downcast the parameters for MPI communications in DP neural networks.
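A minimal sketch of the idea: cast to half precision for the collective and back afterwards, halving the bytes on the wire. It assumes an initialized process group and is not heat's actual implementation:

```python
import torch
import torch.distributed as dist


def allreduce_downcast(grad: torch.Tensor, world_size: int) -> torch.Tensor:
    buf = grad.to(torch.float16)                # downcast before communication
    dist.all_reduce(buf, op=dist.ReduceOp.SUM)  # reduced in half precision
    return buf.to(grad.dtype) / world_size      # upcast and average afterwards
```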