Arraymancer

A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends

118 Arraymancer issues

Nim added hot-code reloading support at the language level. Arraymancer should be tested with it to:
- show how to use it in a hot-code-reloading setting
- ensure...

Now that Travis supports ARM, it would be useful to test it to ensure Arraymancer is usable on low-powered devices and smartphones.

tests

See:
- https://www.reddit.com/r/MachineLearning/comments/dla75r/d_retrain_your_models_the_adam_optimizer_in/
- https://github.com/pytorch/pytorch/pull/22628

from #384 > While I'm adding new features to PCA, I'm considering a couple of additional options: > > - Adding a preprocess enum to the signature: > ```Nim...

Instead of using `ValueError` and `IndexError` everywhere, it is probably more informative to introduce more descriptive error types. For example:
- IncompatibleShapeError (for tensors that can't be multiplied together; see the sketch below)
- ...

enhancement
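
One way this could look, as a minimal sketch (only `IncompatibleShapeError` comes from the issue; the check and the calling code are illustrative): derive the new error type from the stdlib exception it replaces, so existing `except ValueError` handlers keep working.

```nim
type
  IncompatibleShapeError* = object of ValueError

proc checkMatmulShapes(aCols, bRows: int) =
  ## Raise a descriptive error instead of a bare ValueError.
  if aCols != bRows:
    raise newException(IncompatibleShapeError,
      "Cannot multiply tensors: " & $aCols & " columns vs " & $bRows & " rows")

when isMainModule:
  try:
    checkMatmulShapes(3, 4)
  except IncompatibleShapeError as e:
    echo e.msg
```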

As mentioned in https://github.com/nim-lang/Nim/issues/12036#issuecomment-524890898, emit interpolation does not use backticks, unlike quote do and template identifier injection. A thorough review of emit interpolation should be done to avoid future breakage...

Edited: OpenMP -> Archlinux. I never run the full test suite on Arch because it's impossible to do "nimble test -d:blas=cblas": contrary to Debian/Ubuntu and Travis, the BLAS...

bug
Distro specific

Two variants of optimizers already implemented in Arraymancer have been gaining traction recently: AdamW and SGDW. Both were proposed in the 2017 paper [_Decoupled Weight Decay...
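
For reference, a minimal standalone sketch of the decoupled update from that paper, operating on plain `seq[float]` rather than Arraymancer's tensors or optimizer types (all names here are illustrative, not Arraymancer API). It follows the common formulation where the decay term is scaled by the learning rate.

```nim
import std/math

proc adamWStep(params, grads, m, v: var seq[float],
               t: int, lr = 1e-3, beta1 = 0.9, beta2 = 0.999,
               eps = 1e-8, weightDecay = 1e-2) =
  ## One AdamW step: the usual Adam update on the raw gradient,
  ## plus weight decay applied directly to the weights.
  for i in 0 ..< params.len:
    m[i] = beta1 * m[i] + (1.0 - beta1) * grads[i]
    v[i] = beta2 * v[i] + (1.0 - beta2) * grads[i] * grads[i]
    let mHat = m[i] / (1.0 - pow(beta1, t.float))
    let vHat = v[i] / (1.0 - pow(beta2, t.float))
    params[i] -= lr * mHat / (sqrt(vHat) + eps)
    # Decoupled part: decay the weight itself rather than folding
    # `weightDecay * params[i]` into the gradient before the Adam step.
    params[i] -= lr * weightDecay * params[i]
```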

Currently only stochastic gradient descent is supported; at the very minimum it would be nice to support:
- [ ] RMSProp (see the sketch below)
- [x] Adam
- [x] SGD with Momentum
- ...

key feature
autograd
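
RMSProp is the unchecked item in the list above. A minimal sketch of its update rule, again on plain `seq[float]` and independent of Arraymancer's autograd or optimizer machinery:

```nim
import std/math

proc rmspropStep(params, grads, cache: var seq[float],
                 lr = 1e-3, decay = 0.9, eps = 1e-8) =
  ## One RMSProp step: keep an exponential moving average of squared
  ## gradients and scale each update by its inverse square root.
  for i in 0 ..< params.len:
    cache[i] = decay * cache[i] + (1.0 - decay) * grads[i] * grads[i]
    params[i] -= lr * grads[i] / (sqrt(cache[i]) + eps)
```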

After #363 we have einsum, but we can't backprop through it. Einsum backprop implementations:
- https://github.com/rsokl/MyGrad/blob/5ea7b929108c90a83c37c3d619cef0a6f6596e28/mygrad/linalg/ops.py#L161-L280
- https://github.com/tensorflow/tensorflow/blob/eb9b2659f831c5ac60eb833dc4d04a161e588833/tensorflow/python/ops/special_math_ops.py#L137-L157

I'm not too sure how it works in PyTorch; it seems magical: https://github.com/pytorch/pytorch/pull/6307/files

autograd
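
For intuition, the textbook matrix-multiplication special case (not taken from any of the links above, and ignoring subtleties such as repeated or broadcast indices) shows the general pattern: the gradient of an einsum with respect to each operand is itself an einsum, with that operand's subscripts and the output's subscripts swapped.

$$
C_{ij} = \sum_k A_{ik} B_{kj}
\;\Rightarrow\;
\frac{\partial L}{\partial A_{ik}} = \sum_j \frac{\partial L}{\partial C_{ij}} B_{kj},
\qquad
\frac{\partial L}{\partial B_{kj}} = \sum_i A_{ik} \frac{\partial L}{\partial C_{ij}}
$$

In subscript-string terms, the forward `"ik,kj->ij"` gives `"ij,kj->ik"` for the gradient w.r.t. the first operand and `"ik,ij->kj"` for the second.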