
A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends

Results: 118 Arraymancer issues

I have three suggestions: 1. Export the `cumsum` procedure: https://github.com/mratsim/Arraymancer/blob/1a2422a1e150a9794bfaa28c62ed73e3c7c41e47/src/arraymancer/ml/clustering/kmeans.nim#L26

```nim
proc cumsum[T: SomeFloat](p: Tensor[T]): Tensor[T] {.noInit.} =
  ## Calculates the cumulative sum of a vector.
  ## Inputs:
  ##   - p:...
```
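Since the linked `cumsum` is private to `kmeans.nim`, a minimal standalone sketch using only the public Arraymancer API could look like this (assumed implementation, not the library's own):

```nim
import arraymancer

proc cumsum[T: SomeFloat](p: Tensor[T]): Tensor[T] {.noInit.} =
  ## Cumulative sum of a 1-D tensor: result[i] = p[0] + ... + p[i].
  result = newTensor[T](p.shape[0])
  var acc = T(0)
  for i in 0 ..< p.shape[0]:
    acc += p[i]
    result[i] = acc

let c = [1.0, 2.0, 3.0].toTensor.cumsum  # [1.0, 3.0, 6.0]
```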

The following bench, reduced to only call `linear` (which is just a thin wrapper around BLAS), takes 1.6 s without `-d:openmp` and 15 s with `-d:openmp`:

```nim
import ../src/arraymancer
# Learning XOR...
```

optimization
bug
OpenMP

In Fortran it is possible to declare a procedure "elemental", meaning that although it is defined in terms of scalar arguments and return types, it can also be applied element-wise...
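Arraymancer already covers part of this with `map`/`map_inline`, which lift a scalar function to element-wise application over a tensor. A short sketch, assuming the current public API:

```nim
import arraymancer
import math

let t = [1.0, 4.0, 9.0].toTensor
# Apply the scalar `sqrt` element-wise; `x` is the implicit
# element variable of the `map_inline` template.
let roots = t.map_inline(sqrt(x))   # [1.0, 2.0, 3.0]
```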

I will list here some functions that I miss in the API, and I will keep updating the list as I need them. For a start, here are some useful functions that I will...

key feature

Using NumPy we can convert a 1-D array to a 2-D array as:

```python
a = np.arange(1, 10)
a.reshape((3, -1))
# array([[1, 2, 3],
#        [4, 5, 6],
#        [7, 8, 9]])
```

I am a newbie Arraymancer user....
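For comparison, a hedged Arraymancer equivalent: `arange` and `reshape` are part of the API, though whether `reshape` accepts `-1` for an inferred dimension may depend on the Arraymancer version, so the explicit form is shown:

```nim
import arraymancer

let a = arange(1, 10)     # 1-D tensor: 1, 2, ..., 9
let b = a.reshape(3, 3)   # explicit 3x3 reshape
```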

Since Nim 1.2, Nim adds `-std:gnu++14` to the C++ target, which we cannot avoid even after https://github.com/nim-lang/Nim/issues/13933, because the "override" only reorders the arguments passed to the compiler. Even...

need upstream fix
regression
Cuda

Hello! I implemented a simplified version of a BatchNorm layer (1-dimensional input only, no momentum), using the Linear layer as a reference example. This way I was able to add this layer...

Documentation
autograd

The most important blocker for Vulkan support was finding an AXPY example to understand how to use/allocate arbitrarily sized buffers without resorting to texture hacks (as in OpenGL before Cuda/OpenCL...

enhancement

For tensors stored on the CPU, there seems to be no implementation of `softmax_backward`, as referenced [here](https://github.com/mratsim/Arraymancer/blob/master/src/arraymancer/nn/activation/softmax.nim#L25). I looked through the source from when that file was added, and didn't find...
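For reference, given the cached softmax output `y` and the upstream gradient `g`, the softmax gradient is `dL/dx_i = y_i * (g_i - sum_j g_j * y_j)`, computed row-wise. A hedged sketch of what a CPU implementation could look like (the name `softmax_backward_sketch` is an assumption, not the library's API):

```nim
import arraymancer

proc softmax_backward_sketch[T: SomeFloat](grad, y: Tensor[T]): Tensor[T] =
  ## grad: upstream gradient dL/dy, shape [batch, n]
  ## y: cached softmax output, same shape
  let s = sum(grad *. y, axis = 1)  # per-row dot product, shape [batch, 1]
  result = y *. (grad -. s)         # broadcast the row sums over columns
```

Sanity check: an upstream gradient of all ones yields a zero gradient, since the softmax outputs sum to 1 per row.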

Currently, I use

```
nim c -d:release -d:danger -d:openblas -d:blas=libopenblas ex06_shakespeare_generator.nim
```

in MSYS2 + MinGW64 on Windows 10 64-bit. The released exe file needs some DLLs, which are summed...