# litenn
Lightweight machine learning library based on OpenCL 1.2
LiteNN is a lightweight machine learning library based on OpenCL 1.2 and written in pure Python. It is suitable for the most popular ML tasks such as regression, recognition, classification, autoencoders, and GANs.
## Features
| Feature | Description |
| --- | --- |
| Written in pure Python | Nothing to build from source! No headaches with cmake, bazel, compilers, environments, etc. |
| Future-proof | Unlike CUDA, OpenCL 1.2 does not break backward compatibility with new video cards, so your app will keep working on future devices. |
| Simplified and PyTorch-like | A PyTorch-like but more lightweight architecture with simplified internals. |
| Easy to experiment | Implement your own custom GPU-accelerated ops much faster by writing OpenCL C source as text directly in Python. There is nothing to compile or build from source (see the sketch after this table). |
| Clean user namespace | litenn is the namespace for users. You will not see internal classes or functions in your VS Code IntelliSense hints. Everything in the litenn namespace is ready to use, carries editor hints, and its source code can be explored directly from your IDE. |
| Minimal dependencies | numpy only |
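
The "easy to experiment" point refers to keeping OpenCL C kernel source as a plain Python string and compiling it at runtime, with no offline build step. The sketch below illustrates that general workflow using PyOpenCL as a stand-in; it is not litenn's own op API (see the Developer guide for that), just a minimal demonstration of embedding OpenCL C as text directly in Python.

```python
# Generic illustration (PyOpenCL, not litenn's API): an OpenCL C kernel
# written as a Python string and built at runtime, no cmake/bazel involved.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()      # pick any available OpenCL device
queue = cl.CommandQueue(ctx)

# The GPU op is plain OpenCL C source kept inside the Python file.
kernel_src = """
__kernel void scaled_add(__global const float* a,
                         __global const float* b,
                         __global float* out,
                         const float scale)
{
    int i = get_global_id(0);
    out[i] = a[i] + scale * b[i];
}
"""
program = cl.Program(ctx, kernel_src).build()   # compiled on the fly

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# Launch one work-item per element; scalar args are passed as numpy scalars.
program.scaled_add(queue, a.shape, None, a_buf, b_buf, out_buf, np.float32(2.0))

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print(np.allclose(out, a + 2.0 * b))   # True if the kernel ran correctly
```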
## Getting started

- User guide
- Developer guide
- LiteNN-apps
#machinelearning #machine-learning #deep-learning #deeplearning #deep-neural-networks #neural-networks #neural-nets #opencl