hikettei
We aim to generalize the APIs and optimization techniques across different computer architectures; in particular, we also have to add GPU support and remove the CPU dependencies, because cl-waffe2 was originally designed...
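As a rough illustration of what a backend-generic API could look like, here is a CLOS-style dispatch sketch; the names (`cpu-backend`, `gpu-backend`, `backend-add`) are hypothetical and are not cl-waffe2's actual API.

```lisp
;; Generic sketch of backend dispatch: each backend class specializes the
;; same generic function, so ops are written once against the backend object.
(defclass cpu-backend () ())
(defclass gpu-backend () ())

(defgeneric backend-add (backend a b)
  (:documentation "Element-wise addition, dispatched on the backend object."))

(defmethod backend-add ((backend cpu-backend) a b)
  ;; Plain CPU path over simple vectors.
  (map 'vector #'+ a b))

(defmethod backend-add ((backend gpu-backend) a b)
  ;; Placeholder: a real GPU backend would launch a kernel here instead.
  (map 'vector #'+ a b))
```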
TODO list of things I'm currently working on and open issues

- [ ] Refactoring around Docgen (when I feel like it)
- [ ] metal backend
- [ ] cl-waffe2/nn -> generic
- [ ] Stable Diffusion Inference

# Environments / Backends

- [...
This PR is a work in progress.

## Changes

- Simplified arithmetic operations: from `Add/Sub/Mul/Div, ScalarAdd/Sub/Mul/Div, Inverse` -> `Add/Sub/Mul/Div`
- Renamed some functions to keep consistency:
  - `!inverse` -> `!reciprocal`
  - ...
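For example, with the renamed API the element-wise reciprocal is taken with `!reciprocal` instead of `!inverse`; `ax+b` and `proceed` below follow cl-waffe2's documented usage, but treat this snippet as an unverified sketch rather than code from this PR.

```lisp
;; Sketch of the renamed API: !reciprocal replaces the old !inverse.
;; ax+b builds a tensor whose elements are a*index + b, so this one is all 2.0s.
(proceed
  (!reciprocal (ax+b `(3 3) 0.0 2.0)))  ; => every element becomes 0.5
```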
- TODO
  - [ ] Step-by-step tutorial for those who aren't familiar with Deep Learning.
  - [ ] Tutorial written in Jupyter Notebook
  - [ ] Translate `tutorial_jp.lisp` into English...
TODO
1. ~~Optimize the overhead of the generic functions `call-forward`/`call-backward` (less important)~~ (Done)
3. ~~Make `print-object` richer~~ (Done)
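As an illustration of the `print-object` item, here is a minimal sketch of a richer printer; `my-tensor` and its slots are hypothetical stand-ins, not cl-waffe2's actual tensor class.

```lisp
;; Hypothetical tensor class used only to illustrate a richer PRINT-OBJECT.
(defclass my-tensor ()
  ((shape :initarg :shape :reader tensor-shape)
   (dtype :initarg :dtype :reader tensor-dtype)))

;; Prints e.g. #<MY-TENSOR (3 3) :dtype :FLOAT32> instead of an opaque object.
(defmethod print-object ((tensor my-tensor) stream)
  (print-unreadable-object (tensor stream :type t)
    (format stream "~a :dtype ~a" (tensor-shape tensor) (tensor-dtype tensor))))
```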
Almost all of the features are based on mgl-mat, so it should not be a rocky road. If possible, I also want the backends to support FP16.
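For reference, this is roughly what the mgl-mat side looks like today; note that a half-float ctype (FP16) is not something mgl-mat currently provides, so the sketch sticks to `:float`.

```lisp
;; Minimal mgl-mat usage as relied on by the current backends; an FP16 backend
;; would need a half-float ctype, which mgl-mat does not offer today.
(ql:quickload :mgl-mat)

(let ((m (mgl-mat:make-mat '(2 2) :ctype :float :initial-element 1.0)))
  (print (mgl-mat:mat-dimensions m)))  ; => (2 2)
```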
To update the computation nodes, `(setf !aref)` needs to be called like:

```lisp
(setq tensor (setf (!aref tensor ~) x))
```

This should be rewritten with a macro, making it easier to...
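A minimal sketch of such a wrapper macro; the name `update-aref!` is hypothetical and not part of cl-waffe2.

```lisp
;; Hypothetical helper: rebinds PLACE to the node returned by (setf !aref),
;; so callers don't have to write the setq/setf idiom by hand.
(defmacro update-aref! (place (&rest subscripts) value)
  `(setq ,place (setf (!aref ,place ,@subscripts) ,value)))

;; (update-aref! tensor (~) x) expands into
;; (setq tensor (setf (!aref tensor ~) x))
```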