Malte J. Rasch

31 comments by Malte J. Rasch

The ``scale_down`` and ``scale_up`` parameters are the minimal update step sizes, including device-to-device variations (see e.g. [how it is used in case of the linear step device](https://github.com/IBM/aihwkit/blob/master/src/rpucuda/cuda/rpucuda_linearstep_device.cu#L37)). If...
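
The idea of per-device step sizes can be sketched in plain NumPy (this is a conceptual illustration, not the aihwkit C++ API; `dw_min`, the 30% variation, and the device count are made-up values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal minimal update step size (hypothetical illustration value).
dw_min = 0.001
# Relative device-to-device variation (hypothetical illustration value).
dw_min_dtod = 0.3

n_devices = 4
# Each device draws its own up/down step size once, at "fabrication" time,
# so scale_up/scale_down differ from device to device.
scale_up = dw_min * (1.0 + dw_min_dtod * rng.standard_normal(n_devices))
scale_down = dw_min * (1.0 + dw_min_dtod * rng.standard_normal(n_devices))

# A single "up" pulse increments each device's weight by its own scale_up.
w = np.zeros(n_devices)
w += scale_up
```

After one pulse the weights already differ across devices, which is exactly the device-to-device variation the parameters encode.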

Hi @ZhenmingYu: The `applyWeightUpdate` function is currently essentially unused (and not exposed to pytorch); it is only an experimental mechanism from C++ that we used to simulate (approximate) data-parallel...

Hi @SaBaKa2020 , thank you for your question (best to open up a new issue for any new question, to make it easier to find). We have the option to...

Hi @SaBaKa2020 , thanks again! Currently, the simulator does not do bit-slicing in the way you are describing it by default; however, this is definitely on our roadmap to add...

Hi @frmar440 , many thanks, that sounds like a great contribution! Please reach out if you have any questions.

Maybe the overloading of the persistent weights causes some trouble, but in principle, as long as you set the weights in `update_once`, they should be reflected in the python...
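
The expectation can be illustrated with a toy Python analogue (conceptual only; `ToyDevice` and its `update_once` are hypothetical stand-ins, not the aihwkit device API): because `update_once` modifies the weight array in place, any other holder of the same array sees the change.

```python
import numpy as np

class ToyDevice:
    """Toy device: update_once changes the shared weight array in place."""

    def __init__(self, dw_min=0.01):
        self.dw_min = dw_min  # hypothetical minimal step size

    def update_once(self, weights, idx, sign):
        # In-place modification: no copy is made, so the caller's view
        # of `weights` reflects the update immediately.
        weights[idx] += sign * self.dw_min

weights = np.zeros(3)   # "python side" view of the weights
dev = ToyDevice()
dev.update_once(weights, 1, +1)
```

If `update_once` instead wrote to a private copy, the caller's `weights` would stay at zero, which is the kind of mismatch overloaded persistent weights could introduce.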

Hi @mz11235, thanks for raising the issue. I have tried to reproduce it, but for me it also trains for `n_epoch=100`.
```
09:40:30 --- Started LeNet5 Training
09:40:55 --- Epoch:...
```

I am closing this issue for now. Please re-open if you find that the problem persists.

Hi @nkyungmi, many thanks for your input! This is great and very helpful in general. Note, however, that most device models already respect the symmetry point through their parameter settings....
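
For intuition, here is a minimal sketch of why a soft-bounds device model has a built-in symmetry point (a toy model with made-up `step_up`/`step_down` functions and parameter values, not aihwkit code): the up-step shrinks as the weight approaches the upper bound and the down-step shrinks toward the lower bound, so alternating pulses settle where the two steps balance.

```python
def step_up(w, dw=0.01, w_max=1.0):
    # Up-step magnitude shrinks as w approaches w_max (soft bound).
    return dw * (1.0 - w / w_max)

def step_down(w, dw=0.01, w_min=-1.0):
    # Down-step magnitude shrinks as w approaches w_min.
    return dw * (1.0 - w / w_min)

# Alternate up/down pulses: w drifts to the symmetry point, where
# step_up(w) == step_down(w) (here near 0 for symmetric bounds).
w = 0.8
for _ in range(2000):
    w += step_up(w)
    w -= step_down(w)
```

With symmetric bounds the steps balance at `w = 0`; asymmetric bounds or unequal `dw` values shift the symmetry point accordingly.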

The package install does not support CUDA at the moment; for a CUDA installation you need to compile the code yourself, following the development installation instructions. Which OS are...