Corey Lammie

20 comments by Corey Lammie

Hi @MySHworks, Thank you for reaching out! Could you please attach your simulation routines or minimal working examples so I can investigate further? The accuracy degradation observed will depend on...

Hi @MySHworks, Apologies for my late response! I have been focused on implementing support for comprehensive modeling of source and line resistance. I can confirm that your original issue was...

Hi @MySHworks, This fix has been merged to master in #88. I have tested it using `torch.nn.Sequential` containers with and without explicitly named layers (using `OrderedDict`), and with and without...
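
For reference, a minimal sketch of the two `torch.nn.Sequential` construction styles mentioned above (with and without explicitly named layers via `OrderedDict`); the layer shapes are arbitrary and purely illustrative:

```
import torch
import torch.nn as nn
from collections import OrderedDict

# Sequential container without explicitly named layers
# (children are registered under indices "0", "1", ...).
unnamed = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
)

# Sequential container with explicitly named layers, using an OrderedDict.
named = nn.Sequential(OrderedDict([
    ("fc1", nn.Linear(8, 16)),
    ("act", nn.ReLU()),
    ("fc2", nn.Linear(16, 4)),
]))

x = torch.randn(2, 8)
print(unnamed(x).shape, named(x).shape)  # both: torch.Size([2, 4])
```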

Hi @MySHworks, No problem at all! Looking at the most recent PyTorch documentation, it appears that both `torch.nn.ModuleList` and `torch.nn.ModuleDict` can be used to store sub-modules and `torch.nn.Sequential` containers. It...
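
A minimal sketch of how `torch.nn.ModuleList` and `torch.nn.ModuleDict` can hold sub-modules, including nested `torch.nn.Sequential` containers; the module names and shapes here are illustrative only and are not part of MemTorch:

```
import torch
import torch.nn as nn

class Container(nn.Module):
    def __init__(self):
        super().__init__()
        # ModuleList holding sub-modules, including a nested Sequential container.
        self.blocks = nn.ModuleList([
            nn.Linear(8, 8),
            nn.Sequential(nn.Linear(8, 8), nn.ReLU()),
        ])
        # ModuleDict holding named sub-modules.
        self.heads = nn.ModuleDict({
            "classifier": nn.Linear(8, 4),
            "regressor": nn.Sequential(nn.Linear(8, 1)),
        })

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return self.heads["classifier"](x)

print(Container()(torch.randn(2, 8)).shape)  # torch.Size([2, 4])
```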

Hi @MySHworks, Methods defined in [memtorch.bh.Quantize.py](https://github.com/coreylammie/MemTorch/blob/master/memtorch/bh/Quantize.py) and `memtorch_binding.quantize()` call C++ functions defined in [memtorch/cpp/quantize.cpp](https://github.com/coreylammie/MemTorch/blob/master/memtorch/cpp/quantize.cpp) using `pybind11` bindings. Previous versions of MemTorch have used quantization routines from the [pytorch-playground](https://github.com/aaron-xichen/pytorch-playground), which should...
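
For illustration only, the sketch below shows a generic uniform (linear) quantization routine in pure PyTorch, similar in spirit to the pytorch-playground helpers referenced above; it is an assumed simplification, not the exact routine implemented by MemTorch's C++ extension or its `pybind11` bindings:

```
import torch

def linear_quantize(tensor, bits):
    """Illustrative uniform (linear) quantization to a fixed number of bits.

    A sketch of the general technique only; not MemTorch's implementation.
    """
    n_levels = 2 ** bits - 1
    t_min, t_max = tensor.min(), tensor.max()
    scale = (t_max - t_min) / n_levels
    if scale == 0:
        return tensor.clone()
    # Snap each element to the nearest of the 2**bits evenly spaced levels.
    return torch.round((tensor - t_min) / scale) * scale + t_min

x = torch.randn(4, 4)
print(linear_quantize(x, bits=4))
```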

Hi @MySHworks, Apologies for the delayed response! I have looked into this further, and both issues should now be fixed in #110.

Hi @nikhil-garg, Definitely. I have created three separate issues for (`torch.nn.RNN` and `torch.nn.RNNCell`), (`torch.nn.LSTM` and `torch.nn.LSTMCell`), and (`torch.nn.GRU` and `torch.nn.GRUCell`) modules in #92, #93, and #94, respectively. Currently, #85 has...

@jubueche No, neither works. The errors differ between `bfloat16` and `float16`, and between the `torch tile` and `CUDA bindings`. MWE:

```
import torch
import torch.nn as nn
import torch.nn.functional as...
```

> @coreylammie in which GPUs did you try this one?

A100_80GB. Once we do figure this out, it would be great to add an example for it. I intend on...

> @coreylammie Note that your MWE was not even training in FP32. I have changed it to the below:
>
> ```
> import tqdm
> import torch
> import...
> ```