BinaryNet.pytorch
Is there any reduction in memory?
Hi, thank you for your PyTorch version of BinaryNet.
I am wondering whether there is any reduction in memory. I call the function Quantize() in the file binary_modules so that I can compact each parameter to 8 bits. However, the CPU still allocates 32 bits to each float number, so as a result there is no memory reduction. Do you have any ideas?
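To illustrate the behaviour being described (this is a minimal sketch, not the repo's Quantize() implementation): restricting a tensor's values to an 8-bit range does not change its storage dtype, so each element still occupies 4 bytes.

```python
import torch

# Hypothetical example: quantize the *values* of a float32 tensor to an
# 8-bit integer range, without changing the dtype.
w = torch.randn(1000, 1000)                              # float32 weights
w_quant = torch.clamp(torch.round(w * 127), -128, 127)   # values fit in 8 bits, dtype is still float32

print(w_quant.dtype)           # torch.float32
print(w_quant.element_size())  # 4 bytes per element -> no memory reduction
```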
Looking forward to your reply
Just a wild guess, but I think changing the dtype of the tensor should do the trick.
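A rough sketch of that idea, assuming the quantized values fit in `torch.int8` (names here are illustrative, not from the repo):

```python
import torch

w = torch.randn(1000, 1000)

# Quantize the values, then actually store them in an 8-bit dtype.
w_int8 = torch.clamp(torch.round(w * 127), -128, 127).to(torch.int8)

print(w.element_size() * w.nelement())            # 4,000,000 bytes (float32)
print(w_int8.element_size() * w_int8.nelement())  # 1,000,000 bytes (int8)

# Caveat: standard float layers (e.g. nn.Linear) expect float inputs, so the
# int8 copy is mainly useful for storage; cast back before computing.
w_back = w_int8.to(torch.float32) / 127
```

Whether this actually helps depends on whether you only need the compact representation for storage, or also want the forward pass to run on 8-bit data (which would need integer/quantized kernels rather than a simple dtype cast).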