Lior Uzan
Thank you for your amazing work! That is understood, but my situation requires installing (not using) the package on a server with no GPU. If you had a .whl file,...
That's fine; it gives you the states table it uses, so you can manually change things per weight tensor (if you wish), but you don't actually pass it back to...
Bump, I'd also like to know how optim was wrapped.
I'm not sure that's the right way of using it; it would be more efficient to call autograd.optim.adam() just once and reuse the returned function (because otherwise the optimState is...
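A minimal sketch of the call-once/reuse pattern described here, assuming the wrapper returns a step function plus the per-tensor states table and takes (gradFn, optimState, params); the exact signatures and argument order are assumptions for illustration, not a copy of the library docs:

```lua
local torch = require 'torch'
local autograd = require 'autograd'

-- toy model: a single linear layer with a squared-error loss
local params = { W = torch.randn(1, 4), b = torch.randn(1) }
local function loss(p, x, y)
   local err = p.W * x + p.b - y
   return torch.sum(torch.cmul(err, err))
end
local dloss = autograd(loss)   -- gradient function: returns (grads, loss)

-- call autograd.optim.adam() ONCE so the optimState (adam moments, etc.)
-- persists across iterations instead of being recreated every step
local optimState = { learningRate = 1e-3 }
local step, states = autograd.optim.adam(dloss, optimState, params)

for i = 1, 100 do
   local x, y = torch.randn(4), torch.randn(1)
   local grads, l = step(x, y)   -- reuse the same returned function each iteration
end

-- `states` is the per-weight-tensor state table mentioned above: you can inspect
-- or tweak its entries, but you don't pass it back to the optimizer yourself.
```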
Reporting the same issue: running on 4 P100 GPUs on Ubuntu 16.04, this happens to me exactly as reported.
try my fork: https://github.com/ghostcow/caffe
The issue stems from faster-whisper not pinning a specific version in [requirements.txt](https://github.com/SYSTRAN/faster-whisper/blob/master/requirements.txt): ctranslate2>=4.0,
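Until this is changed upstream, a hedged workaround is to constrain ctranslate2 yourself at install time; the version below is only an illustrative placeholder, not an officially recommended pin:

```text
# constraints.txt (substitute the last ctranslate2 release that works in your setup)
ctranslate2==4.4.0
```

Installing with `pip install -c constraints.txt faster-whisper` then holds ctranslate2 at that version even though faster-whisper's own requirement allows anything in the >=4.0 range.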
can this be merged?
@jbschlosser what variance did you see in your results?