GRAN
Issue while Training on CPU
Hi, I just tried to train the model on a CPU, but I ran into some problems. During training I always get an output message saying that the loss at iteration x is 0, which seems odd:
```
NLL Loss @ epoch 0001 iteration 00000001 = 0.0000
NLL Loss @ epoch 0063 iteration 00000250 = 0.0000
```
After going through the code of gran_runner, I realized that the part of the code where the loss is calculated is never reached when no GPU is available, since batch_fwd is empty in that case:
https://github.com/lrjconan/GRAN/blob/43cb4433e6f69401c3a4a6e946ea75da6ec35d72/runner/gran_runner.py#L230-L259
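To illustrate what I mean, here is a minimal sketch of the pattern I suspect (not the actual GRAN code; `train_step`, `num_gpus`, and the batch-splitting scheme are my simplifications). The batch is partitioned across the available GPUs, so with zero GPUs the partition list is empty, the loss loop never runs, and the accumulated loss stays at its initial value of 0:

```python
def train_step(batch, num_gpus):
    # Split the batch across devices; with num_gpus == 0 this list is empty.
    batch_fwd = [batch[i::num_gpus] for i in range(num_gpus)]
    total_loss = 0.0
    for sub_batch in batch_fwd:  # never entered when batch_fwd == []
        total_loss += sum(sub_batch)  # stand-in for the real NLL computation
    return total_loss

def train_step_cpu_safe(batch, num_gpus):
    # A possible fallback: treat the whole batch as one partition on CPU.
    parts = [batch[i::num_gpus] for i in range(num_gpus)] if num_gpus > 0 else [batch]
    return sum(sum(p) for p in parts)

print(train_step([1.0, 2.0, 3.0], num_gpus=0))           # 0.0 (loss loop skipped)
print(train_step_cpu_safe([1.0, 2.0, 3.0], num_gpus=0))  # 6.0 (batch processed)
```

If that reading is right, a CPU fallback along the lines of `train_step_cpu_safe` (processing the full batch as a single partition when no GPU is found) would avoid the constant-zero loss.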
Is this a bug, or did I miss something?