My batch size is 4, and I use a single GPU to train the model. Even on the CelebA dataset, where I use your original code to create the lmdb data and then train the...
I increased --batch from 4 to 8, decreased --lr from 0.002 to 0.0002, and set --g_reg_every to be larger than the total number of training iterations. The loss still becomes NaN...
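Not from the thread itself, but as a minimal sketch of how one might guard a generic PyTorch training loop against exploding losses (the model, optimizer, and loss names here are placeholders, not the repository's code):

```python
import torch

def train_step(model, optimizer, batch, loss_fn, max_grad_norm=1.0):
    optimizer.zero_grad()
    output = model(batch["input"])
    loss = loss_fn(output, batch["target"])

    # Skip the update instead of corrupting the weights when the loss blows up.
    if not torch.isfinite(loss):
        print(f"Non-finite loss ({loss.item()}), skipping this step")
        return None

    loss.backward()
    # Clipping gradients is a common mitigation when lowering --lr alone does not help.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    return loss.item()
```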
Please check [this line in the code](https://github.com/SenHe/Flow-Style-VTON/blob/cbc0bdf84d9b6877d916f625052cb108ebd881b5/train/train_PBAFN_stage1_fs.py#L184).
Please make sure you have read and understood all the code before asking this question.
Please refer to [this issue](https://github.com/SenHe/Flow-Style-VTON/issues/6).
Please check your input size, the PyTorch version you are using, etc.
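A quick, purely illustrative check of the environment and input shapes (the 256 × 192 resolution below is the size of the original VITON data and is an assumption about your setup):

```python
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

# Replace these with the tensors you actually feed to the network;
# 256 x 192 (H x W) is the resolution of the original VITON data.
person = torch.randn(1, 3, 256, 192)
cloth = torch.randn(1, 3, 256, 192)
print("person:", tuple(person.shape), "cloth:", tuple(cloth.shape))
```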
Please refer to a [previous issue](https://github.com/SenHe/Flow-Style-VTON/issues/12).
Please check [here](https://github.com/SenHe/Flow-Style-VTON/blob/1e5df26abab4ef1fce9ec75cc27b250c011fe639/test/test.py#L111). You can amend it to visualize the sampling grid or the offset.
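As a hedged illustration only (the tensor layout and variable names below are assumptions, not what test.py actually uses), a dense offset field could be visualized with a matplotlib quiver plot:

```python
import matplotlib.pyplot as plt
import numpy as np

def visualize_offset(flow, step=8):
    """flow: numpy array of shape (H, W, 2) holding per-pixel (dx, dy) offsets."""
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h:step, 0:w:step]
    dx = flow[::step, ::step, 0]
    dy = flow[::step, ::step, 1]

    plt.figure(figsize=(6, 8))
    plt.quiver(xs, ys, dx, dy, angles="xy", scale_units="xy", scale=1, color="r")
    plt.gca().invert_yaxis()  # image coordinates: origin at the top-left
    plt.title("Sampling offset field")
    plt.show()
```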
Please refer to previous issues [#11](https://github.com/SenHe/Flow-Style-VTON/issues/11) and [#3](https://github.com/SenHe/Flow-Style-VTON/issues/3).
> Hello, thanks a lot for open-sourcing your code. I am reimplementing your work, but the metrics from my trained model do not match those reported in the paper...