LlamaGen

About training losses and evaluation parameter settings

Open MrCrims opened this issue 6 months ago • 0 comments

Hello, I'm confused about my training of GPT-XL at image size 384×384. After 300 epochs, the training loss is around 6.9, but the FID is only 6.8. Could you share your detailed training settings? Also, when I try to reproduce your results with c2i_XL_384.pt, my best FID is 2.73, which differs from the 2.62 reported in the paper. Could you share your detailed evaluation settings, including cfg-scale, etc.? I would be grateful for your help.
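For context, the cfg-scale mentioned above controls classifier-free guidance at sampling time, which strongly affects FID. A generic sketch of the standard combination (not LlamaGen's exact code; the function name and arrays here are illustrative) is:

```python
import numpy as np

def apply_cfg(cond_logits, uncond_logits, cfg_scale):
    """Classifier-free guidance: uncond + s * (cond - uncond).
    s = 1.0 recovers the conditional logits; larger s pushes
    samples toward the class condition (often trading diversity
    for fidelity, which shifts FID)."""
    return uncond_logits + cfg_scale * (cond_logits - uncond_logits)

# Illustrative logits for a 3-token vocabulary
cond = np.array([2.0, 0.5, -1.0])
uncond = np.array([1.0, 0.0, -0.5])

print(apply_cfg(cond, uncond, 1.0))  # equals cond
print(apply_cfg(cond, uncond, 0.0))  # equals uncond
```

Because small changes in this scale can move FID by a few tenths, reproductions usually need the exact cfg-scale (and sampling temperature/top-k, if any) used for the reported numbers.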

MrCrims · Aug 15 '24 00:08