
Loss is larger when using a 2B-parameter model

Open YUHANG-Ma opened this issue 2 years ago • 0 comments

Hi everyone, I scaled the model up to 2B parameters with the following config. [config screenshot attached in the original issue] However, at the same epoch during training, the loss only converges to about 0.06, whereas the smaller 900M-parameter model converges to about 0.04. I am wondering whether this is a normal phenomenon caused by the larger number of UNet parameters.
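The config screenshot from the issue is not recoverable here, so the values below are purely illustrative and not the poster's actual settings. As a minimal sketch, this is one way to widen the `Unet` in DALLE2-pytorch (assuming the constructor kwargs shown in the repo's README, e.g. `dim`, `dim_mults`, `image_embed_dim`) and compare parameter counts between a smaller and a larger variant:

```python
from dalle2_pytorch import Unet

# Hypothetical configs -- the issue's actual config is not recoverable,
# so these widths are illustrative only.
small_unet = Unet(
    dim = 128,                  # base channel width (smaller model)
    image_embed_dim = 512,
    cond_dim = 128,
    channels = 3,
    dim_mults = (1, 2, 4, 8),
)

large_unet = Unet(
    dim = 320,                  # wider base channels -> many more parameters
    image_embed_dim = 512,
    cond_dim = 128,
    channels = 3,
    dim_mults = (1, 2, 4, 8),
)

def count_params(model):
    # Total number of trainable and non-trainable parameters
    return sum(p.numel() for p in model.parameters())

print(f"small: {count_params(small_unet) / 1e6:.0f}M parameters")
print(f"large: {count_params(large_unet) / 1e6:.0f}M parameters")
```

Printing the counts makes it easy to confirm the two configs actually differ by the intended factor before comparing their training losses at the same epoch.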

YUHANG-Ma · Jul 22 '22 03:07