trufty
It's a custom dataset, somewhat similar to the Oxford Flowers dataset. I was successfully training on v0.26.4 with `amp = True`. I haven't tried the latest version yet. I'm using the...
I just tested 0.27.4 and I also get NaN loss again with `amp = True` within the first 2k steps. So I rolled back to 0.26.4 and I'm already at...
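For anyone else hitting this, a minimal sketch of one way to catch the NaN early under AMP (plain PyTorch with placeholder model/data, not the imagen-pytorch trainer API):

```
import math
import torch

# Placeholder model/optimizer just so the loop runs standalone;
# in the real run these come from the imagen-pytorch trainer.
model = torch.nn.Linear(16, 1).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

for step in range(2000):
    x = torch.randn(8, 16, device='cuda')
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(x).pow(2).mean()
    # bail out as soon as the loss goes non-finite instead of training on NaNs
    if not math.isfinite(loss.item()):
        print(f'non-finite loss at step {step}')
        break
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```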
I assumed I was missing a Python package named ldm, which was incorrect. I just needed to append a system path with `sys.path.append('.')`, placed below `import sys` in the pipeline, so the ./ldm folder scripts are importable.
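Concretely, the top of the pipeline script just needs this (the `import ldm` line stands in for whatever the script actually pulls from the ./ldm folder):

```
import sys

sys.path.append('.')  # add the repo root so the local ./ldm folder resolves

import ldm  # the checked-out folder, not a pip package
```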
Yea, `create_model_from_pretrained()` looks good! Should make transitions a little smoother.

> The leading dimension is the batch dimension

Ah, thanks for the explanation. Batching isn't a huge deal to me...
I'm testing with the latest version, 1.9.6, since I saw your change, but I noticed the same behavior on 1.7.x as well.
Just to confirm I'm not going crazy, I interrogated the `fp16 = True` model from the script above, and the dtypes of all layers are float32 😢 ``` unets.0.final_conv.weight |...
Using `imagen-pytorch==1.10.0` I'm still getting all-float32 model layers with the above script with `fp16 = True`. I verified the installed version with `pip list` and deleted the existing checkpoint...
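For anyone who wants to repeat the check, the interrogation is roughly this (plain PyTorch; the `Linear` is a placeholder so it runs standalone, swap in the loaded unet/trainer):

```
from collections import Counter

import torch

model = torch.nn.Linear(4, 4)  # placeholder; use the loaded model here

# print every parameter with its dtype, e.g. "weight | torch.float32"
for name, param in model.named_parameters():
    print(f'{name} | {param.dtype}')

# quick summary of how many tensors sit in each dtype
print(Counter(p.dtype for p in model.parameters()))
```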
Yea, if it's working for you, it has to be a local env issue... uggh. Thanks for helping so far. (And yea, I had the fp16 flag set correctly.)
4590 MB vs 4388 MB is the difference I see, which still matches the ~4% I mentioned earlier ((4590 - 4388) / 4590 ≈ 4.4%). I just expected a much larger difference than that.