Jinkin
Hi, @windj007. When training on Places, why doesn't LaMa rescale the image to 256 before cropping? Is that more meaningful than taking 256x256 crops directly?
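For clarity, here is a minimal sketch of the two alternatives being asked about, written with albumentations (this is not LaMa's actual config, just an illustration of the two pipelines):

```python
# Sketch only: contrasts "rescale to 256 then crop" with "crop 256x256 directly".
# Assumes albumentations is installed; transform names are standard albumentations ops.
import albumentations as A

# Option 1: rescale the shorter side to 256 first, then take a random 256x256 crop.
resize_then_crop = A.Compose([
    A.SmallestMaxSize(max_size=256),
    A.RandomCrop(height=256, width=256),
])

# Option 2: take a random 256x256 crop directly from the full-resolution image.
direct_crop = A.Compose([
    A.RandomCrop(height=256, width=256),
])
```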
I see that during training they use the function `sample`, but in sample_fast.py they use `sample_with_past`; could this cause the inconsistency? I also don't understand what `sample_with_past` is doing,...
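My understanding (an assumption based on how "past" key/value caching usually works in GPT-style samplers, not taken from the taming-transformers code itself) is that `sample_with_past` reuses cached keys/values so each step only feeds the newest token, which is faster but should produce the same distribution as plain `sample` given the same sampling settings. A generic sketch of the idea with HuggingFace GPT-2:

```python
# Illustration of sampling with a key/value cache ("past"): instead of re-running the whole
# prefix at every step, cached keys/values are reused and only the newest token is fed in.
# This is NOT the taming-transformers implementation, just the same mechanism.
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokens = torch.tensor([[50256]])   # start token
generated = tokens
past = None

with torch.no_grad():
    for _ in range(16):
        out = model(tokens, past_key_values=past, use_cache=True)
        past = out.past_key_values                                # cache grows each step
        next_token = out.logits[:, -1].argmax(-1, keepdim=True)   # greedy for simplicity
        generated = torch.cat([generated, next_token], dim=1)
        tokens = next_token                                        # feed only the new token
```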
By the way, I still struggle with training the Net2Net transformer model; I cannot get good sample images in my log. Can you share your config files? And how...
Just check that your library versions are right:
- pytorch-lightning==1.0.8
- omegaconf==2.0.0
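A quick way to confirm the installed versions from Python (assuming both packages are importable in your environment):

```python
import pytorch_lightning
import omegaconf

# Should print 1.0.8 and 2.0.0 respectively if the environment matches the versions above.
print(pytorch_lightning.__version__)
print(omegaconf.__version__)
```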
This issue mentions the configs: https://github.com/CompVis/taming-transformers/issues/174
@lhoestq hello, I still have a problem loading JSON from S3:

```python
storage_options = {
    "key": xxxx,
    "secret": xxx,
    "endpoint_url": xxxx,
}
path = 's3://xxx/xxxxxxx.json'
dataset = load_dataset("json", data_files=path, storage_options=storage_options)
```

and...
Thanks for your suggestion, it works now!
+1. Looking forward to the code. Interesting project.