Marcelo5444
Hi @BrianPugh, have you done any further research on that?
Has anyone solved this issue? I am also facing it.
Hi! I am facing some issues related to this. I am just fine-tuning with LAMA. At first, as a sanity check, I tried to just overfit to...
Hi, has the model been trained on images in [0,1] or [0,255]? Looking at the transformations, it looks like the latter.
```python
config = OmegaConf.load('configs/imagenet_vitvq_small.yaml')
model = initialize_from_config(config.model)
model.init_from_ckpt('/home/marcelo/Downloads/imagenet_vitvq_small.ckpt')

def preprocess(img):
    s = min(img.size)
    if s < 256:
        raise ValueError(f'min dim for image {s} < 256')
    r = 1024 / s
    s...
```
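The snippet above is cut off mid-function, so here is a hypothetical sketch of the size computation it appears to be doing: scale the image so its shortest side becomes 1024, rejecting images whose shortest side is under 256. The function name, defaults, and rounding choice are assumptions, not code from this repo.

```python
def target_size(size, min_dim=256, resize_to=1024):
    """Sketch: given a (width, height) tuple, compute the dimensions
    after scaling so the shortest side equals `resize_to`, mirroring
    the truncated preprocess() above. All names/values are assumptions."""
    s = min(size)
    if s < min_dim:
        raise ValueError(f'min dim for image {s} < {min_dim}')
    r = resize_to / s
    # scale both sides by the same ratio and round to whole pixels
    return tuple(round(d * r) for d in size)

print(target_size((512, 768)))  # shortest side 512 scaled by 2 -> (1024, 1536)
```

The resulting tuple could then be passed to something like `img.resize(...)` before normalizing pixel values to whichever range the checkpoint expects.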
So, after your training, do you obtain better model weights that improve the reconstruction?
I started the same project today, but you are ahead of me. Maybe you need to drop the last fc layer of the backbone, right?
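"Dropping the last fc layer" usually means keeping the backbone's feature stages and discarding its classifier head (in PyTorch one would typically replace `model.fc` with `nn.Identity()`). A toy, dependency-free sketch of the idea, treating the backbone as a chain of stages (the class and stage functions are illustrative stand-ins, not the model discussed above):

```python
class Pipeline:
    """Toy stand-in for a backbone: a chain of stages applied in order."""
    def __init__(self, stages):
        self.stages = list(stages)

    def __call__(self, x):
        for stage in self.stages:
            x = stage(x)
        return x

    def drop_last(self):
        # "drop the last fc layer": a copy of the backbone without
        # its final stage, exposing the features that fed it
        return Pipeline(self.stages[:-1])

backbone = Pipeline([
    lambda x: x * 2,   # stand-in for conv stages
    lambda x: x + 1,   # stand-in for pooling
    lambda x: [x],     # stand-in for the final fc/classifier head
])
features = backbone.drop_last()
print(backbone(3))   # full model output: [7]
print(features(3))   # features before the head: 7
```

With a real torchvision model the same effect is commonly achieved via `model.fc = torch.nn.Identity()` or by slicing `model.children()`.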
Thank you so much for the help! Also, looking at the DWT-IDWT folder, I have a question: what is the difference between DWT_IDWT_layer and DWT_IDWT_CS? I have found...