DREAM
[WACV 2024] DREAM: Visual Decoding from Reversing Human Visual System
In this table, it seems that DREAM reports the evaluation performance of Subject-1 only, while other papers (like MindEye) report the average performance across 4 subjects. Is that a fair comparison?
Thanks for your great paper. I am confused about where “train_enc_rgbd.sh” is in your code: > Train RGB-only Encoder (supervised-only): > python $(scripts/train_enc_rgbd.sh) I would appreciate it if...
clip_caps[i] = clip_text.mean(0) ~~~~~~~~~^^^ RuntimeError: The expanded size of the tensor (59136) must match the existing size (768) at non-singleton dimension 0. Target sizes: [59136]. Tensor sizes: [768] 
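The sizes in the error are suggestive: 59136 = 77 × 768, the flattened size of a full 77-token CLIP text embedding, while `.mean(0)` collapses the token dimension to a single 768-d vector. Below is a minimal sketch that reproduces the mismatch and one possible resolution; the buffer shapes and the flattening fix are assumptions for illustration, not the authors' actual code.

```python
# Assumed shapes, for illustration only: CLIP text encodings are typically
# [77 tokens, 768 dims]; 77 * 768 = 59136 matches the "target size" in the error,
# so clip_caps looks preallocated for the flattened per-token embedding,
# not for a pooled 768-d vector.
import torch

n_samples, n_tokens, dim = 10, 77, 768
clip_caps = torch.zeros(n_samples, n_tokens * dim)   # each slot has size 59136
clip_text = torch.randn(n_tokens, dim)               # one caption's token embeddings

# This line reproduces the RuntimeError: mean(0) yields [768], the slot expects [59136].
# clip_caps[0] = clip_text.mean(0)

# One possible resolution under the assumption above: keep all tokens and flatten.
clip_caps[0] = clip_text.reshape(-1)                 # shape [59136]
```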
Hi! Thanks for the excellent work. I noticed that the MindEye paper reports retrieval accuracy on the test dataset (982 images). Have you run the related experiments? What about...
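For context, a common way to compute this metric is top-1 retrieval over the full candidate pool using cosine similarity between brain-decoded embeddings and the ground-truth image embeddings. The sketch below is a generic illustration of that protocol, not necessarily MindEye's exact evaluation code; the function name and embedding dimension are placeholders.

```python
# Generic top-1 retrieval accuracy over a candidate pool (illustrative only).
import torch
import torch.nn.functional as F

def retrieval_accuracy(pred_emb: torch.Tensor, image_emb: torch.Tensor) -> float:
    """pred_emb:  [N, D] embeddings decoded from brain data.
       image_emb: [N, D] embeddings of the N ground-truth test images (e.g. N = 982)."""
    pred = F.normalize(pred_emb, dim=-1)
    imgs = F.normalize(image_emb, dim=-1)
    sims = pred @ imgs.T                       # [N, N] cosine similarities
    top1 = sims.argmax(dim=-1)                 # best-matching candidate per sample
    return (top1 == torch.arange(len(pred))).float().mean().item()

# With random embeddings, accuracy sits near chance level (1/982 ≈ 0.1%).
print(retrieval_accuracy(torch.randn(982, 768), torch.randn(982, 768)))
```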
Hello author, I would like to ask about the image generation part. When using the CoAdapter, did you train it yourselves, or did you use it directly off the shelf? Also, is the generated...
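If the adapters are used off the shelf, one way to compose pretrained conditioning branches is diffusers' MultiAdapter, sketched below. This is my own assumption for illustration: DREAM's released code may wire up the CoAdapter differently, and the checkpoint names, prompt, and input files here are placeholders.

```python
# Minimal sketch: composing pretrained depth + color T2I-Adapters with diffusers.
# Checkpoints, prompt, and conditioning images are assumptions, not the authors' setup.
import torch
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter, MultiAdapter
from diffusers.utils import load_image

adapters = MultiAdapter([
    T2IAdapter.from_pretrained("TencentARC/t2iadapter_depth_sd15v2", torch_dtype=torch.float16),
    T2IAdapter.from_pretrained("TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16),
])

pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", adapter=adapters, torch_dtype=torch.float16
).to("cuda")

# Hypothetical conditioning inputs: a predicted depth map and a coarse color map.
depth_map = load_image("predicted_depth.png")
color_map = load_image("predicted_color.png")

image = pipe(
    prompt="a photo decoded from fMRI",        # placeholder prompt
    image=[depth_map, color_map],
    adapter_conditioning_scale=[0.8, 0.8],     # per-adapter guidance strength
).images[0]
image.save("generated.png")
```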