
How can I get the "train" and "val" datasets?

PatrickWilliams44 opened this issue · 4 comments

It was a pleasure to read your paper "Deep Multimodal Fusion by Channel Exchanging", and I have downloaded the corresponding code from GitHub. However, how can I get the "train" and "val" datasets? Looking forward to your reply! Thank you!

PatrickWilliams44 · May 03 '22 02:05

Hi, thanks for your interest.

Segmentation dataset: https://drive.google.com/drive/folders/1mXmOXVsd5l9-gYHk92Wpn6AcKAbE0m3X
Image translation dataset: https://github.com/alexsax/taskonomy-sample-model-1

Both the segmentation and image translation code provide train and val splits: https://github.com/yikaiw/CEN/tree/master/semantic_segmentation/data/nyudv2 and https://github.com/yikaiw/CEN/tree/master/image2image_translation/data.
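For reference, a minimal sketch of how such train/val split files could be read and paired with the "rgb", "depth" and "masks" folders. The paths, the one-sample-per-line format, and the load_split helper are assumptions for illustration; the actual dataloader and split-file format in the repo may differ.

```python
import os

# Hypothetical paths -- adjust to wherever the dataset and split files live.
DATA_ROOT = "/path/to/nyudv2"         # folder containing "rgb", "depth", "masks"
SPLIT_FILE = "data/nyudv2/train.txt"  # or "val.txt"

def load_split(split_file, data_root):
    """Build (rgb, depth, mask) path triples from a split file.

    Assumes each non-empty line names one sample file; the exact line
    format used by the CEN repo may differ.
    """
    samples = []
    with open(split_file) as f:
        for line in f:
            name = line.strip()
            if not name:
                continue
            samples.append((
                os.path.join(data_root, "rgb", name),
                os.path.join(data_root, "depth", name),
                os.path.join(data_root, "masks", name),
            ))
    return samples

train_samples = load_split(SPLIT_FILE, DATA_ROOT)
print(f"{len(train_samples)} training samples")
```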

yikaiw · May 03 '22 05:05

Dear yikaiw: Thanks for your reply! Do I need to put the dataset folder containing "depth", "masks" and "rgb" into the same directory as the nyudv2 folder containing "train.txt" and "val.txt"?

PatrickWilliams44 · May 03 '22 06:05

It is not necessary. You can place the dataset folder (the one that contains "depth", "masks" and "rgb") at any path, as long as you update the data path in https://github.com/yikaiw/CEN/blob/40f277ed1a377a3c81f979a6c534ae268773aa9d/semantic_segmentation/config.py#L5
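For illustration, the edit could look roughly like the sketch below. The variable name ROOT_DIR and the example path are assumptions; check the actual name used on that line of semantic_segmentation/config.py in the repo.

```python
# semantic_segmentation/config.py (sketch -- the real variable name may differ)
# Point the data root at wherever you extracted the folder that holds
# "depth", "masks" and "rgb":
ROOT_DIR = "/home/username/datasets/nyudv2"  # hypothetical path
```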

yikaiw · May 03 '22 07:05

Thanks for your patient answer. Due to the limitations of my hardware, I can only try to run it on the CPU, so the training process takes a long time to load. I will continue to study this network in depth. Thank you again for your guidance!
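For anyone else trying a CPU-only run, a minimal sketch of keeping PyTorch on the CPU is shown below. The Linear layer is only a stand-in for the real network, and the CEN training scripts may expose their own GPU/device flags, so treat this as an illustrative pattern rather than the repo's actual usage.

```python
import torch

# Force CPU execution explicitly.
device = torch.device("cpu")

model = torch.nn.Linear(8, 2).to(device)   # stand-in for the real network
batch = torch.randn(4, 8, device=device)   # stand-in for a data batch
output = model(batch)
print(output.shape)  # torch.Size([4, 2])
```

Alternatively, setting the environment variable CUDA_VISIBLE_DEVICES to an empty string hides all GPUs from PyTorch, which forces CPU execution without touching the code.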

PatrickWilliams44 · May 04 '22 05:05