GDSR-DCTNet
Question about data format
Thanks for your impressive work. However, I have a question about the data format:
You mentioned that
" If you want to re-train this net, ... , get a training set like ./data/NYU_Train_imgsize_256_scale_4.h5".
However, according to your uploaded code in dir GDSR-DCTNet/dataset_processing/
, it seems that the data format is .npy
.
Besides, you mentioned that
"use the same preprocessing as DKN and FDSR".
However, there seems to be no preprocessing code that saves the data in H5 format.
If there is any misunderstanding on my side, feel free to point it out. Looking forward to your reply!
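For reference, the kind of .npy-to-H5 conversion I have in mind is roughly the following (my own sketch with h5py; the .npy directory layout and group names are assumptions on my part, not from the repo):

```python
import glob

import h5py
import numpy as np

# My own sketch, not code from the repo. Assumed layout: one .npy
# file per sample under ./data/npy/{hr_depth,lr_depth,rgb}/.
hr_files = sorted(glob.glob("./data/npy/hr_depth/*.npy"))
lr_files = sorted(glob.glob("./data/npy/lr_depth/*.npy"))
rgb_files = sorted(glob.glob("./data/npy/rgb/*.npy"))

with h5py.File("./data/NYU_Train_imgsize_256_scale_4.h5", "w") as f:
    hr_grp = f.create_group("HRDepth")
    lr_grp = f.create_group("LRDepth")
    rgb_grp = f.create_group("RGB")
    # One dataset per sample, keyed by a running index.
    for i, (hr, lr, rgb) in enumerate(zip(hr_files, lr_files, rgb_files), 1):
        hr_grp.create_dataset(str(i), data=np.load(hr).astype(np.float32))
        lr_grp.create_dataset(str(i), data=np.load(lr).astype(np.float32))
        rgb_grp.create_dataset(str(i), data=np.load(rgb).astype(np.uint8))
```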
I missed this function; it seems to answer my question.
Do you have a good dataset?
Thanks for your reply! Honestly, I have been longing for it :( I am now processing the raw NYU V2 dataset to fit your code, and it is not easy work :( It would be appreciated if you could share the processed dataset with me, to avoid any misunderstanding and the resulting drop in performance. Thanks a lot!
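In case it helps anyone doing the same: the raw labeled file nyu_depth_v2_labeled.mat is a MATLAB v7.3 file, i.e. HDF5 under the hood, so h5py can read it directly. A minimal sketch of the loading step (my own code; the axis handling should be double-checked against your data):

```python
import h5py
import numpy as np

# nyu_depth_v2_labeled.mat is MATLAB v7.3 (HDF5-based), so h5py
# can open it directly; the file path here is an assumption.
with h5py.File("nyu_depth_v2_labeled.mat", "r") as mat:
    depths = np.array(mat["depths"])  # MATLAB axes come out reversed: (1449, 640, 480)
    images = np.array(mat["images"])  # (1449, 3, 640, 480)

# Transpose back to (H, W) / (C, H, W) before cropping and resizing.
depth0 = depths[0].T                 # (480, 640)
rgb0 = images[0].transpose(0, 2, 1)  # (3, 480, 640)
print(depth0.shape, rgb0.shape)
```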
Could anyone kindly help? :)
Hi, I am currently preparing a conference submission, and I will contact you as soon as the submission is over. Sorry for the inconvenience.
Best wishes!
Sorry to bother you again. I have tried to run the data processing (from the NYU V2 .mat to H5 format) following your code and description.
However, the generated H5 file gdsr_dataset_train_imgsize_256_scale_4_aug_True.h5 is only around 1 GB, so I am afraid there may be some mistakes. The information about the H5 file is shown below.
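(It was printed with a small inspection script of my own, roughly like this:)

```python
import h5py

# Quick structure dump of the generated file; the printed output
# is pasted below.
with h5py.File("gdsr_dataset_train_imgsize_256_scale_4_aug_True.h5", "r") as f:
    print("Keys", f.keys())
    for grp in f.values():
        print(grp, grp["1"])
```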
Keys <KeysViewHDF5 ['HRDepth', 'LRDepth', 'RGB']>
Group "HRDepth": 1449 members, each of shape [1, 256, 256] (float32).
<HDF5 group "/HRDepth" (1449 members)> <HDF5 dataset "1": shape (1, 256, 256), type "<f4">
Group "LRDepth": 1449 members, each of shape [1, 256, 256] (float32).
<HDF5 group "/LRDepth" (1449 members)> <HDF5 dataset "1": shape (1, 256, 256), type "<f4">
Group "RGB": 1449 members, each of shape [3, 256, 256] (uint8).
<HDF5 group "/RGB" (1449 members)> <HDF5 dataset "1": shape (3, 256, 256), type "|u1">
If there is any mistake, please feel free to point it out! Thanks!
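For reference, my back-of-the-envelope size check from the shapes above (my own arithmetic, not from the repo):

```python
# Uncompressed size implied by the reported shapes:
n = 1449
hr = 1 * 256 * 256 * 4    # HRDepth, float32
lr = 1 * 256 * 256 * 4    # LRDepth, float32 (stored at 256x256 here)
rgb = 3 * 256 * 256 * 1   # RGB, uint8
print(n * (hr + lr + rgb) / 1e9)  # ~1.04 GB uncompressed for 1449 samples
```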
Sorry to bother you. Have you finished processing the dataset? It would be greatly appreciated if you could share the processed dataset with me, to avoid any misunderstanding and the resulting drop in performance. Thanks a lot!