
requirements to run the code

Open weberhen opened this issue 3 years ago • 19 comments

Hi again!

Could you please share your running environment so I can try to run the code with little to no changes? I'm facing some silly problems, like conversions from numpy to torch, which I suspect happen because you use a different PyTorch version than mine; otherwise you would hit the same errors. One example of an error I'm getting:

$ python Illumination-Estimation/RegressionNetwork/train.py

Number of params: 9.50M
0 optim: 0.001
Traceback (most recent call last):
  File "Illumination-Estimation/RegressionNetwork/train.py", line 82, in <module>
    dist_emloss = GMLoss(dist_pred, dist_gt, depth_gt).sum() * 1000.0
  File "/miniconda3/envs/pt110/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/Illumination-Estimation/RegressionNetwork/gmloss/samples_loss.py", line 43, in forward
    scaling=self.scaling, geometry=geometry)
  File "/Illumination-Estimation/RegressionNetwork/gmloss/samples_loss.py", line 72, in sinkhorn_tensorized
    self.distance = distance(batchsize=B, geometry=geometry)
  File "/Illumination-Estimation/RegressionNetwork/gmloss/utils.py", line 79, in __init__
    anchors = geometric_points(self.N, geometry)
  File "/Illumination-Estimation/RegressionNetwork/gmloss/utils.py", line 70, in geometric_points
    points[:, 0] = radius * np.cos(theta)
TypeError: mul(): argument 'other' (position 1) must be Tensor, not numpy.ndarray
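For what it's worth, the failing line mixes a torch.Tensor with a numpy.ndarray, which newer PyTorch versions reject. A minimal sketch of the kind of fix that usually works (assuming radius is a torch.Tensor and theta a numpy array, as the error message suggests; the names below are stand-ins, not the repo's actual values):

```python
import numpy as np
import torch

radius = torch.ones(128)                    # hypothetical stand-in for the real radius
theta = np.linspace(0.0, 2.0 * np.pi, 128)  # hypothetical stand-in for the real theta

# Fails on recent PyTorch:
#   points[:, 0] = radius * np.cos(theta)
#   TypeError: mul(): argument 'other' (position 1) must be Tensor, not numpy.ndarray

# Works: bring the numpy operand into torch first (or use torch.cos throughout)
points_x = radius * torch.from_numpy(np.cos(theta)).to(radius.dtype)
```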

Thanks!

weberhen avatar Apr 01 '21 17:04 weberhen

How did you preprocess the raw Laval HDR dataset to make the training runnable? Thanks

goddice avatar Apr 05 '21 23:04 goddice

Once you have access to the depth and HDR data from the Laval HDR dataset, you can use the function depth_for_anchors_calc in my forked code: https://github.com/weberhen/Illumination-Estimation/blob/main/RegressionNetwork/data.py
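The real signature is whatever that data.py defines, so treat the following as a purely hypothetical sketch of the call pattern:

```python
import numpy as np
from data import depth_for_anchors_calc  # RegressionNetwork/data.py in the fork

# Hypothetical usage only: the argument names, input file, and return shape
# here are illustrative and may differ from the actual function in data.py.
depth = np.load('pano_depth.npy')              # depth map aligned with the panorama
anchor_depths = depth_for_anchors_calc(depth)  # expected: one depth value per anchor
```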

Please let me know if you find any bugs :)

weberhen avatar Apr 06 '21 12:04 weberhen

> Once you have access to the depth and HDR data from the Laval HDR dataset, you can use the function depth_for_anchors_calc in my forked code: https://github.com/weberhen/Illumination-Estimation/blob/main/RegressionNetwork/data.py
>
> Please let me know if you find any bugs :)

Thank you for your reply. Yes, I have access to the depth and HDR data. So where is the script that calls the depth_for_anchors_calc function? Or could you kindly tell me which scripts need to be run in order to train the model from scratch, supposing I only have the raw depth and HDR data? Thanks!

goddice avatar Apr 06 '21 23:04 goddice

The code I used to preprocess the dataset is in this old commit. I ended up erasing it by mistake, but you can still access it here: https://github.com/weberhen/Illumination-Estimation/blob/5960738cdd7184c3cd897a47840db6b647d013ac/RegressionNetwork/representation/distribution_representation.py

This code creates the pkl files needed during training: it takes the GT panorama and the depth and creates a pickle file with that information.
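Conceptually, each training sample ends up as something like the sketch below (the field names and shapes here are illustrative; the real keys are set in distribution_representation.py):

```python
import pickle
import numpy as np

# Illustrative stand-ins for the GT panorama and its aligned depth map;
# the actual keys and contents are defined in distribution_representation.py.
pano_hdr = np.zeros((256, 512, 3), dtype=np.float32)
depth = np.zeros((256, 512), dtype=np.float32)

with open('sample.pkl', 'wb') as f:
    pickle.dump({'hdr': pano_hdr, 'depth': depth}, f)

with open('sample.pkl', 'rb') as f:   # train.py later reads these back
    sample = pickle.load(f)
```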

Once you have the pkl files, all you need to do is run train.py from the RegressionNetwork folder of my forked code: https://github.com/weberhen/Illumination-Estimation

But to be honest I'm not sure it's working: it's on epoch 38 and has been outputting the same prediction since epoch 3.

weberhen avatar Apr 07 '21 13:04 weberhen

> The code I used to preprocess the dataset is in this old commit. I ended up erasing it by mistake, but you can still access it here: https://github.com/weberhen/Illumination-Estimation/blob/5960738cdd7184c3cd897a47840db6b647d013ac/RegressionNetwork/representation/distribution_representation.py
>
> This code creates the pkl files needed during training: it takes the GT panorama and the depth and creates a pickle file with that information.
>
> Once you have the pkl files, all you need to do is run train.py from the RegressionNetwork folder of my forked code: https://github.com/weberhen/Illumination-Estimation
>
> But to be honest I'm not sure it's working: it's on epoch 38 and has been outputting the same prediction since epoch 3.

EMLight needs the image warping operation from Gardner's work to warp the raw HDR panoramas and obtain input images, but I am not sure whether the code for that warping operation is released here. How do you sample images and warp the panoramas? Thanks in advance.
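To be concrete, the operation I mean is the standard equirectangular-to-perspective projection, roughly like this sketch (simplified sign conventions and nearest-neighbour sampling, illustration only; skylibs' EnvironmentMap has a tested implementation):

```python
import numpy as np

def extract_crop(pano, azimuth=0.0, elevation=0.0, vfov=np.radians(60),
                 out_h=192, out_w=256):
    # Rough equirectangular-to-perspective warp; not the repo's actual code.
    H, W = pano.shape[:2]
    f = 0.5 * out_h / np.tan(0.5 * vfov)          # focal length in pixels
    x = np.arange(out_w) - 0.5 * (out_w - 1)      # image-plane grid
    y = np.arange(out_h) - 0.5 * (out_h - 1)
    xx, yy = np.meshgrid(x, y)
    rays = np.stack([xx, -yy, np.full_like(xx, f)], axis=-1)  # z forward, y up
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    # rotate rays: pitch (elevation) about x, then yaw (azimuth) about y
    ce, se = np.cos(elevation), np.sin(elevation)
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    Rx = np.array([[1, 0, 0], [0, ce, -se], [0, se, ce]])
    Ry = np.array([[ca, 0, sa], [0, 1, 0], [-sa, 0, ca]])
    d = rays @ (Ry @ Rx).T
    lon = np.arctan2(d[..., 0], d[..., 2])        # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = ((0.5 - lat / np.pi) * H).astype(int).clip(0, H - 1)
    return pano[v, u]                             # nearest-neighbour lookup
```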

LeoDarcy avatar Apr 22 '21 12:04 LeoDarcy

There is a trick to the training: I first overfit the model on a small dataset, then train it on the full dataset. Due to the intellectual property terms of the Laval dataset, I am not sure whether the trained model can be released.

fnzhan avatar May 20 '21 14:05 fnzhan

Hello @fnzhan ,

I work with Prof. Jean-François Lalonde, one of the creators of the dataset. You can share the trained model; the only condition is that it may be used for research purposes only :)

weberhen avatar May 25 '21 19:05 weberhen

> The code I used to preprocess the dataset is in this old commit. I ended up erasing it by mistake, but you can still access it here: https://github.com/weberhen/Illumination-Estimation/blob/5960738cdd7184c3cd897a47840db6b647d013ac/RegressionNetwork/representation/distribution_representation.py
>
> This code creates the pkl files needed during training: it takes the GT panorama and the depth and creates a pickle file with that information.
>
> Once you have the pkl files, all you need to do is run train.py from the RegressionNetwork folder of my forked code: https://github.com/weberhen/Illumination-Estimation
>
> But to be honest I'm not sure it's working: it's on epoch 38 and has been outputting the same prediction since epoch 3.

Hi, would you mind sharing the code that generates the pkl files, or the dataset preprocessing code? Looking forward to your reply!

cyjouc avatar Jun 09 '21 09:06 cyjouc

> Once you have access to the depth and HDR data from the Laval HDR dataset, you can use the function depth_for_anchors_calc in my forked code: https://github.com/weberhen/Illumination-Estimation/blob/main/RegressionNetwork/data.py Please let me know if you find any bugs :)
>
> Thank you for your reply. Yes, I have access to the depth and HDR data. So where is the script that calls the depth_for_anchors_calc function? Or could you kindly tell me which scripts need to be run in order to train the model from scratch, supposing I only have the raw depth and HDR data? Thanks!

Hi, would you mind sharing how you process the depth and HDR data from the Laval HDR dataset? Looking forward to your reply!

cyjouc avatar Jun 15 '21 01:06 cyjouc

> The code I used to preprocess the dataset is in this old commit. I ended up erasing it by mistake, but you can still access it here: https://github.com/weberhen/Illumination-Estimation/blob/5960738cdd7184c3cd897a47840db6b647d013ac/RegressionNetwork/representation/distribution_representation.py
>
> This code creates the pkl files needed during training: it takes the GT panorama and the depth and creates a pickle file with that information.
>
> Once you have the pkl files, all you need to do is run train.py from the RegressionNetwork folder of my forked code: https://github.com/weberhen/Illumination-Estimation
>
> But to be honest I'm not sure it's working: it's on epoch 38 and has been outputting the same prediction since epoch 3.

Hello sir, this link no longer seems to be available. How do you deal with the original Laval dataset? If you can tell me, I will be very grateful!

xjsxjs avatar May 12 '22 03:05 xjsxjs

Hi!

You can try my fork: https://github.com/weberhen/Illumination-Estimation-1

weberhen avatar May 28 '22 00:05 weberhen

> Hi!
>
> You can try my fork: https://github.com/weberhen/Illumination-Estimation-1

Hello!

I am trying to follow your code to crop FOV images from the HDR panoramas using 'gen_hdr_crops.py', but I found that your code uses some modules like 'envmap' and 'ezexr' which I am not familiar with, and I wonder how to pip install them. After googling, I guess these modules come from 'skylibs' (https://github.com/soravux/skylibs), but I still get some errors after 'pip install --upgrade skylibs'. So could you share your requirements.txt with me?

Thanks!

jxl0131 avatar Sep 27 '22 07:09 jxl0131

Hi @jxl0131 !

It's indeed skylibs. I just tested 'pip install --upgrade skylibs' and it worked. I suggest you ask the original developer if you cannot install it, since that will be the easiest way to get gen_hdr_crops.py to work.

Good luck!

weberhen avatar Oct 04 '22 17:10 weberhen

> Hi @jxl0131 !
>
> It's indeed skylibs. I just tested 'pip install --upgrade skylibs' and it worked. I suggest you ask the original developer if you cannot install it, since that will be the easiest way to get gen_hdr_crops.py to work.
>
> Good luck!

Thanks!

My environment is OK now. I am following your edited GenProjector code, and I found that in your 'Illumination-Estimation-1/GenProjector/data.py' file, in the function __getitem__, you try to get envmap_exr from the '/crop' directory. I can't understand it; I think it should be another directory containing the full panorama pictures instead of crops. Looking forward to your reply!

jxl0131 avatar Oct 09 '22 15:10 jxl0131

Hi!

I'm sorry about that: it's called crop, but inside were envmaps; I can't recall why I named it this way. But you can see that the code continues with the creation of the actual crop from that envmap, so just replace the folder 'crop' to match your dataset structure and it should work.

weberhen avatar Oct 11 '22 15:10 weberhen

> Hi!
>
> I'm sorry about that: it's called crop, but inside were envmaps; I can't recall why I named it this way. But you can see that the code continues with the creation of the actual crop from that envmap, so just replace the folder 'crop' to match your dataset structure and it should work.

Haha, I understand what you said! Thanks!

jxl0131 avatar Oct 11 '22 15:10 jxl0131

> Hi!
>
> I'm sorry about that: it's called crop, but inside were envmaps; I can't recall why I named it this way. But you can see that the code continues with the creation of the actual crop from that envmap, so just replace the folder 'crop' to match your dataset structure and it should work.

Hi! I am back again. I found that you multiply the crop by 'reexpose_scale_factor' and save the result as the crop in your 'gen_hdr_crops.py'. Is this a process to convert the crop from an HDR image to an LDR image? Why don't you directly save the '_' returned from 'genLDRimage(..)'?

EMLight's author tonemaps the crops and envmaps in all of his 'data.py' code, which I think is to convert them from HDR to LDR. So why do you use 'genLDRimage(..)' in your 'gen_hdr_crops.py' (a repeated operation)?

code:

# extract a perspective crop from the panorama at the given view direction
crop = extractImage(envmap_data.data, [elevation, azimuth], cropHeight, vfov=vfov, output_width=cropWidth)
# the LDR image itself is discarded; only the scale factor that puts the
# median intensity at 0.45 is kept
_, reexpose_scale_factor = genLDRimage(crop, putMedianIntensityAt=0.45, returnIntensityMultiplier=True, gamma=gamma)
# save the re-exposed (still HDR) crop
imwrite(os.path.join(output_folder, os.path.basename(input_file)), crop * reexpose_scale_factor)

Looking forward to your reply!

jxl0131 avatar Oct 17 '22 13:10 jxl0131

Hi!

Re-exposing is not the same as tonemapping: re-exposing simply maps the range of a given HDR image to another range, which makes the images more similar to one another. I do that because the dataset has some pretty dark HDR images and some super bright ones, so I apply this (invertible) operation to bring them into a similar range. That is what this script is for: generating (re-exposed) HDR crops :)
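Something like this sketch, which paraphrases the idea rather than the exact genLDRimage code:

```python
import numpy as np

def reexpose(hdr, put_median_at=0.45):
    # Scale an HDR image so the median of a crude per-pixel intensity
    # lands at put_median_at; invertible by dividing by the returned scale.
    intensity = hdr.mean(axis=-1)
    scale = put_median_at / np.median(intensity)
    return hdr * scale, scale
```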

weberhen avatar Oct 17 '22 19:10 weberhen

> Once you have access to the depth and HDR data from the Laval HDR dataset, you can use the function depth_for_anchors_calc in my forked code: https://github.com/weberhen/Illumination-Estimation/blob/main/RegressionNetwork/data.py
>
> Please let me know if you find any bugs :)

Hi! Could you share the depth maps from the Laval HDR dataset? I find the download link (http://indoor.hdrdb.com/UlavalHDR-depth.tar.gz) is broken. Thanks!

AplusX avatar Nov 02 '22 08:11 AplusX