
Output GIF is All NONE

SimonCK666 opened this issue 2 years ago · 6 comments

Dear authors, thanks for the amazing work.

I have a problem with the network's results. At the end of training, the generator and discriminator appear to have converged, but the resulting GIFs are all NONE (the RGB GIF is all white, the depth GIF is all black). I am quite confused about this.

The only thing I changed before training this network was the batch size, from 12 to 6. Could this change cause the error? :)

PS: I used the Blender drums dataset for training, on a single 2080Ti GPU, for almost 4 days.

Thanks for your help.

SimonCK666 · Mar 30 '22

Changing the batch size has a certain effect on the stability of GAN training.

If the result is empty, it usually means that GAN training has failed. Normally, the generator and discriminator losses both stay around 1. If training fails at the beginning, I rerun it.

Is the discriminator loss in your training close to 0?
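For reference, a minimal sketch of this kind of check (the threshold and the `gan_training_failed` helper are illustrative, not part of the gnerf code):

```python
# Heuristic based on the comment above: in a healthy run both losses
# hover around 1; a discriminator loss collapsing toward 0 usually means
# the discriminator has won and the run should be restarted.
def gan_training_failed(loss_g, loss_d, d_collapse_thresh=0.1):
    return loss_d < d_collapse_thresh

print(gan_training_failed(loss_g=1.0, loss_d=0.9))    # False: keep training
print(gan_training_failed(loss_g=4.2, loss_d=0.02))   # True: rerun from scratch
```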

quan-meng · Mar 30 '22

Nope

Actually, at the end of training the discriminator loss had almost converged to 0.5, which looked normal.

Everything suggests that the camera pose estimation and generator training are fine. It is as if, during volume rendering, the rays all missed the drums object.
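(For intuition: volume rendering only integrates samples between the near and far bounds along each ray, so if the estimated poses point the rays away from the object, or the object lies outside that interval, the output stays at the background color. Below is a generic NeRF-style ray sampler to illustrate this, not the gnerf code.)

```python
import torch

def sample_points_along_ray(origin, direction, near=2.0, far=6.0, n_samples=64):
    # Uniformly spaced depths between the near and far bounds.
    t = torch.linspace(near, far, n_samples)
    # World-space sample points; if none of them hit the object, the
    # accumulated color stays at the background and the depth stays empty.
    points = origin + t[:, None] * direction
    return points, t

origin = torch.zeros(3)                      # camera center (illustrative values)
direction = torch.tensor([0.0, 0.0, -1.0])   # a single viewing ray
points, t = sample_points_along_ray(origin, direction)
```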

This morning, I retried with the hotdog dataset, again keeping batch_size at 6. This time, the model seems to work fine.

So the problem only seems to happen with the drums dataset.

SimonCK666 · Mar 30 '22

Sorry to bother you again,

I found that I cannot train this model successfully unless I use the hotdog data.

Do I need to change the params below when I use other data to train, such as the drums or lego scenes? I found that when I change the dataset, the estimated camera poses always cluster together.

(The camera pose estimates from training on lego are shown below.)

azim_range: [ 0., 360. ]  # the range of azimuth
elev_range: [ 0., 90. ]   # the range of elevation
radius: [ 4.0, 4.0 ]  # the range of radius

near: 2.0  # near bound of ray sampling
far: 6.0   # far bound of ray sampling
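For intuition, ranges like these are typically used to draw a random camera position on (part of) a sphere around the scene; the snippet below is a generic spherical-to-Cartesian sketch of that idea, not the actual gnerf pose sampler:

```python
import numpy as np

def sample_camera_position(azim_range=(0.0, 360.0),
                           elev_range=(0.0, 90.0),
                           radius_range=(4.0, 4.0)):
    azim = np.deg2rad(np.random.uniform(*azim_range))   # azimuth angle
    elev = np.deg2rad(np.random.uniform(*elev_range))   # elevation above the xy-plane
    r = np.random.uniform(*radius_range)                 # distance from the origin
    return np.array([r * np.cos(elev) * np.cos(azim),
                     r * np.cos(elev) * np.sin(azim),
                     r * np.sin(elev)])
```

With an elevation range of [0, 90] this keeps cameras on the upper hemisphere, which matches the Blender scenes mentioned below.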

[screenshot: estimated camera poses from training on lego]

SimonCK666 · Apr 03 '22

You don't need to change the params within the same dataset, e.g., Blender (six scenes with an upper-hemisphere camera distribution) or DTU.

I will check it again on my computer with the default settings.

quan-meng · Apr 03 '22

You can attach a screenshot of your training curves from TensorBoard, so I can better analyze the problem based on my experience.

quan-meng · Apr 03 '22

I partially tested the code on the chair and drums scenes; the estimated camera parameters, RGB, and depth maps all look right.

A successful training curve should look like this: [screenshot of training curves from a successful run]

quan-meng · Apr 03 '22