
Hello, what can I do with this project?

Open fengziyue opened this issue 6 years ago • 18 comments

Is this project runnable? Can I train the representation network and the generation network with the dataset provided by DeepMind, or with my own dataset? Or can I only run inference with a pre-trained model? @musyoku @ktns

fengziyue avatar Jul 11 '18 07:07 fengziyue

Hi. The goal of this project is to reproduce the result shown in this video. https://youtu.be/G-kWNQJ4idw?t=32

musyoku avatar Jul 11 '18 07:07 musyoku

You can

  • create your own dataset
  • train a model
  • generate images

We don't support the dataset provided by DeepMind.

musyoku avatar Jul 11 '18 07:07 musyoku

This project is currently under development.

musyoku avatar Jul 11 '18 07:07 musyoku

Hello, thanks for your reply! This is a wonderful project! I am also very interested in GQN, but I have not been able to implement it myself. So what can we do with this project right now?

I saw that you ticked "implement GQN" but did not tick "implement training loop". So can this project only run inference but not train? But how can we run inference without training? Do you have a pre-trained model? @musyoku

fengziyue avatar Jul 11 '18 07:07 fengziyue

It is currently possible to train the model and run inference. I am doing a hyperparameter search. (It takes a week to train on a GTX 1080.)

musyoku avatar Jul 11 '18 08:07 musyoku

Hi @musyoku! How many iterations can you finish in two weeks with the 1080? I can only finish 1 iteration per day on a GTX TITAN X (Maxwell), and I saw in your code that the iteration range is 2*10**6, so maybe I will never finish training. My dataset was generated with your create_dataset.py; it contains 2 million samples, is around 500 GB, and is stored on a 7200 rpm HDD. Would it be much quicker with an SSD?

fengziyue avatar Jul 15 '18 02:07 fengziyue

I think an SSD would be faster than an HDD, because my training code reads files from disk at every iteration to build the minibatch. I finished 7 iterations (387,800 / 2,000,000 steps) in 5 days on a single 1080. Importantly, I have never finished training, so there may be a bug in the code and the results may not match those reported by DeepMind.
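Roughly, the per-iteration loading looks like this (a simplified sketch, not the exact code in this repo; the file layout and names are placeholders):

```python
import numpy as np

def load_minibatch(image_files, viewpoint_files, batch_indices):
    # Every iteration reads the required .npy files from disk and stacks them
    # into a minibatch, so storage throughput (HDD vs. SSD) directly affects
    # how fast training can run.
    images = np.stack([np.load(image_files[i]) for i in batch_indices])
    viewpoints = np.stack([np.load(viewpoint_files[i]) for i in batch_indices])
    return images, viewpoints
```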

musyoku avatar Jul 15 '18 18:07 musyoku

Hello @musyoku, will you support the maze scenario?

fengziyue avatar Jul 20 '18 07:07 fengziyue

Not at the moment, but I would like to support it if I have time.

musyoku avatar Jul 20 '18 07:07 musyoku

Hello @musyoku, can you tell me how to compute the PIG (predicted information gain) for the maze scenario? It's in the third chapter of the GQN supplementary materials. I can understand the IG, but I can't understand the PIG: how can we compute it without knowing the target observation x? Could you give me your email address? I think we could talk over email rather than in the GitHub issues. Thank you very much~

fengziyue avatar Jul 24 '18 13:07 fengziyue

The PIG is approximated at every point by averaging over 50 samples per heading direction.

PIG is computed by averaging IG over 50 samples of x drawn from the generator network.

x_n ~ g(x|z, y) π(z|y)
PIG = (1/50) * {IG(x_1, y) + IG(x_2, y) + ... + IG(x_50, y)}
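In code form, my understanding is roughly the following (a sketch; `generator.sample` and `information_gain` are placeholder names, not functions in this repo):

```python
import numpy as np

def predicted_information_gain(generator, information_gain, r, v_q, num_samples=50):
    # Draw candidate observations x_n ~ g(x|z, y)π(z|y) for the query viewpoint v_q,
    # then average the information gain IG(x_n, v_q) over the samples.
    gains = []
    for _ in range(num_samples):
        x_n = generator.sample(representation=r, query_viewpoint=v_q)
        gains.append(information_gain(x_n, v_q))
    return float(np.mean(gains))
```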

I'm very sorry, but I would like to keep the discussion in the issues. Also, my English is not good enough for a discussion :confused: (I'm using Google Translate)

musyoku avatar Jul 24 '18 19:07 musyoku

@musyoku Do you mean that we use the generator network to generate an image x_n, then input (x_n, v, r) to the inference network to get z_mean_q and z_var_q, and input (v, r) to the prior network to get z_mean_p and z_var_p?

And IG(x_n, y) is then KL_divergence(z_mean_p, z_var_p, z_mean_q, z_var_q)?

fengziyue avatar Jul 26 '18 02:07 fengziyue

My understanding is as follows (rough code sketch after the list):

  • Input (v_q, r) to the generator and output x_n. (v_q is a random query viewpoint covering the maze)
  • Input x_n to the inference network and output (z_mean_q, z_var_q, z_mean_p, z_var_p). (Eq. S18-S23)
  • Compute IG (KL divergence) between (z_mean_q, z_var_q) and (z_mean_p, z_var_p). (Eq. S27)
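In code, those three steps would look roughly like this (a sketch with hypothetical network interfaces, not this repo's API):

```python
import numpy as np

def information_gain(x_n, v_q, r, inference_network, prior_network):
    # Hypothetical interfaces: each network returns the mean and variance of a
    # diagonal Gaussian over the latent z.
    mean_q, var_q = inference_network(x_n, v_q, r)  # posterior q(z | x_n, v_q, r)
    mean_p, var_p = prior_network(v_q, r)           # prior pi(z | v_q, r)
    # KL divergence KL(q || p) between two diagonal Gaussians, summed over dimensions.
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mean_q - mean_p) ** 2) / var_p - 1.0
    )
```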

musyoku avatar Jul 26 '18 12:07 musyoku

@musyoku OK, thank you very much! I have another question: I saw that your create_dataset.py for the room scenario generates a Cornell box where every wall is a solid color (each wall has just one color). I want to add a texture to the floor; how could I implement that? (I have my own picture and want to apply it to the floor as a texture map.) Thank you again

fengziyue avatar Jul 26 '18 13:07 fengziyue

I am implementing a texture shader. I will add it to this repo.
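The basic idea is just a UV lookup into the texture image for each point on the floor plane; here is a minimal NumPy illustration of that lookup (independent of this repo's renderer):

```python
import numpy as np

def sample_floor_texture(texture, u, v):
    # texture: (H, W, 3) image array; u and v are floor-plane coordinates in [0, 1].
    # Nearest-neighbour lookup: map (u, v) to a pixel and return its colour.
    h, w = texture.shape[:2]
    x = np.clip(np.asarray(u * (w - 1), dtype=np.int64), 0, w - 1)
    y = np.clip(np.asarray(v * (h - 1), dtype=np.int64), 0, h - 1)
    return texture[y, x]
```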

musyoku avatar Jul 26 '18 13:07 musyoku

ok! thank you~

fengziyue avatar Jul 26 '18 13:07 fengziyue

Hello @musyoku, which license will you choose? Apache, GPL, or MIT?

fengziyue avatar Aug 05 '18 13:08 fengziyue

MIT

musyoku avatar Aug 05 '18 14:08 musyoku