generative_inpainting
Training on DEM SRTM dataset
Thank you for your great work. I have been applying your research to my own problem. Here it is:
A DEM (Digital Elevation Model) is a numerical matrix in which each pixel represents the elevation at a corresponding location.
SRTM is a global DEM dataset. However, over a reservoir or a lake, SRTM recorded the elevation of the water surface at the time it was collected (in 2000). Now I want to recover the DEM below the water surface of a lake or a reservoir.
Here is my step-by-step process:
- My training set is 15k DEM images.
- I generated masks corresponding to the input images; each input image has its own specific mask. A mask is a connected area whose elevation is below a random threshold, so I think it can describe a reservoir or a lake (see the sketch just below). Some of my masks with their inputs:
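In code, the idea is roughly the following (a simplified sketch rather than my exact implementation; it assumes numpy and scipy, and the helper name is just illustrative):

import numpy as np
from scipy import ndimage

def random_water_mask(dem):
    # Pick a random water level between the DEM's minimum and maximum elevation.
    level = np.random.uniform(dem.min(), dem.max())
    below = dem < level                      # all pixels under that water level
    labels, num = ndimage.label(below)       # split them into connected components
    if num == 0:
        return np.zeros_like(dem, dtype=np.uint8)
    pick = np.random.randint(1, num + 1)     # keep one component as the "lake"
    return (labels == pick).astype(np.uint8)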
So I customized your code to read my input:
- I also customized data_from_fnames in the neuralgym toolkit to read '.tif' files, and then min-max normalized each DEM image to 0-255 before passing it to your model (a rough sketch follows).
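The loading and normalization step looks roughly like this (again a simplified sketch, not my exact code; I illustrate reading the GeoTIFF with rasterio, but any reader works):

import numpy as np
import rasterio

def load_dem(path):
    with rasterio.open(path) as src:
        dem = src.read(1).astype(np.float32)     # band 1 holds the elevations
    lo, hi = dem.min(), dem.max()
    img = (dem - lo) / (hi - lo + 1e-8) * 255.0  # per-image min-max to 0-255
    return img[..., np.newaxis]                  # shape (height, width, 1)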
- Some first rows of my train.flist:
- Because of the limited memory of my GPU, I set the input image shape to (128, 128, 1) and the batch size to 16, following your advice in other issues. My training config:
#training
train_spe: 1000
max_iters: 1000000
viz_max_out: 10
val_psteps: 500
static_view_size: 30
img_shapes: [128, 128, 1]
height: 128
width: 128
max_delta_height: 32
max_delta_width: 32
batch_size: 16
vertical_margin: 0
horizontal_margin: 0
- I finally trained for more than 70 epochs and got bad results:
My losses are not converging.
Some of the generated validation images:
Questions
- Are all my customizations correct?
- I saw that your sample flist is shuffled and mine is not. Does that affect the training result?
- Could you give me some suggestions and your views on my problem?
Thanks a lot! <3
Hi @htn274, this seems like a nice idea. How did it work out for you? I am trying to implement a similar idea with non-random masks, and it would be very helpful to learn from your experience.
Thanks in advance
@htn274 Thanks for your detailed feedback; that's exactly what our community needs! I will keep this issue on the front page.
Regarding your questions:
- I think you are right, and the results look good already?
- Shuffling should never make results worse.
- A larger dataset could help.
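If anyone wants to shuffle their flist once up front, something like this is enough (plain Python; 'train_shuffled.flist' is just a name I picked):

import random

with open('train.flist') as f:
    fnames = f.read().splitlines()
random.shuffle(fnames)                     # in-place random permutation
with open('train_shuffled.flist', 'w') as f:
    f.write('\n'.join(fnames) + '\n')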
Hi there, I'm wondering exactly how you modified the inpaint network to work with this. As I currently see it, the network only works with a mask of shape (1, height, width, 1), but with your custom masks you're feeding it (batch_size, height, width, 1). I'm trying to do the same thing but getting a "Dimensions must be equal" ValueError. Any guidance?
Ah, never mind, I figured it out! For future reference, there are a lot more modifications than those shown to get custom masks working, but it's not too difficult. My code is a bit messy right now, but if anyone needs an explanation, feel free to ping me.
Hey! I will be happy to hear your explanation. How can I contact you? Efrat ([email protected])
How exciting! This is almost exactly what I am trying to do!
@htn274 @Nico-Adamo @Efrat-Taig I am eager to hear how you did and to exchange some results and other experiences! You can PM me via Twitter, for example.
As described in #444, you will have to change the batch size to 1, since one mask always serves the whole batch. You will also have to change the call in build_graph_with_losses to:
x1, x2, offset_flow = self.build_inpaint_net(
    xin, mask, reuse=tf.AUTO_REUSE, training=training,
    padding=FLAGS.padding)
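If you instead want a different mask for every image in the batch (as discussed above), the mask tensor itself has to carry the batch dimension. A rough TF1-style sketch; the placeholder name is mine, and FLAGS is the config loaded from the yml shown earlier:

import tensorflow as tf
import neuralgym as ng

FLAGS = ng.Config('inpaint.yml')  # the training config shown above
# one mask per image, instead of a single mask shared by the whole batch
mask = tf.placeholder(
    tf.float32, [FLAGS.batch_size, FLAGS.height, FLAGS.width, 1], name='mask')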