Deep-Image-Matting

How can we run demo without Adobe training set?

JerryKurata opened this issue 6 years ago • 13 comments

Hi,

Apparently the original training image set is not available to anyone not associated with a university. Is there at least a way to run the demo without it? You have graciously provided the trained model, so why is the training data needed for processing new images?

JerryKurata avatar Jan 29 '19 21:01 JerryKurata

The training data is not needed; you only have to provide an input RGB image, a trimap (320x320), and a trained model. You can modify the corresponding code in "demo.py".
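Roughly, a stripped-down demo could look like this (untested sketch; the builder name in models.py, the weight path, and the output shape are assumptions, so adjust them to your local copy):

```python
# Untested sketch of a minimal single-image demo; builder name, weight path,
# and output shape are assumptions -- adjust to your copy of the repo.
import cv2
import numpy as np
from models import build_encoder_decoder   # assumed model factory in models.py

img_rows, img_cols = 320, 320

model = build_encoder_decoder()                         # assumed to build the 320x320 network
model.load_weights('path/to/pretrained_weights.hdf5')   # the released checkpoint

image = cv2.imread('my_image.png')                      # BGR, any size
trimap = cv2.imread('my_trimap.png', 0)                 # single-channel trimap

image = cv2.resize(image, (img_cols, img_rows), interpolation=cv2.INTER_CUBIC)
trimap = cv2.resize(trimap, (img_cols, img_rows), interpolation=cv2.INTER_NEAREST)

# 4-channel input: RGB + trimap, scaled to [0, 1]
x = np.empty((1, img_rows, img_cols, 4), dtype=np.float32)
x[0, :, :, 0:3] = image / 255.0
x[0, :, :, 3] = trimap / 255.0

pred = model.predict(x)
alpha = pred.reshape((img_rows, img_cols))              # adjust if your model's output shape differs
cv2.imwrite('alpha.png', (alpha * 255.0).astype(np.uint8))
```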

rainsun1 avatar Feb 08 '19 04:02 rainsun1

Does the image need to be 320x320? Can I run the code with the original image size?

usalexsantos avatar Feb 14 '19 14:02 usalexsantos

You could just modify demo.py.

HWNHJJ avatar Feb 23 '19 01:02 HWNHJJ

I am recreating the experiment and have some questions I would appreciate advice on. Here is my contact information: [email protected], [email protected], or QQ 1820641671.

HWNHJJ avatar Feb 23 '19 02:02 HWNHJJ

Perhaps you can modify the input shape from (320, 320, 4) to (None, None, 4) at test time?
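The idea would be to make the spatial dimensions dynamic when building the test-time graph, something like the following (untested; whether the rest of the network tolerates None dimensions is the open question):

```python
# Untested sketch: build the test-time graph with dynamic spatial dimensions
# so the convolutional weights can be applied to arbitrary image sizes.
from keras.layers import Input

# original fixed-size input
# input_tensor = Input(shape=(320, 320, 4))

# variable-size variant for inference
input_tensor = Input(shape=(None, None, 4))
```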

rainsun1 avatar Feb 23 '19 10:02 rainsun1

@rainsun1 I faced a similar issue: I tried to modify the input shape from (320, 320, 4) to (None, None, 4) in models.py, but I get the following error: ValueError: Tried to convert 'shape' to a tensor and failed. Error: None values not supported.

ahsanbarkati avatar Mar 19 '19 13:03 ahsanbarkati

@ahsanbarkati Where does the error come from? Is it in the shape computation from a layer?

rainsun1 avatar Mar 20 '19 05:03 rainsun1

You can look at the complete error log here: https://pastebin.com/2Pw1mVF2. The error is on this line: origReshaped = Reshape(shape)(orig_5)

ahsanbarkati avatar Mar 20 '19 05:03 ahsanbarkati

The training data is not needed; you only have to provide an input RGB image, a trimap (320x320), and a trained model. You can modify the corresponding code in "demo.py".

I have the image and trimap and have also downloaded the pre-trained model, but I have no clue how to modify demo.py to make it work for my own test. Also, like other people said, it would be nice to be able to change the image size as well. Can you provide another demo script for those of us who don't have the Adobe dataset? A lot of us don't have it anyway. Thanks!

yxt132 avatar Apr 26 '19 03:04 yxt132

You have to change the input shape of the network and train the model; the height and width of the input shape should be equal.

shartoo avatar Apr 26 '19 09:04 shartoo

If you need a larger size, you are probably best off providing it in patches and then reconstructing the result. There are some pre-existing Python modules to do so (https://github.com/adamrehn/slidingwindow, for example), but it's pretty easy to write the loop yourself.
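For example, a bare-bones tiling loop could look like this (untested sketch; predict_patch is a hypothetical wrapper around model.predict for a single 320x320 RGB+trimap tile, and there is no overlap or blending here, which overlapping windows would normally improve):

```python
# Untested sketch: run the 320x320 model over tiles of a larger image/trimap
# pair and stitch the predicted alpha back together. No overlap/blending here;
# overlapping windows with blending usually give cleaner seams.
import numpy as np

def matte_in_patches(image, trimap, predict_patch, patch=320):
    h, w = trimap.shape[:2]
    alpha = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            y2, x2 = min(y + patch, h), min(x + patch, w)
            tile = np.zeros((patch, patch, 4), dtype=np.float32)   # zero-pad edge tiles
            tile[:y2 - y, :x2 - x, 0:3] = image[y:y2, x:x2] / 255.0
            tile[:y2 - y, :x2 - x, 3] = trimap[y:y2, x:x2] / 255.0
            pred = predict_patch(tile)                             # (320, 320) alpha for this tile
            alpha[y:y2, x:x2] = pred[:y2 - y, :x2 - x]
    return alpha
```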

peachthiefmedia avatar Jun 21 '19 23:06 peachthiefmedia

@rainsun1 I faced a similar issue: I tried to modify the input shape from (320, 320, 4) to (None, None, 4) in models.py, but I get the following error: ValueError: Tried to convert 'shape' to a tensor and failed. Error: None values not supported.

If the input shape is None, maybe you should use tf.shape(x)[index] to compute the size, not x.get_shape().as_list().
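A toy illustration of the difference (untested; Keras with the TF backend is assumed):

```python
# Untested illustration: with a (None, None, 4) input the static shape contains
# None entries, which Reshape(shape) cannot handle; the dynamic tf.shape() tensor
# is always defined at run time and can be used inside tf.reshape / Lambda layers.
import tensorflow as tf
from keras.layers import Input

x = Input(shape=(None, None, 4))

static_shape = x.get_shape().as_list()   # [None, None, None, 4] -> the Nones break Reshape
dynamic_shape = tf.shape(x)              # int32 tensor, resolved at run time
height = dynamic_shape[1]
width = dynamic_shape[2]
```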

FantasyJXF avatar Jul 05 '19 10:07 FantasyJXF

You can also construct the model with a larger shape, such as: image_size = (800, 800); input_shape = image_size + (4,); model = build_encoder_decoder(shapeInput=input_shape). After model prediction, you can crop the result to the original size of the image.
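Putting that together, something like the following (untested sketch; build_encoder_decoder(shapeInput=...) is assumed to be a locally modified builder that accepts an input shape, and loading the 320x320 weights assumes the network is fully convolutional):

```python
# Untested sketch: build the network at 800x800, zero-pad the input up to that
# size, predict, then crop the alpha back to the original image size.
import cv2
import numpy as np
from models import build_encoder_decoder   # assumed locally modified builder

image_size = (800, 800)
input_shape = image_size + (4,)
model = build_encoder_decoder(shapeInput=input_shape)
model.load_weights('path/to/pretrained_weights.hdf5')

image = cv2.imread('my_image.png')           # must fit within 800x800
trimap = cv2.imread('my_trimap.png', 0)
h, w = trimap.shape[:2]

x = np.zeros((1,) + input_shape, dtype=np.float32)
x[0, :h, :w, 0:3] = image / 255.0            # zero-pad up to 800x800
x[0, :h, :w, 3] = trimap / 255.0

pred = model.predict(x).reshape(image_size)  # adjust if the output shape differs
alpha = pred[:h, :w]                         # crop back to the original size
cv2.imwrite('alpha.png', (alpha * 255.0).astype(np.uint8))
```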

rainsun1 avatar Sep 28 '19 13:09 rainsun1