Deep-Image-Matting
How can we run demo without Adobe training set?
Hi,
Apparently the original training image set is not available to anyone not associated with a university. Is there a way to run the demo without it? You have graciously provided the trained model, so why is the training data needed for processing new images?
The training data is not needed and you only have to provide an input RGB image, a trimap (320x320) and a trained model. You can modify the corresponding codes in "demo.py".
Does the image need to be 320x320? Can I run the code with the original image size?
You could just modify demo.py.
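As a concrete starting point, here is a minimal NumPy sketch of the two pieces you need around `model.predict`: packing the RGB image and its trimap into the 4-channel input the network expects, and clamping the predicted alpha with the trimap afterwards. The function names are illustrative, not from the repo; the model itself would come from `build_encoder_decoder` plus the pre-trained weights.

```python
import numpy as np

def make_network_input(rgb, trimap, size=320):
    """Stack an RGB image and its trimap into the (1, size, size, 4)
    tensor the matting network expects.
    rgb: (size, size, 3) uint8; trimap: (size, size) uint8."""
    assert rgb.shape[:2] == trimap.shape[:2] == (size, size)
    x = np.empty((1, size, size, 4), dtype=np.float32)
    x[0, :, :, :3] = rgb / 255.0      # normalize RGB to [0, 1]
    x[0, :, :, 3] = trimap / 255.0    # trimap as the 4th channel
    return x

def refine_alpha(pred_alpha, trimap):
    """Trust the trimap where it is definite; keep the network's
    prediction only in the unknown (gray) region."""
    alpha = pred_alpha.copy()
    alpha[trimap == 0] = 0.0      # definite background
    alpha[trimap == 255] = 1.0    # definite foreground
    return alpha
```

With the model built and weights loaded, the middle step would be something like `pred_alpha = model.predict(x)[0, :, :, 0]`.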
I am recreating the experiment and have some questions; I hope you can advise me. Here is my contact information: [email protected], [email protected], or QQ 1820641671.
Perhaps you can modify the input shape from (320, 320, 4) to (None, None, 4) at test time?
@rainsun1 I faced a similar issue and I tried to modify input shape from (320,320,4) to (None,None,4) in models.py. But I get the following error: ValueError: Tried to convert 'shape' to a tensor and failed. Error: None values not supported.
@ahsanbarkati Where does the error come from? Is it in the shape computation from a layer?
You can look into the complete error log here: https://pastebin.com/2Pw1mVF2 The error is in this line: origReshaped = Reshape(shape)(orig_5)
> The training data is not needed and you only have to provide an input RGB image, a trimap (320x320) and a trained model. You can modify the corresponding codes in "demo.py".
I have the image and trimap and have also downloaded the pre-trained model, but I have no clue how to modify demo.py to make it work for my own test. Also, as others have said, it would be nice to be able to change the image size. Can you provide another demo script for testing without the Adobe dataset? A lot of us don't have it anyway. Thanks!
You have to change the input shape of the network and retrain the model, and the height and width of the input shape should be equal.
If you need a larger size, you are probably best off feeding the image in patches and then reconstructing the result. There are pre-existing Python modules for this (https://github.com/adamrehn/slidingwindow, for example), but it's also easy to write the loop yourself.
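A minimal sketch of that loop, assuming the image is at least one patch wide and tall. `predict_patch` is a stand-in for a call to `model.predict` on a single tile; overlapping predictions are averaged:

```python
import numpy as np

def predict_in_patches(image4, predict_patch, patch=320, stride=160):
    """Run a fixed-size model over a larger 4-channel image by tiling
    overlapping patches and averaging the overlaps.
    image4: (H, W, 4) with H, W >= patch;
    predict_patch: (patch, patch, 4) -> (patch, patch) alpha map."""
    h, w = image4.shape[:2]
    out = np.zeros((h, w), dtype=np.float64)
    weight = np.zeros((h, w), dtype=np.float64)
    ys = list(range(0, h - patch + 1, stride))
    xs = list(range(0, w - patch + 1, stride))
    # make sure the last row/column of patches reaches the image border
    if ys[-1] != h - patch:
        ys.append(h - patch)
    if xs[-1] != w - patch:
        xs.append(w - patch)
    for y in ys:
        for x in xs:
            tile = image4[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] += predict_patch(tile)
            weight[y:y + patch, x:x + patch] += 1.0
    return out / weight
```

An overlap (stride smaller than patch) helps hide seams between tiles; feathered blending weights would hide them further.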
> @rainsun1 I faced a similar issue and I tried to modify input shape from (320,320,4) to (None,None,4) in models.py. But I get the following error: ValueError: Tried to convert 'shape' to a tensor and failed. Error: None values not supported.
If the input shape is None, you should compute sizes with the dynamic tf.shape(x)[index], not the static x.get_shape().as_list(), which contains None entries.
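A small sketch of the idea, replacing a static Reshape with tf.reshape over tf.shape inside a Lambda layer so the graph works for (None, None, 4) inputs. This uses TF 2 / tf.keras names, and `dynamic_flatten` is an illustrative stand-in for the failing Reshape in models.py:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda
from tensorflow.keras.models import Model

def dynamic_flatten(t, channels):
    # tf.shape(t) is a runtime tensor, so it works even when the
    # static shape t.get_shape().as_list() contains None entries.
    s = tf.shape(t)
    return tf.reshape(t, [s[0], s[1] * s[2], channels])

inp = Input(shape=(None, None, 4))   # height/width unknown at build time
flat = Lambda(lambda t: dynamic_flatten(t, 4))(inp)
model = Model(inp, flat)
```

The same trick applies wherever models.py computes target shapes for Reshape or unpooling from the static shape.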
You can also construct the model with a larger fixed shape, such as: image_size = (800, 800); input_shape = image_size + (4,); model = build_encoder_decoder(shapeInput=input_shape). After model prediction, you can crop the result back to the original size of the image.
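If you go that route, a pair of small helpers can handle the padding and cropping around the fixed-size model. This is a sketch with illustrative names, assuming a (H, W, 4) image/trimap stack no larger than the model's input:

```python
import numpy as np

def pad_to(image4, size=800):
    """Zero-pad a (H, W, 4) image up to (size, size, 4) so it fits a
    model built with a fixed size x size input. Returns the padded
    image and the original (H, W) for cropping the prediction back."""
    h, w = image4.shape[:2]
    assert h <= size and w <= size
    padded = np.zeros((size, size, image4.shape[2]), dtype=image4.dtype)
    padded[:h, :w] = image4
    return padded, (h, w)

def crop_prediction(pred, orig_hw):
    """Crop a (size, size) prediction back to the original (H, W)."""
    h, w = orig_hw
    return pred[:h, :w]
```

The zero padding falls in the trimap's definite-background value, so the padded border should not disturb the prediction much; cropping then discards it entirely.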