keras-deeplab-v3-plus
How can I run this project?
I ran extract_weights.py, then load_weights.py, and then model.py, but I did not receive any result. Please help me! How can I run this and get a result?
What result are you talking about? After running load_weights.py you get a model .h5 file which can be used for segmentation. In fact, you can simply load the weights while defining the Deeplab model.
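For reference, something along these lines (the .h5 path in the second variant is only a placeholder for whatever load_weights.py produced in your setup):

```python
from model import Deeplabv3  # model.py from this repo

# Variant 1: build the model with the bundled Pascal VOC weights
deeplab_model = Deeplabv3(weights='pascal_voc', input_shape=(512, 512, 3), classes=21)

# Variant 2: build an uninitialized model and load the converted .h5 yourself
# deeplab_model = Deeplabv3(weights=None, input_shape=(512, 512, 3), classes=21)
# deeplab_model.load_weights('path/to/converted_deeplab_weights.h5')  # placeholder path
```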
A result like this: https://github.com/bonlime/keras-deeplab-v3-plus/blob/master/imgs/seg_results2.png
Initiate a model, then use model.predict on a preprocessed image (scale, pad, divide by 127.5, subtract 1).
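Roughly like this; the exact resize/pad logic below is my own sketch of "scale, pad", and the image file name is a placeholder:

```python
import numpy as np
from PIL import Image

def preprocess(image_path, size=512):
    """Scale so the longer side is `size`, zero-pad to size x size, map to [-1, 1].
    Assumes a 3-channel RGB input image."""
    img = np.array(Image.open(image_path))
    ratio = float(size) / max(img.shape[:2])
    new_w, new_h = int(img.shape[1] * ratio), int(img.shape[0] * ratio)
    resized = np.array(Image.fromarray(img).resize((new_w, new_h)))
    padded = np.pad(resized,
                    ((0, size - new_h), (0, size - new_w), (0, 0)),
                    mode='constant')
    return np.expand_dims(padded / 127.5 - 1., axis=0)  # shape (1, size, size, 3)

x = preprocess('image.jpg')   # placeholder file name
pred = deeplab_model.predict(x)
```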
I initiate a model like this:
deeplab_model = Deeplabv3(input_shape=(512,512,3), classes = 4, weights='pascal_voc', OS=8)
and then use
deeplab_model.predict(x)
where x is a preprocessed image. The return is an array with shape (1, 512, 512, 4).
How can I generate a segmentation map image from this array?
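One common way (not specific to this repo) is to take the argmax over the class axis; the color palette below is an arbitrary choice for visualization:

```python
import numpy as np

pred = deeplab_model.predict(x)               # shape (1, 512, 512, 4)
labels = np.argmax(pred.squeeze(), axis=-1)   # shape (512, 512), values in 0..3

# optional: map class ids to RGB colors so the result can be saved as an image
palette = np.array([[0, 0, 0],      # class 0: background
                    [128, 0, 0],    # class 1
                    [0, 128, 0],    # class 2
                    [0, 0, 128]],   # class 3
                   dtype=np.uint8)
seg_img = palette[labels]           # shape (512, 512, 3) RGB label image
```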
Hi, why should the model be trained with OS=16 and only inferenced with OS=8? Can it not be trained with OS=8?
@zyfsa The model is too big; with OS=8 your batch size is going to be 6 or even less, and training will take too long. In the original paper they also train with OS=16.
The best solution is to start training with only the Xception backbone weights frozen, so you don't have to reduce the batch size too much.
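Untested sketch of that workflow, assuming the saved weights stay compatible between OS settings (as the bundled pascal_voc checkpoint is); the file name is a placeholder:

```python
# Train at OS=16, where the memory footprint allows a reasonable batch size
train_model = Deeplabv3(input_shape=(512, 512, 3), classes=21, weights='pascal_voc', OS=16)
# ... train_model.fit(...) ...
train_model.save_weights('trained_os16.h5')   # placeholder file name

# Rebuild the graph at OS=8 for inference and reuse the trained weights
infer_model = Deeplabv3(input_shape=(512, 512, 3), classes=21, weights=None, OS=8)
infer_model.load_weights('trained_os16.h5')
```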
"Initiate a model, then use model.predict on a preprocessed image (scale, pad, divide by 255, subtract 1)" @bonlime should it be divide by 127.5, subtract 1?
@lsymuyu you are right! Edited my comment
Have you trained it on your own data? I tried to train it on my own dataset with only two classes:

import tensorflow as tf
from keras import optimizers

green_model = Deeplab_model.Deeplabv3(input_shape=(521, 521, 3), classes=2, weights='pascal_voc', OS=16)
# green_model.summary()

# I freeze all layers except the last one. Is this OK?
for layer in green_model.layers[:-1]:
    layer.trainable = False

def pixelwise_crossentropy(target, output):
    output = tf.clip_by_value(output, 10e-8, 1. - 10e-8)
    return -tf.reduce_sum(target * tf.log(output))

green_model.compile(loss=pixelwise_crossentropy,
                    optimizer=optimizers.SGD(lr=0.1, momentum=0.9),
                    metrics=['accuracy'])

However, the results are not good at all. Any suggestions? Thanks in advance.
I've tried to train this model with Data Science Bowl data, but the results were also bad. I will try to obtain good results in the future. Try freezing the first 356 layers, and all BatchNorm layers above that.
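A sketch of that suggestion applied to the model from the comment above; the index 356 comes from this comment, and checking against BatchNormalization is my interpretation of "all BatchNorm layers above that":

```python
from keras.layers import BatchNormalization

# Freeze the first 356 layers entirely
for layer in green_model.layers[:356]:
    layer.trainable = False

# Also freeze every BatchNorm layer in the remaining, trainable part
for layer in green_model.layers[356:]:
    if isinstance(layer, BatchNormalization):
        layer.trainable = False

# Recompile so the new trainable flags take effect
green_model.compile(loss=pixelwise_crossentropy,
                    optimizer=optimizers.SGD(lr=0.1, momentum=0.9),
                    metrics=['accuracy'])
```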
Thanks. Besides Deeplab, are there any other semantic segmentation models worth trying?
Hello, I also want to train the model on my own dataset. How can I make the model work on images with different H and W, like (384, 512, 3)? Right now the code only handles square images, like (384, 384) or (512, 512). Thank you.
@zyfsa I recommend padding your images with zeros to make height == width.
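For example, a small helper along these lines (the function name and the choice to pad on the bottom/right are my own):

```python
import numpy as np

def pad_to_square(img, mask=None):
    """Zero-pad an (H, W, C) image (and optionally its label mask) so that H == W."""
    h, w = img.shape[:2]
    size = max(h, w)
    img = np.pad(img, ((0, size - h), (0, size - w), (0, 0)), mode='constant')
    if mask is not None:
        mask = np.pad(mask, ((0, size - h), (0, size - w)), mode='constant')
    return img, mask

image = np.zeros((384, 512, 3), dtype=np.uint8)   # example of an H != W input
square, _ = pad_to_square(image)                  # -> shape (512, 512, 3)
```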
Hi guys! How do I train this model? I am not familiar with Keras, but I have learned TensorFlow. Could you give me some ideas about training? Thanks!!!
@ClaireTun you can train it like any other keras model
This repo has a good example of training on the Pascal VOC dataset:
https://github.com/nicolov/segmentation_keras
@bonlime Maybe this issue can be closed by now, since it is explained in the readme.md? https://github.com/bonlime/keras-deeplab-v3-plus#how-to-get-labels
Though I would not call it "how to get labels" but rather "running the network" or "running inference".
I've made a PR that contains an example script to run the model. I am new to Deep Learning and any feedback is welcome.
https://github.com/bonlime/keras-deeplab-v3-plus/pull/90
I'm trying to start a project that provides more of a Quickstart to running this model. It should run with a simple pip install and no other modifications. It provides a CLI and a Dockerized method to run it. It uses TF 2.0 Alpha and Python 3.7. You can check it out here: https://github.com/sachsbl/segmental
I initiate a model like this:
deeplab_model = Deeplabv3(input_shape=(512,512,3), classes = 4, weights='pascal_voc', OS=8)
and then use deeplab_model.predict(x)
where x is a preprocessed image. The return is an array with shape (1, 512, 512, 4). How can I generate a segmentation map image from this array?
Why do I get this error?
TypeError                                 Traceback (most recent call last)
16 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/init_ops.py in __call__(self, shape, dtype, partition_info)
    497         scale /= max(1., fan_out)
    498       else:
--> 499         scale /= max(1., (fan_in + fan_out) / 2.)
    500     if self.distribution == "normal" or self.distribution == "truncated_normal":
    501       # constant taken from scipy.stats.truncnorm.std(a=-2, b=2, loc=0., scale=1.)

TypeError: unsupported operand type(s) for /: 'Dimension' and 'float'
Probably you are on TensorFlow 1.13; try 2.0 beta.