I already use 256x192
You don't need to train with the DeConv layer; you only need to make the deploy file in a similar way to my example.
@deepalianeja Please refer to examples/deconv_deploy.prototxt and examples/unpooling.ipynb for details.
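For reference, a deploy-only Deconvolution stack can be sketched with pycaffe's `NetSpec` roughly like this. This is only an illustration, not the actual examples/deconv_deploy.prototxt; the layer names, input shape, and filter sizes are placeholders:

```python
# Sketch only: generate a deploy-style prototxt that adds a Deconvolution
# layer on top of a conv/pool stack. No training is involved; the deconv
# weights can simply mirror the trained conv weights at deploy time.
import caffe
from caffe import layers as L, params as P

def make_deconv_deploy(prototxt_path):
    n = caffe.NetSpec()
    # Deploy nets take a plain input blob instead of a data layer.
    n.data = L.Input(shape=dict(dim=[1, 3, 192, 256]))
    n.conv1 = L.Convolution(n.data, num_output=16, kernel_size=3, pad=1)
    n.relu1 = L.ReLU(n.conv1, in_place=True)
    n.pool1 = L.Pooling(n.conv1, pool=P.Pooling.MAX, kernel_size=2, stride=2)
    # The "decoder" side only exists in the deploy file.
    n.deconv1 = L.Deconvolution(
        n.pool1,
        convolution_param=dict(num_output=3, kernel_size=3, pad=1,
                               bias_term=False),
        param=[dict(lr_mult=0)])
    with open(prototxt_path, 'w') as f:
        f.write(str(n.to_proto()))

make_deconv_deploy('deconv_deploy_sketch.prototxt')
```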
@deepalianeja Well, as for guided backpropagation, I think the example does the right thing, just in an ugly way. Do you think so too?
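The core rule behind guided backpropagation is simple, though. In plain NumPy it is roughly the following (just a sketch of the ReLU backward rule, not the example's actual code):

```python
import numpy as np

def guided_relu_backward(grad_top, relu_output):
    # Guided backprop at a ReLU: keep a gradient only where the incoming
    # gradient is positive AND the forward activation was positive.
    return np.where((grad_top > 0) & (relu_output > 0), grad_top, 0.0)

# Toy check: negative gradients and dead units are both zeroed out.
grad_top = np.array([0.5, -0.3, 0.8, 1.2])
relu_output = np.array([1.0, 2.0, 0.0, 3.0])
print(guided_relu_backward(grad_top, relu_output))  # [0.5 0.  0.  1.2]
```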
Hi @bemoregt, first you should use a [face detector](mmlab.ie.cuhk.edu.hk/archive/CNN/data/code_face.zip) to find bounding boxes, then you can use Python to convert the data into that format. However, since different datasets store...
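Roughly, the conversion could look like this. It is only a sketch: the output line format `<image path> <4 bbox values> <landmark coordinates>` is assumed from the example line quoted later in this thread, and `detect_face` is a placeholder for whatever detector you use:

```python
# Sketch of writing an annotation text file, one line per image.
def write_annotation_file(samples, out_path, detect_face):
    """samples: list of (image_path, landmarks), where landmarks is a flat
    list of coordinates, e.g. [x1, y1, x2, y2] for two points."""
    with open(out_path, 'w') as f:
        for image_path, landmarks in samples:
            bbox = detect_face(image_path)  # placeholder detector call
            fields = [image_path] + list(bbox) + list(landmarks)
            f.write(' '.join(str(v) for v in fields) + '\n')

# Example usage with made-up numbers:
# write_annotation_file(
#     [('/home/user/Pictures/eyes/1.jpeg', [67, 101, 10, 10])],
#     'train_anno.txt',
#     detect_face=my_detector)
```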
@bemoregt , of course you can. Thanks for your attention.
@bemoregt Hi, you should modify the code so that it can be used to detect 2 points. For example, you should modify the [input pipeline](https://github.com/mariolew/TF-FaceLandmarkDetection/blob/master/libs/tfpipeline.py#L12) to `record_defaults = [[""]]`...
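For two points, the decoding part could look roughly like this. It is a sketch only, not the repo's actual tfpipeline.py, and it assumes each line is `<image path> x1 y1 x2 y2`:

```python
import tensorflow as tf  # TF 1.x queue-based API, like the repo's pipeline

def read_two_point_record(filename_queue):
    # One string column for the image path, then 4 floats for two (x, y)
    # landmarks. If your lines also carry bounding-box columns, add
    # defaults for those columns too.
    record_defaults = [[""]] + [[0.0]] * 4
    reader = tf.TextLineReader()
    _, line = reader.read(filename_queue)
    fields = tf.decode_csv(line, record_defaults=record_defaults,
                           field_delim=' ')
    image_path = fields[0]
    landmarks = tf.reshape(tf.stack(fields[1:]), [2, 2])  # 2 points x (x, y)
    return image_path, landmarks
```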
@bemoregt In your text file line `/home/vcamp1/Pictures/eyes/1.jpeg 2 177 2 177 67 101 10 10`, you have two points, 67 101 10 10, so the reshape can indeed happen, i.e. reshape...
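In other words, the line splits into a path, four bounding-box values, and four landmark values, so the reshape works out. The split below is only illustrative, and it assumes the last four numbers are stored as x1 y1 x2 y2:

```python
import numpy as np

line = '/home/vcamp1/Pictures/eyes/1.jpeg 2 177 2 177 67 101 10 10'
fields = line.split()
path, numbers = fields[0], [float(v) for v in fields[1:]]
bbox, landmarks = numbers[:4], numbers[4:]   # 4 bbox values, 4 landmark values
points = np.array(landmarks).reshape(2, 2)   # -> [[67, 101], [10, 10]]
```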
@bemoregt Well, since deep learning requires a lot of data, data augmentation is needed. I've been using my code for various numbers of landmarks, so I don't think it's...
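As one illustration of what augmentation means for landmark data, here is a minimal horizontal-flip sketch. It is not the repo's augment.py; `flip_horizontal` and its argument layout are assumptions:

```python
import numpy as np

def flip_horizontal(image, landmarks):
    """Mirror an image and its landmarks left-to-right.

    image:     HxWxC numpy array
    landmarks: (N, 2) array of (x, y) pixel coordinates
    """
    height, width = image.shape[:2]
    flipped = image[:, ::-1]                       # reverse the columns
    mirrored = landmarks.copy().astype(np.float32)
    mirrored[:, 0] = (width - 1) - mirrored[:, 0]  # mirror x, keep y
    return flipped, mirrored
```

Note that for asymmetric landmark sets (e.g. left eye / right eye) you would also swap the corresponding point indices after mirroring.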
@bemoregt Did you modify augment.py? I used a text file like yours and met no error locally... Hmm, generally 50k~100k images are enough.