LIP_JPPNet
Code repository for Joint Body Parsing & Pose Estimation Network, T-PAMI 2018
Hello, I'm trying to run train_JPPNet-s2.py but I'm getting this error: InvalidArgumentError (see above for traceback): Cannot assign a device for operation 'Tower_0/strided_slice_4': Could not satisfy explicit device specification '/device:GPU:0'...
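A common workaround for this kind of TensorFlow 1.x device-placement error (a sketch, assuming the script constructs its own `tf.Session`; the variable names here are illustrative, not taken from the repo) is to enable soft placement so ops without a GPU kernel fall back to the CPU instead of failing on the explicit `/device:GPU:0` pin:

```python
import tensorflow as tf  # TF 1.x API, as used by the JPP-Net scripts

# allow_soft_placement lets TensorFlow move ops that cannot run on the
# requested GPU (e.g. some int ops) to the CPU instead of raising
# InvalidArgumentError during graph placement.
config = tf.ConfigProto(allow_soft_placement=True)
sess = tf.Session(config=config)
```

In train_JPPNet-s2.py you would pass this `config` wherever the session is created; whether that fully resolves the error depends on which op lacks a GPU kernel on your build.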
Thanks for your great work. My training loss seems strange; should I keep on training, or check my labels and pictures?
Hi~ Is there a way to modify the test batch size (and the number of GPUs used)? Thanks!
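A minimal sketch of the usual approach, assuming the evaluation script reads a batch-size constant near the top of the file (the exact constant name in evaluate_parsing_JPPNet-s2.py may differ): restrict visible GPUs through the environment before TensorFlow initializes, and edit the batch-size constant directly.

```python
import os

# Must be set before TensorFlow is imported/initialized: only GPU 0 is
# visible to the process, so multi-GPU machines use a single card.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Hypothetical constant mirroring the one in the evaluation script;
# raise it if your GPU memory allows, to speed up testing.
BATCH_SIZE = 4
```

Equivalently, `CUDA_VISIBLE_DEVICES=0 python evaluate_parsing_JPPNet-s2.py` from the shell has the same effect without editing the script.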
When I run train_JPPNet-s2.py, I find that the loss doesn't converge, so I want to know how to set the learning rate when training the joint model, or whether there are some tricks about...
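For reference, parsing networks in this family typically use a polynomial ("poly") learning-rate decay. The sketch below shows that schedule in isolation; the base rate, step count, and power are illustrative assumptions, not values confirmed from the training script.

```python
def poly_lr(base_lr, step, max_steps, power=0.9):
    """Polynomial decay: the rate shrinks smoothly from base_lr to 0.

    power=0.9 is the value commonly used with DeepLab-style parsing
    models; check train_JPPNet-s2.py for the repo's actual settings.
    """
    return base_lr * (1.0 - step / max_steps) ** power
```

If the loss diverges early, lowering `base_lr` (e.g. by 10x) while keeping the same schedule shape is a standard first experiment.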
I ran the script `evaluate_parsing_JPPNet-s2.py` below using the provided model and get lots of results like this (it happens nearly every time the person doesn't face directly at the...
The performance of the pre-trained model is slightly worse than that reported in the paper, especially the Mean Accuracy. These are the results of using the pre-trained model on the...
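Small gaps in Mean Accuracy can come from differences in how the metric is computed, so it is worth checking the definition. A minimal NumPy sketch of the standard definition (mean of per-class pixel accuracies, skipping classes absent from the ground truth); this is the conventional formula, not necessarily the exact evaluation code used in the repo:

```python
import numpy as np

def mean_accuracy(pred, label, num_classes):
    """Mean of per-class pixel accuracies over classes present in `label`."""
    accs = []
    for c in range(num_classes):
        mask = label == c            # ground-truth pixels of class c
        if mask.sum() == 0:
            continue                 # class absent: does not enter the mean
        accs.append((pred[mask] == c).mean())
    return float(np.mean(accs))
```

Classes that are rare in the test set dominate the variance of this metric, which is one reason Mean Accuracy is more sensitive to preprocessing differences than overall pixel accuracy.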
How can I run this code in real time using my webcam? Thank you
When I downloaded the LIP dataset, I found that your train_id in /datasets/lip/list/ is different from the LIP dataset's train_id. Is there a way to convert between the two data formats? Another question: how do you...
Hello, excuse me. Are the images in the LIP dataset cropped based on the bounding boxes annotated in MSCOCO, or did you annotate them manually?