VIBE
[FEATURE] How can I modify demo.py to run inference on a camera?
I want to analyze the camera's video data in real time. How should I revise demo.py for this task?
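One common starting point (not part of the repo; the pipeline hook below is a placeholder) is to swap demo.py's file-based frame reader for an OpenCV webcam loop, converting each frame from OpenCV's BGR layout before feeding the model. A minimal sketch, assuming the rest of the VIBE pipeline is wrapped behind a per-frame call:

```python
import numpy as np

def bgr_to_rgb(frame):
    """OpenCV delivers BGR frames; most PyTorch backbones expect RGB."""
    return frame[:, :, ::-1]

def center_crop_square(frame):
    """Square center crop so every frame has a consistent aspect ratio."""
    h, w = frame.shape[:2]
    s = min(h, w)
    y0, x0 = (h - s) // 2, (w - s) // 2
    return frame[y0:y0 + s, x0:x0 + s]

# Set True on a machine with a webcam. The loop mirrors the structure of
# demo.py's per-frame processing; run_vibe_on_frame is a placeholder name
# for the person detector + VIBE forward pass, not an actual repo function.
RUN_CAMERA = False
if RUN_CAMERA:
    import cv2
    cap = cv2.VideoCapture(0)          # device 0 = default webcam
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            rgb = bgr_to_rgb(center_crop_square(frame))
            # run_vibe_on_frame(rgb)   # placeholder: detector + VIBE
            cv2.imshow("camera", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```

The helpers are kept separate from the capture loop so the preprocessing can be reused unchanged whether frames come from a file or a live device.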
Hi @JiangWeiHn, Did you get any way for realtime camera inference? Please share with me in case you got something.
I have tried importing multiprocessing in demo.py, but it still runs slowly, with a delay of about three to four seconds. When I profiled the process, the rendering step took most of the time. So far I have no idea how to improve that part of the program.
I think SPIN is a better fit for this application since it operates at the frame level. SPIN's delay will be the processing time of the backbone on a single image, not on a sequence of frames.
What if we don't render the SMPL mesh and instead just extract the corresponding 3D joints to animate a predefined skeleton or rig (a simple edge-based skeleton) in real time? I think it would take less time, and it would also be useful for generating animation files that can be used in any 3D tool like Blender.
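A rough sketch of that idea: project the predicted 3D joints to the image with a weak-perspective camera (the `[s, tx, ty]` convention used by HMR-style models) and draw only the joints, skipping mesh rasterization entirely. The edge list and drawing helper below are illustrative, not the repo's actual skeleton definition; in practice you would draw the bones with `cv2.line` over the edge list.

```python
import numpy as np

# Illustrative edge list for a simple stick figure
# (indices are placeholders, not VIBE's actual joint ordering).
SKELETON_EDGES = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)]

def weak_perspective_project(joints3d, cam, img_size):
    """Project Nx3 joints with a weak-perspective camera [s, tx, ty]:
    x2d = s * (x + tx), y2d = s * (y + ty), then map the result from
    [-1, 1] normalized coordinates to pixel coordinates."""
    s, tx, ty = cam
    x = s * (joints3d[:, 0] + tx)
    y = s * (joints3d[:, 1] + ty)
    pts = np.stack([x, y], axis=1)                  # still in [-1, 1]
    return ((pts + 1.0) * 0.5 * img_size).astype(int)

def draw_joints(canvas, pts2d):
    """Mark each projected joint with a 3x3 dot (pure NumPy stand-in;
    bones would be cv2.line segments over SKELETON_EDGES)."""
    h, w = canvas.shape[:2]
    for x, y in pts2d:
        if 1 <= x < w - 1 and 1 <= y < h - 1:
            canvas[y - 1:y + 2, x - 1:x + 2] = (0, 255, 0)
    return canvas
```

Compared with rasterizing ~6890 SMPL vertices, drawing a few dozen joints and edges is essentially free per frame.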
Most of the compute time per frame is spent in the backbone (ResNet-50 in HMR), so removing the mesh generation (the SMPL model) is unlikely to yield a significant speedup. The fact that VIBE uses multiple frames (a GRU over time) is probably the main latency factor compared to SPIN.
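Before dropping any stage it is worth confirming where the time actually goes. A minimal per-stage timing sketch (the three stage functions are dummy stand-ins for the real backbone, temporal encoder, and SMPL regression):

```python
import time

def timed(stage_fn, *args):
    """Run one pipeline stage and return (result, elapsed_seconds)."""
    t0 = time.perf_counter()
    out = stage_fn(*args)
    return out, time.perf_counter() - t0

# Illustrative stand-ins for the real stages:
def backbone(x):   return x        # e.g. ResNet-50 features
def temporal(x):   return x        # e.g. GRU over the frame window
def smpl_head(x):  return x        # SMPL parameter regression + mesh

feats, t_bb   = timed(backbone, "frame")
seq,   t_tmp  = timed(temporal, feats)
mesh,  t_smpl = timed(smpl_head, seq)
print({"backbone": t_bb, "temporal": t_tmp, "smpl": t_smpl})
```

Wrapping each real stage this way makes it obvious whether the mesh step or the backbone dominates on a given GPU.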
Thanks a lot, I will try SPIN. If anyone finds something, please share it here.
I have opened a PR for inference from a camera using VIBE, and it works well on my system (GTX 1060 6 GB). Inference speed is roughly 15 frames per second with a sequence length of 4 and a yolo_img_size of 256. Changing the sequence length doesn't change the speed much, since only the encoder and regressor need to run on the whole sequence. This is without the rendered results being displayed in real time (that is far too slow), but almost all other features are intact. Do let me know if any other changes are needed.
Hope it helps. #40
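The seqlen-4 setup described above amounts to a sliding frame window: collect frames until the window is full, then run the temporal encoder on the current window for each new frame. A minimal sketch of that buffering logic (names are illustrative, not the PR's actual code):

```python
from collections import deque

class SlidingWindow:
    """Fixed-length frame window for per-frame temporal inference."""

    def __init__(self, seqlen=4):
        self.seqlen = seqlen
        self.frames = deque(maxlen=seqlen)  # oldest frame drops automatically

    def push(self, frame):
        """Add a frame; return the current window once full, else None."""
        self.frames.append(frame)
        if len(self.frames) == self.seqlen:
            return list(self.frames)
        return None

# After a warm-up of seqlen frames, every push() yields the latest window,
# so the GRU + regressor run once per incoming frame on a short sequence.
```

This explains why a larger seqlen costs little extra: per frame, only the encoder/regressor pass over the window grows, while detection and backbone features are computed once per frame regardless.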
@Pranjal2041 Could you explain how you ran the project for online inference? I could not find flags or any other instructions.
Hi @EvgeniaKomleva, I created a new file, 'live_inference.py'. You can run that for online inference.