4D-Humans
Use of pose_transformer_v2 (BERT-style transformer) while running the model on video
For the tracking demo on videos, the paper mentions that the BERT-style transformer model (pose_transformer_v2) enables future prediction and amodal completion of missing detections within the same framework.
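For context, my understanding of that claim is roughly the sketch below: missing or future frames of a track are replaced by a learned mask token, and the transformer predicts pose embeddings for those timesteps. This is only my own illustration with made-up names (MaskedPosePredictor, pose_tokens, missing_mask), not the actual 4D-Humans implementation:

```python
# Rough sketch (not the repo's code) of BERT-style amodal completion / future prediction:
# timesteps with missing detections (or future frames) get a learned mask token,
# and the transformer predicts their pose embeddings from the visible frames.
import torch
import torch.nn as nn

class MaskedPosePredictor(nn.Module):
    def __init__(self, dim=256, depth=4, heads=8, max_len=128):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_emb = nn.Parameter(torch.zeros(1, max_len, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, pose_tokens, missing_mask):
        # pose_tokens: (B, T, dim) per-frame pose embeddings of one track
        # missing_mask: (B, T) bool, True where the detection is missing or in the future
        x = torch.where(missing_mask[..., None],
                        self.mask_token.expand_as(pose_tokens),
                        pose_tokens)
        x = x + self.pos_emb[:, : x.shape[1]]
        return self.encoder(x)  # predicted embeddings for all timesteps, incl. masked ones

# Example: fill in two missed frames and one "future" frame of a 16-frame track
model = MaskedPosePredictor()
tokens = torch.randn(1, 16, 256)
missing = torch.zeros(1, 16, dtype=torch.bool)
missing[0, [5, 9, 15]] = True
pred = model(tokens, missing)  # (1, 16, 256)
```

If that is roughly what pose_transformer_v2 is meant to do, I would expect its predictions to be fed back into the tracks somewhere.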
However, in the PHALP.py script, the output of pose_transformer_v2 is deleted at line 260 (in PHALP.py) right after the model is run, and I can't find its output/values being used anywhere.
Where exactly does the code use pose_transformer_v2? Is it involved in the rendering process?