
When will your DensePose extractor be open-sourced?

Open yyyouy opened this issue 1 year ago • 8 comments

Thanks for your excellent work. May I ask when your DensePose extractor will be open-sourced? I used the DensePose extractor from Simple Magic Animate, and the evaluation results (FID, FVD) on the TED-Talk dataset are quite different from the results you published.

I wanted to ask whether the difference is due to the DensePose extraction itself.

yyyouy avatar Jan 03 '24 10:01 yyyouy

Hi, we used the detectron2 library to estimate DensePose, and the checkpoint we released was not trained on the TED-Talk dataset.

zcxu-eric avatar Jan 03 '24 13:01 zcxu-eric

> Hi, we used the detectron2 library to estimate DensePose, and the checkpoint we released was not trained on the TED-Talk dataset.

I am also interested in the pre-trained model trained on the TED-Talk dataset. Do you have a plan to release this checkpoint? Thank you very much.

Delicious-Bitter-Melon avatar Jan 03 '24 13:01 Delicious-Bitter-Melon

> Hi, we used the detectron2 library to estimate DensePose, and the checkpoint we released was not trained on the TED-Talk dataset.

May I ask whether the results on the TED-Talk dataset reported in your paper were obtained by training on the TED-Talk training set and then testing on the TED-Talk test set? I am also curious about the color discrepancy in the backgrounds of the DensePose results: those extracted by detectron2 appear black, while the ones presented by your team appear purple.

yyyouy avatar Jan 03 '24 14:01 yyyouy

> Hi, we used the detectron2 library to estimate DensePose, and the checkpoint we released was not trained on the TED-Talk dataset.
>
> May I ask whether the results on the TED-Talk dataset reported in your paper were obtained by training on the TED-Talk training set and then testing on the TED-Talk test set? I am also curious about the color discrepancy in the backgrounds of the DensePose results: those extracted by detectron2 appear black, while the ones presented by your team appear purple.

Yes, it was trained on TED-Talk. detectron2 has different visualizers; we use the semantic map one, and its background is purple.

zcxu-eric avatar Jan 04 '24 06:01 zcxu-eric

> Yes, it was trained on TED-Talk. detectron2 has different visualizers; we use the semantic map one, and its background is purple.

Thank you very much. Do you have a plan to release this checkpoint?

yyyouy avatar Jan 04 '24 07:01 yyyouy

> Yes, it was trained on TED-Talk. detectron2 has different visualizers; we use the semantic map one, and its background is purple.

We have been using DensePose with the following command:

    python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl image_path dp_segm -v

Furthermore, we have experimented with various visualizers listed below:

"dp_contour": DensePoseResultsContourVisualizer, "dp_segm": DensePoseResultsFineSegmentationVisualizer, "dp_u": DensePoseResultsUVisualizer, "dp_v": DensePoseResultsVVisualizer, "dp_iuv_texture": DensePoseResultsVisualizerWithTexture, "dp_cse_texture": DensePoseOutputsTextureVisualizer, "dp_vertex": DensePoseOutputsVertexVisualizer, "bbox": ScoredBoundingBoxVisualizer,

However, we noticed an issue where the background appears black instead of purple. Could you possibly shed light on why this might be happening?

yyyouy avatar Jan 04 '24 07:01 yyyouy

> Hi, we used the detectron2 library to estimate DensePose, and the checkpoint we released was not trained on the TED-Talk dataset.
>
> I am also interested in the pre-trained model trained on the TED-Talk dataset. Do you have a plan to release this checkpoint? Thank you very much.

Yes, we will release this ckpt.

zcxu-eric avatar Jan 04 '24 07:01 zcxu-eric

> Yes, it was trained on TED-Talk. detectron2 has different visualizers; we use the semantic map one, and its background is purple.
>
> We have been using DensePose with the following command:
>
>     python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml https://dl.fbaipublicfiles.com/densepose/densepose_rcnn_R_50_FPN_s1x/165712039/model_final_162be9.pkl image_path dp_segm -v
>
> Furthermore, we have experimented with various visualizers listed below:
>
>     "dp_contour": DensePoseResultsContourVisualizer,
>     "dp_segm": DensePoseResultsFineSegmentationVisualizer,
>     "dp_u": DensePoseResultsUVisualizer,
>     "dp_v": DensePoseResultsVVisualizer,
>     "dp_iuv_texture": DensePoseResultsVisualizerWithTexture,
>     "dp_cse_texture": DensePoseOutputsTextureVisualizer,
>     "dp_vertex": DensePoseOutputsVertexVisualizer,
>     "bbox": ScoredBoundingBoxVisualizer,
>
> However, we noticed an issue where the background appears black instead of purple. Could you possibly shed light on why this might be happening?

Please use "dp_segm" and change the black background to a canvas filled with RGB (84, 1, 68).
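For anyone hitting the same issue, a minimal sketch of that recoloring step. This assumes the `dp_segm` output was rendered on a black canvas; the function name and the pure-black-pixel heuristic are mine, not part of detectron2:

```python
import numpy as np

def purple_background(vis: np.ndarray) -> np.ndarray:
    """Recolor pure-black (undrawn) pixels of an H x W x 3 RGB array to RGB (84, 1, 68)."""
    out = vis.copy()
    background = (out == 0).all(axis=-1)  # pixels the visualizer never drew on
    out[background] = np.array([84, 1, 68], dtype=out.dtype)
    return out
```

A pixel that happens to be pure black inside the body would also be recolored, so masking by the detected person box first may be safer in practice.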

zcxu-eric avatar Jan 04 '24 07:01 zcxu-eric

> Hi, we used the detectron2 library to estimate DensePose, and the checkpoint we released was not trained on the TED-Talk dataset.
>
> I am also interested in the pre-trained model trained on the TED-Talk dataset. Do you have a plan to release this checkpoint? Thank you very much.
>
> Yes, we will release this ckpt.

Thanks for your reply. Do you compute FID between the 100 generated images and the corresponding 100 real images for each video and then average over all videos, or do you compute FID directly between all generated images (100 × the number of videos) and all real images (100 × the number of videos)?

Delicious-Bitter-Melon avatar Jan 07 '24 11:01 Delicious-Bitter-Melon
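For context, the two aggregation protocols described in the question can be sketched with a toy Fréchet distance on synthetic features (FID is this distance computed on Inception features; the `frechet_distance` helper and the random features below are illustrative assumptions, not the authors' evaluation code):

```python
import numpy as np

def _sqrtm_psd(mat: np.ndarray) -> np.ndarray:
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(mat)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two (N, D) feature sets."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    a_half = _sqrtm_psd(cov_a)
    # tr(sqrtm(cov_a @ cov_b)) computed via the symmetric product a_half @ cov_b @ a_half
    tr_covmean = np.trace(_sqrtm_psd(a_half @ cov_b @ a_half))
    return float(((mu_a - mu_b) ** 2).sum() + np.trace(cov_a + cov_b) - 2.0 * tr_covmean)

rng = np.random.default_rng(0)
real = rng.normal(size=(5, 100, 8))            # 5 videos x 100 frames x 8-dim features
fake = rng.normal(loc=0.1, size=(5, 100, 8))

# Protocol A: FID per video, averaged over all videos.
fid_per_video = float(np.mean([frechet_distance(r, f) for r, f in zip(real, fake)]))

# Protocol B: one FID over all frames pooled together.
fid_pooled = frechet_distance(real.reshape(-1, 8), fake.reshape(-1, 8))
```

The two protocols generally give different numbers, since per-video statistics are estimated from far fewer frames, so it matters which one a paper reports.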

> Hi, we used the detectron2 library to estimate DensePose, and the checkpoint we released was not trained on the TED-Talk dataset.
>
> I am also interested in the pre-trained model trained on the TED-Talk dataset. Do you have a plan to release this checkpoint? Thank you very much.
>
> Yes, we will release this ckpt.

I am also looking forward to this ckpt.

Worromots avatar Jan 17 '24 02:01 Worromots