QiQi
1. Did you apply [Halpe-FullBody](https://github.com/Fang-Haoshu/Halpe-FullBody) to extract the 2D keypoints? 2. How do you get the thorax and chest keypoints?
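My current guess (not confirmed by the authors) is that the thorax is synthesized as the midpoint of the two shoulders and the chest as the midpoint of thorax and pelvis, e.g.:

```python
import numpy as np

def add_thorax_and_chest(kpts, l_shoulder=5, r_shoulder=6, l_hip=11, r_hip=12):
    """Append synthetic thorax and chest joints to a (J, 2) keypoint array.

    The joint indices follow the COCO-style ordering of Halpe's first 17 body
    keypoints; adjust them if your layout differs (this is an assumption).
    """
    thorax = (kpts[l_shoulder] + kpts[r_shoulder]) / 2.0   # mid-shoulder point
    pelvis = (kpts[l_hip] + kpts[r_hip]) / 2.0             # mid-hip point
    chest = 0.5 * (thorax + pelvis)                        # halfway down the spine
    return np.vstack([kpts, thorax, chest])
```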
Could you please provide an example latent for testing?
## ❓Question

I am not sure whether this question is about coremltools, the Core ML framework, or Xcode. Here is an example of converting a simple PyTorch model...
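For reference, the kind of conversion I mean looks roughly like this (a minimal sketch, using torchvision's MobileNetV2 and a 1×3×224×224 input as stand-ins for the actual model and shape in question):

```python
import torch
import torchvision
import coremltools as ct

# Trace a small PyTorch model and convert it to a Core ML "ML Program" package.
model = torchvision.models.mobilenet_v2(weights=None).eval()
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    inputs=[ct.TensorType(name="input", shape=example_input.shape)],
)
mlmodel.save("MobileNetV2.mlpackage")
```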
Hello, I am reproducing your paper and ran into the following issues:

1. When cropping faces from the Celebrity-Asian and VggFace2 datasets, there are many blurry faces. How did you handle the blurry data, and roughly how much data did you ultimately use to train the model? This part is not described in detail in the paper.
2. In the last paragraph of **Feature-Level**:
```
After the feature-level fusion, we generate I_low to compute auxiliary loss for better disentangling the identity and attributes. Then we use a 4×...
```
FF++ and DFDC are used for evaluation. [1] For FF++, there are 1000 videos in 500 pairs. For each video, 10 frames are evenly sampled, composing a 10K-frame test...
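For reference, the even frame sampling could be implemented roughly like this (my own OpenCV sketch, not the benchmark's official code; only the 10-frame-per-video scheme comes from the description above):

```python
import cv2
import numpy as np

def sample_frames(video_path, num_frames=10):
    """Evenly sample `num_frames` frames from one video."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, total - 1, num_frames, dtype=int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```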
After preprocessing the HDTF dataset, I got 415 videos. 249 videos (60%) were randomly selected as the training set; the rest (40%) were used as the test set. The first 1500 frames of each...
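A sketch of such a 60/40 random split (the directory layout and the fixed seed are illustrative assumptions, not the exact script I ran; only the ratio over the 415 preprocessed clips comes from the description above):

```python
import random
from pathlib import Path

# Hypothetical layout: one preprocessed HDTF clip per sub-directory.
HDTF_ROOT = Path("data/HDTF_preprocessed")
video_ids = sorted(p.name for p in HDTF_ROOT.iterdir() if p.is_dir())

random.seed(0)  # assumed seed, fixed only for reproducibility
random.shuffle(video_ids)

n_train = round(0.6 * len(video_ids))   # 249 of 415 clips
train_ids, test_ids = video_ids[:n_train], video_ids[n_train:]
```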
In the data dir, there are [data_test.txt](https://github.com/sstzal/DiffTalk/blob/b49a607aa39bb41da215870c2b3dd48e9af9f30d/data/data_test.txt) for validation and [data_train.txt](https://github.com/sstzal/DiffTalk/blob/b49a607aa39bb41da215870c2b3dd48e9af9f30d/data/data_train.txt) for training. How did you split the dataset? By portrait or by video? By portrait I mean that the persons in the training set are...
In [grounded-sam of PREPRO.md](https://github.com/Wangt-CN/DisCo/blob/main/PREPRO.md#grounded-sam), the command for preprocessing images is wrong. I think it should be something like [run_local_test.sh](https://github.com/Wangt-CN/DisCo/blob/main/annotator/grounded-sam/run_local_test.sh). However, run_local_test.sh processes a single image, whereas openpose processes images in directories. What...
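As a workaround, one could loop over the image directory and call the single-image script once per file. A rough sketch (the paths and the script arguments below are illustrative placeholders; the real invocation should be copied from run_local_test.sh):

```python
import subprocess
from pathlib import Path

IMAGE_DIR = Path("datasets/images")    # hypothetical input directory
OUTPUT_DIR = Path("datasets/masks")    # hypothetical output directory
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

for image_path in sorted(IMAGE_DIR.glob("*.png")):
    # Replace the command below with the actual invocation from
    # run_local_test.sh; these arguments are placeholders only.
    subprocess.run(
        ["bash", "annotator/grounded-sam/run_local_test.sh",
         str(image_path), str(OUTPUT_DIR)],
        check=True,
    )
```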
Here is a simple Python wrapper using SourceRawDataInput and TargetRawDataOutput; I hope it helps: https://github.com/quqixun/gpupixel_pywrapper