ganimation_replicate
My own data test is very poor
I used my own data for testing. The results show the following strange behavior; may I ask what is going on here?

Hi @pengweixiang, did you load the pretrained weights provided by this project? If yes, how did you crop the face and extract AU vectors? As mentioned here, this project uses face_recognition to extract the face bounding box and OpenFace to obtain the AU vectors. Kindly follow the same settings if you want to test on your own dataset.
Yes, except that I did not use face_recognition to extract the face bounding box. Could I trouble you to share that part of the code for my reference? Thank you very much!
@pengweixiang, the main function is as below. Note that you need to install the face_recognition package first.
import face_recognition
from PIL import Image

def crop_face(img_path, size=(128, 128)):
    # face_locations returns bounding boxes as (top, right, bottom, left).
    face_im = face_recognition.load_image_file(img_path)
    bboxs = face_recognition.face_locations(face_im)
    im = None
    if len(bboxs) > 0:
        im = Image.fromarray(face_im)
        bbox = bboxs[0]
        # PIL's crop expects (left, top, right, bottom).
        im = im.crop((bbox[3], bbox[0], bbox[1], bbox[2]))
        # Downsample in place so the result fits within `size`;
        # Image.ANTIALIAS is named Image.LANCZOS in Pillow >= 10.
        im.thumbnail(size, Image.ANTIALIAS)
    return im
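For example, it could be called like this (the file paths below are made up for illustration):

# Hypothetical usage of the crop_face function above; paths are illustrative.
face = crop_face("imgs/my_photo.jpg")
if face is not None:
    face.save("imgs/my_photo_cropped.jpg")
else:
    print("No face detected in the input image.")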
Thank you very much for your answer; I will try it first.
It did not work; the problem still exists. Very strange.

Maybe you can try to train the model with your own dataset?
I suspect that it is an OpenFace issue. I used the data you provided to re-generate the AU vectors with version 2.0.5 and compared the two sets of values, finding a certain deviation. After retesting, the results were also worse. May I ask which version of OpenFace you are using?
The version of OpenFace I used for this project is 2.0.4.
I can't find this version online. Can you provide the relevant links? I want to retrain the model here, using the data you provide, to obtain a model with the same performance as the one you released. Are there any requirements for the training parameters?
The source code of OpenFace v2.0.4 can be downloaded from https://github.com/TadasBaltrusaitis/OpenFace/releases/tag/OpenFace_2.0.4, and refer to https://github.com/TadasBaltrusaitis/OpenFace/wiki/Unix-Installation for the installation guide. Good luck.
The training results are not good... It seems that luck is very important.
The result looks like this; I don't know why. Any advice would be appreciated.
Sorry, but I don't have much insight into your code, settings, or experiment environment, so I'm afraid I can't provide any effective suggestions for your case. But if you use the code and dataset provided by this project, it should be able to yield results similar to those shown in the README. I trained and tested it on several different machines before, and they all worked out fine.
I am testing in Google Colab and trying to get this set up there, so the environment should not matter. I am still having trouble with Action Units. If anyone is interested in setting this project up to train and test in Google Colab, feel free to contact me, and then we should all get the same results. @donydchen I can share my Google Colab notebook. Or do you plan to set this up in Google Colab yourself, to make it easy to reproduce and use in any environment?
Hi @ak9250, many thanks for your suggestion. However, I'm busy doing research on other topics these days, so I'm afraid I don't have time to update the project for the time being. For extracting Action Units, you can check out https://github.com/donydchen/ran_replicate/blob/master/tools/extract_au.py for some reference. For using another dataset, you'll need to create a specific dataset class by inheriting base_dataset.py. Basically, you can just copy celeba.py and modify a few lines of code to adapt it to your own dataset, then call your dataset class in data_loader.py. Hope it helps.
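Roughly, such a subclass might look like the sketch below. All class, method, and path names here are guesses for illustration, not the project's actual API, so mirror celeba.py for the real signatures:

# Illustrative sketch only: names below (MyFaceDataset, the BaseDataset import
# path, the initialize signature) are assumptions, not this repo's confirmed API.
from data.base_dataset import BaseDataset  # assumed module path

class MyFaceDataset(BaseDataset):
    """A CelebA-style dataset class pointed at your own images and AU labels."""
    def initialize(self, opt):
        super(MyFaceDataset, self).initialize(opt)
        # Override the image directory / AU annotation paths for your own data
        # here, mirroring whatever celeba.py does with opt.

You would then register this class in data_loader.py wherever the CelebA dataset is currently instantiated.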
Found the problem and got it working successfully. Thank you very much.
@pengweixiang Hi, I just want to know the details of how you fixed the problem. Mine just didn't work after fine-tuning on the CelebA dataset (the output does not change at all), and I don't know why. Thank you.
Hi. I used the CelebA data, the face_recognition package, and OpenFace to test the consistency of the AU values. I found that the way of alignment really affects those values. The call im.thumbnail(size, Image.ANTIALIAS) resizes the image in place (it does not actually return one) to a height and width smaller than 128, and the result happens to be a patch of the corresponding cropped face image that the author provides. Maybe there is some padding trick, or does the version of the face_recognition package matter?
Hi @plutoyuxie, im.thumbnail is self-explanatory: it aims to downsample a given image while preserving its aspect ratio. That means an input image larger than 128x128 will be downsampled until it fits within 128x128, while an image already smaller than 128x128 will retain its original size.
If you'd like to make sure an image is resized to exactly 128x128, kindly check im.resize.
Note that before being fed to the training network, an image is always resized to a specific shape, e.g. 128x128. So the exact size of the images produced in pre-processing may not really matter.
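A small self-contained demo of the difference (the 178x218 input is just CelebA's raw resolution, used here for illustration; exact rounding can vary slightly across Pillow versions):

from PIL import Image

# A blank 178x218 image stands in for a raw CelebA photo.
im = Image.new("RGB", (178, 218))

thumb = im.copy()
thumb.thumbnail((128, 128))      # modifies the image in place, preserving aspect ratio
print(thumb.size)                # (104, 128): fits within 128x128, not exactly 128x128

resized = im.resize((128, 128))  # returns a new image of exactly the target size
print(resized.size)              # (128, 128)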
Hi @donydchen, when I use size=(128, 128) as the parameter, im.thumbnail makes the image size equal to (107, 108) (something like that), so I'm confused. I want to test my own data, but the face alignment is different (after using im.thumbnail and im.resize, my handmade CelebA images are different from yours), so the results turn out to be not that good.
I fixed my preprocessing problem: just delete the line im = im.crop((bbox[3], bbox[0], bbox[1], bbox[2])). Then I get the same cropped face images as the author provides.
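In other words, the preprocessing reduces to something like the following simplified variant of the earlier snippet (untested; shown only to make the one-line fix concrete):

import face_recognition
from PIL import Image

def crop_face(img_path, size=(128, 128)):
    # Same as before, but without the bounding-box crop.
    face_im = face_recognition.load_image_file(img_path)
    im = Image.fromarray(face_im)
    im.thumbnail(size, Image.ANTIALIAS)  # Image.LANCZOS on Pillow >= 10
    return im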
me too
Hi @pengweixiang, I just want to know the details of how you fixed the problem. Mine just didn't work after fine-tuning on the CelebA dataset (the output does not change at all), and I don't know why. Thank you.
See if the default preprocessing setting of your dataset is 'none'. If yes, set it to resize.
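The idea in code, using torchvision as a stand-in (the actual option name in this repo may differ, so treat this as illustrative):

import torchvision.transforms as transforms

# A "none" setting would skip resizing; for a 128x128 model you want a fixed-size input.
preprocess = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])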
@pengweixiang @donydchen Same problem here. How was it solved? Thank you.
The expression AU parameters generated on each system are different, so you need to retrain and adjust accordingly; you cannot use the demo directly.
How are these AU values obtained? Can you provide the OpenFace code used to extract the Action Units? Thanks.
I met the same problem in testing: I succeeded with the CelebA data from gdrive but failed on my own dataset. And I found a way to figure it out: the reason was that I used wrong parameters extracted by OpenFace. Here is my procedure:
- Download and install OpenFace from https://github.com/TadasBaltrusaitis/OpenFace/releases/tag/OpenFace_2.0.4, and crop the images to 128x128.
- Extract the AUs with the command: ./build/bin/FaceLandmarkImg -fdir ../val_set/img_128/ -out_dir ../val_set/aus/ -aus
- Use the code at https://github.com/albertpumarola/GANimation/blob/master/data/prepare_au_annotations.py to extract columns [2:19], as the README says (see the sketch after this list). If you don't add '-aus' in step 2, you will get wrong AU parameters here.
- Prepare the dataset in the same format as the CelebA dataset in gdrive, and test.
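For reference, the column slice in step 3 amounts to roughly the sketch below. The output pickle name is a guess, and the exact CSV layout depends on the OpenFace flags used, so verify the header of your own output files first:

import csv
import glob
import os
import pickle

import numpy as np

aus_dict = {}
for csv_path in glob.glob("../val_set/aus/*.csv"):
    with open(csv_path) as f:
        rows = list(csv.reader(f))
    # With '-aus', each data row starts with a few bookkeeping columns, followed
    # by the 17 AU intensity values; indices 2:19 select those intensities.
    img_name = os.path.splitext(os.path.basename(csv_path))[0]
    aus_dict[img_name] = np.array(rows[1][2:19], dtype=np.float32)

with open("../val_set/aus.pkl", "wb") as f:  # output path is an assumption
    pickle.dump(aus_dict, f)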
Here is my result:

The work is interesting and the pre-trained weights are helpful. Thanks!