audio2photoreal
Suggestion: Ability to set background colour
Hi! I'm wondering how the code could be adapted to set a background colour, for something like chroma-keying (green-screening) the result.
Thanks!
Hi Chris! Thanks for the suggestion!
In case this helps to unblock you, this is possible to do by passing in a reference image (e.g. if it's green or some image background) into this function: https://github.com/facebookresearch/audio2photoreal/blob/aa5803feb0b3e464ff350249c56178ea78b7a325/visualize/ca_body/models/mesh_vae_drivable.py#L285
For instance, if you wanted to add an image background, it would look something like this:
```python
import cv2
import torch as th

# Load the background image (cv2 returns an HxWx3 uint8 array, channels in BGR order).
tmp = cv2.imread("<path/to/image>")

# Build a background tensor with the same shape/dtype/device as the
# reconstructed texture, then copy the image into its first 3 channels.
image_bg = th.zeros_like(tex_rec)
image_bg[:, :3, ...] = (
    th.from_numpy(tmp.transpose(2, 0, 1)).float().cuda().unsqueeze(0)
)
```
Or, if you wanted a solid color, just fill image_bg with an RGB value. There are some color-calibration steps you'll need to do to get it to match green (like inverting this white-balancing function: https://github.com/facebookresearch/audio2photoreal/blob/aa5803feb0b3e464ff350249c56178ea78b7a325/visualize/ca_body/utils/image.py#L19), but this should be the general idea. Hope this helps for the time being!
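To make the chroma-key idea concrete, here is a minimal NumPy sketch of the compositing step being described. All names here (`foreground`, `alpha`, the tiny 4×4 shapes) are placeholders for illustration, not variables from this repo; in the actual pipeline the renderer produces the foreground and mask.

```python
import numpy as np

# Hypothetical render output: a (H, W, 3) float image and a (H, W) alpha mask.
H, W = 4, 4
foreground = np.full((H, W, 3), 128.0)   # placeholder for the rendered avatar
alpha = np.zeros((H, W))
alpha[1:3, 1:3] = 1.0                    # pretend the avatar covers the center

# Solid chroma-key green background. Note: if this is later fed to OpenCV,
# the channel order is BGR, but pure green is (0, 255, 0) either way.
green = np.array([0.0, 255.0, 0.0])
background = np.broadcast_to(green, (H, W, 3))

# Standard alpha compositing: out = a * fg + (1 - a) * bg.
composite = alpha[..., None] * foreground + (1 - alpha[..., None]) * background
```

Pixels outside the avatar come out as pure green, ready for keying; the color-calibration caveat above still applies, since the repo's rendering path white-balances its output.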
Hi @evonneng, thanks for the explanation.
I'd like to know what the tex_rec variable in your example refers to. How should I pass it in?
I actually want to replace the animated characters in the demo with images of myself. Is that possible?
Unfortunately, you cannot change the identity of the avatars. Only the four pre-trained models are available.
Thanks a lot! And if I want to change the identity of the avatars, what should I do?
Closing this out, as the folks above will probably want to open new GitHub issues to track their questions.