
Unable to retarget eyes separately by left eye or right eye.

Open hoveychen opened this issue 1 year ago • 9 comments

In the current implementation, although the detected eye ratios have separate values for the left and right eye, only the driving left-eye ratio is used when invoking the retargeting module.

Ref code here: https://github.com/KwaiVGI/LivePortrait/blob/54e50986b232fc3f382f20924cdff675c0ce729d/src/live_portrait_wrapper.py#L308

Any plan to add an option to retarget the eyes using separate left/right eye ratios from the driving video? IMO, we could invoke the retargeting module twice, once per eye, and then merge both sets of delta 3DMM keypoints (see the sketch below).

hoveychen avatar Jul 17 '24 06:07 hoveychen
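A rough sketch of that two-pass idea, with hypothetical names (wrapper, ratio_left, ratio_right) and a plain average standing in for the per-keypoint blending shown later in the thread:

    # Two-pass sketch: run eye retargeting once per driving eye, then merge the
    # resulting keypoint deltas. `wrapper`, `ratio_left`, and `ratio_right` are
    # hypothetical placeholders for the wrapper instance and the two combined
    # eye-ratio tensors built from the driving left/right eye values.
    eyes_delta_left = wrapper.retarget_eye(x_s, ratio_left)
    eyes_delta_right = wrapper.retarget_eye(x_s, ratio_right)
    eyes_delta = 0.5 * (eyes_delta_left + eyes_delta_right)  # simplest possible merge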

mark

ak01user avatar Jul 17 '24 13:07 ak01user

[image] Tried modifying the code, and it works as expected.

hoveychen avatar Jul 17 '24 13:07 hoveychen

Could you share your modified code, please?

ak01user avatar Jul 18 '24 02:07 ak01user

Sure. Since I don't have any documentation on the meaning of the landmarks, I guess it is something similar to https://github.com/anilbas/BFMLandmarks/blob/master/img/21.jpg

            # Compute the combined eye ratio for the left and right eye separately
            # (calc_combined_eye_ratio_dual is a custom variant of the upstream
            # calc_combined_eye_ratio that returns one tensor per eye).
            combined_eye_ratio_tensor_left, combined_eye_ratio_tensor_right = (
                self.live_portrait_wrapper.calc_combined_eye_ratio_dual(
                    c_d_eyes_i, source_lmk
                )
            )
            # ∆_eyes,i = R_eyes(x_s; c_s,eyes, c_d,eyes,i), run once per eye
            eyes_delta_left = self.live_portrait_wrapper.retarget_eye(
                x_s, combined_eye_ratio_tensor_left
            )
            eyes_delta_right = self.live_portrait_wrapper.retarget_eye(
                x_s, combined_eye_ratio_tensor_right
            )

            # Per-keypoint blending weights over the 21 implicit keypoints:
            # 1 -> use the left-eye delta, 0 -> use the right-eye delta, 0.5 -> average.
            weights = torch.tensor(
                [1, 1, 1, 0, 0, 0,
                 0.5, 0.5, 0.5, 0.5, 0.5,
                 1, 0.5, 0.5, 0.5,
                 0, 0.5, 0.5, 0.5, 0.5, 0.5]
            ).view(21, 1).to(device)

            eyes_delta_left = eyes_delta_left.reshape(-1, 3)
            eyes_delta_right = eyes_delta_right.reshape(-1, 3)

            # Blend the two deltas keypoint by keypoint.
            eyes_delta = eyes_delta_left * weights + eyes_delta_right * (1 - weights)

hoveychen avatar Jul 18 '24 02:07 hoveychen
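Note that calc_combined_eye_ratio_dual in the snippet above is not part of the upstream wrapper and is not shown in the thread. A minimal sketch of what such a helper could look like, assuming it mirrors the upstream calc_combined_eye_ratio but keeps both driving eye values; the [0][0]/[0][1] indexing and tensor layout are assumptions, and the source eye-close ratio is passed in here (rather than computed from source_lmk) to keep the sketch self-contained:

    import torch

    def calc_combined_eye_ratio_dual(c_s_eyes_tensor, c_d_eyes_i, device="cuda"):
        # Hypothetical dual variant of calc_combined_eye_ratio (a sketch, not the
        # repo's API). c_s_eyes_tensor is the source eye-close ratio tensor,
        # computed the same way the upstream single-eye version does it.
        # c_d_eyes_i holds the per-frame driving eye ratios; [0][0] is assumed to
        # be the left eye and [0][1] the right eye (this indexing is a guess).
        c_d_left = torch.tensor([[float(c_d_eyes_i[0][0])]], device=device)
        c_d_right = torch.tensor([[float(c_d_eyes_i[0][1])]], device=device)
        # Same [c_s,eyes, c_d,eyes,i] layout as the single-eye version, built once
        # with the left driving value and once with the right.
        combined_left = torch.cat([c_s_eyes_tensor.to(device), c_d_left], dim=1)
        combined_right = torch.cat([c_s_eyes_tensor.to(device), c_d_right], dim=1)
        return combined_left, combined_right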

It worked, thanks a lot. So we can try more things by editing the 21 landmarks; that is very cool.

ak01user avatar Jul 18 '24 06:07 ak01user

Hi, I want to edit param_weights. How did you determine the values of param_weights? @hoveychen

ak01user avatar Jul 18 '24 07:07 ak01user

Good question. As I mentioned above, I don't know the exact mapping of the landmarks (which is what the mask weights are composed from). I just guessed by trying different eye retargeting ratios. I assumed that indices 0-2 are the left eyebrow, 3-5 are the right eyebrow, 11 is the left eyeball, and 15 is the right eyeball.

hoveychen avatar Jul 18 '24 11:07 hoveychen

[image] My previous comment was not correct. Check out this one.

Indices 11-13: right eye; 14-16: left eye; 12 and 15 are the eyeballs.

hoveychen avatar Jul 22 '24 04:07 hoveychen
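With that corrected mapping, the blending weights from the earlier snippet could also be built from named index groups instead of a hard-coded list. A small sketch, where the index groups are the guesses from this thread (not documented by the repo) and the resulting weights intentionally differ from the earlier hard-coded values, which followed the first guess:

    import torch

    NUM_KP = 21  # number of implicit keypoints

    # Guessed index groups from this thread (not official documentation):
    # 11-13 -> right eye (12 is the right eyeball),
    # 14-16 -> left eye (15 is the left eyeball).
    RIGHT_EYE_KP = [11, 12, 13]
    LEFT_EYE_KP = [14, 15, 16]

    # Start from an even blend of the two per-eye deltas, then drive the left-eye
    # keypoints fully from the left-eye pass and the right-eye keypoints fully from
    # the right-eye pass (weight 1 -> eyes_delta_left, weight 0 -> eyes_delta_right).
    weights = torch.full((NUM_KP, 1), 0.5)
    weights[LEFT_EYE_KP] = 1.0
    weights[RIGHT_EYE_KP] = 0.0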

Hello, is this a visualization of the implicit keypoints in the paper?

wangli000 avatar Jun 27 '25 07:06 wangli000