Deep-Live-Cam
feat: Implement hair swapping and enhance realism
This commit introduces the capability to swap hair along with the face from a source image to a target image/video or live webcam feed.
Key changes include:
- Hair Segmentation:
  - Integrated the `isjackwild/segformer-b0-finetuned-segments-skin-hair-clothing` model from Hugging Face via the `transformers` library.
  - Added `modules/hair_segmenter.py` with a `segment_hair` function to produce a binary hair mask from an image.
  - Updated `requirements.txt` with `transformers`.
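For illustration, the mask-extraction step inside `segment_hair` might reduce the model's per-pixel class predictions to a binary mask roughly like this (a minimal numpy sketch; the `hair_mask_from_labels` helper and the hair class id are assumptions for this example, not the repository's actual code):

```python
import numpy as np

# Hypothetical hair class id; the real id depends on the model's label map.
HAIR_CLASS_ID = 2

def hair_mask_from_labels(labels: np.ndarray, hair_id: int = HAIR_CLASS_ID) -> np.ndarray:
    """Reduce a per-pixel class-label map to a binary (0/255) hair mask."""
    return np.where(labels == hair_id, 255, 0).astype(np.uint8)

# Tiny 3x3 label map: two pixels predicted as hair (class 2).
labels = np.array([[0, 2, 0],
                   [0, 2, 0],
                   [1, 1, 1]])
mask = hair_mask_from_labels(labels)
```

In the real pipeline the label map would come from running the Segformer model on the source image; only the thresholding step is shown here.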
- Combined Face-Hair Mask:
  - Implemented `create_face_and_hair_mask` in `modules/processors/frame/face_swapper.py` to generate a unified mask covering both the face (from landmarks) and the segmented hair of the source image.
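Conceptually, `create_face_and_hair_mask` has to union a landmark-derived face mask with the segmented hair mask. The union step alone can be sketched as follows (the face mask itself would come from rasterising the facial landmarks, e.g. filling their convex hull, which is omitted here; `combine_face_and_hair` is a hypothetical name):

```python
import numpy as np

def combine_face_and_hair(face_mask: np.ndarray, hair_mask: np.ndarray) -> np.ndarray:
    """Union of two binary (0/255) masks covering the face and hair regions."""
    return np.maximum(face_mask, hair_mask)

# Toy 4x4 example: a 2x2 face region plus a hair band above it.
face_mask = np.zeros((4, 4), dtype=np.uint8)
face_mask[1:3, 1:3] = 255
hair_mask = np.zeros((4, 4), dtype=np.uint8)
hair_mask[0, :] = 255
combined = combine_face_and_hair(face_mask, hair_mask)
```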
- Enhanced Swapping Logic:
  - Modified `swap_face` and related processing functions (`process_frame`, `process_frame_v2`, `process_frames`, `process_image`) to utilize the full source image (`source_frame_full`).
  - The `swap_face` function now performs the standard face swap and then:
    - Segments hair from `source_frame_full`.
    - Warps the hair and its mask to the target face's position using an affine transformation estimated from facial landmarks.
    - Applies color correction (`apply_color_transfer`) to the warped hair.
    - Blends the hair onto the target frame, preferably using `cv2.seamlessClone` for improved realism.
  - Existing mouth-mask logic is preserved and applied to the final composited frame.
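The color-correction and blending steps above can be approximated as follows. This is a simplified sketch: per-channel mean/std matching in RGB stands in for whatever `apply_color_transfer` actually does (LAB-space matching is more common), and a plain masked alpha blend stands in for the `cv2.seamlessClone` path:

```python
import numpy as np

def color_transfer(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Match each channel's mean/std of `source` to `target` (Reinhard-style,
    done in RGB here for simplicity)."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        scale = t_std / s_std if s_std > 1e-6 else 0.0
        out[..., c] = (src[..., c] - s_mean) * scale + t_mean
    return np.clip(out, 0, 255).astype(np.uint8)

def blend(target: np.ndarray, overlay: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Masked alpha blend of `overlay` onto `target`; `mask` is 0/255 uint8.
    A crude fallback for the seamlessClone path."""
    alpha = (mask.astype(np.float64) / 255.0)[..., None]
    mixed = target * (1.0 - alpha) + overlay * alpha
    return mixed.astype(np.uint8)

# Example: shift a brighter source patch toward a darker target's statistics,
# then blend a flat overlay onto the left column only.
source = np.array([[[100] * 3, [140] * 3],
                   [[100] * 3, [140] * 3]], dtype=np.uint8)
target = np.array([[[60] * 3, [80] * 3],
                   [[60] * 3, [80] * 3]], dtype=np.uint8)
corrected = color_transfer(source, target)   # 100 -> 60, 140 -> 80
overlay = np.full((2, 2, 3), 200, dtype=np.uint8)
hair_mask = np.array([[255, 0], [255, 0]], dtype=np.uint8)
result = blend(target, overlay, hair_mask)
```

`cv2.seamlessClone` additionally solves a Poisson blending problem at the mask boundary, which is what gives the improved realism the commit refers to; the alpha blend above is only the conceptual fallback.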
- Webcam Integration:
  - Updated the webcam processing loop in `modules/ui.py` (`create_webcam_preview`) to correctly load and pass `source_frame_full` to the frame processors, enabling hair swapping in live webcam mode.
  - Added error handling for source-image loading in webcam mode.
This set of changes addresses the request for more realistic face swaps that include hair. Further testing and tuning of the blending parameters may be needed for optimal results across all scenarios.
Summary by Sourcery
Implement hair swapping and realism enhancements by integrating a Segformer-based hair segmentation model, updating the face swap pipeline to warp, color-correct, and blend hair in addition to faces, and extending this functionality to images, videos, and live webcam feeds.
New Features:
- Integrate a hair segmentation model to generate hair masks from source images
- Enable hair swapping alongside face swapping by warping, color-correcting, and blending hair onto targets
- Extend live webcam mode to support hair swapping using the full source image
Enhancements:
- Refactor face swapping functions to accept the full source frame and integrate hair blending logic
- Implement seamlessClone-based blending with affine warping and color transfer for improved realism
- Update frame processors and UI to correctly load and pass the full source image for hair processing
- Add create_face_and_hair_mask utility to combine facial landmark masks with segmented hair
Build:
- Add transformers dependency to requirements.txt