
feat: Implement hair swapping and enhance realism

rehanbgmi opened this issue 6 months ago • 7 comments

This commit introduces the capability to swap hair along with the face from a source image to a target image/video or live webcam feed.

Key changes include:

  1. Hair Segmentation:

    • Integrated the isjackwild/segformer-b0-finetuned-segments-skin-hair-clothing model from Hugging Face using the transformers library.
    • Added modules/hair_segmenter.py with a segment_hair function that produces a binary hair mask from an image (a minimal sketch of this function appears after this list).
    • Updated requirements.txt with transformers.
  2. Combined Face-Hair Mask:

    • Implemented create_face_and_hair_mask in modules/processors/frame/face_swapper.py to generate a unified mask covering both the face (from landmarks) and the segmented hair of the source image (see the combined-mask sketch after this list).
  3. Enhanced Swapping Logic:

    • Modified swap_face and related processing functions (process_frame, process_frame_v2, process_frames, process_image) to utilize the full source image (source_frame_full).
    • The swap_face function now performs the standard face swap and then:
      • Segments hair from the source_frame_full.
      • Warps the hair and its mask to the target face's position using an affine transformation estimated from facial landmarks.
      • Applies color correction (apply_color_transfer) to the warped hair.
      • Blends the hair onto the target frame, preferably using cv2.seamlessClone for improved realism (a warping-and-blending sketch follows this list).
    • Existing mouth mask logic is preserved and applied to the final composited frame.
  4. Webcam Integration:

    • Updated the webcam processing loop in modules/ui.py (create_webcam_preview) to correctly load and pass the source_frame_full to the frame processors (a simplified loop sketch follows this list).
    • This enables hair swapping in live webcam mode.
    • Added error handling for source image loading in webcam mode.
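
For illustration, here is a minimal sketch of what segment_hair in modules/hair_segmenter.py could look like, built on the transformers image-segmentation pipeline. The exact class label exposed by the model ("hair") and the module-level pipeline caching are assumptions, not necessarily the committed implementation:

```python
# Hypothetical sketch of modules/hair_segmenter.py; the "hair" label name is an
# assumption about the model's id2label mapping and may need adjusting.
import cv2
import numpy as np
from PIL import Image
from transformers import pipeline

# Load the segmentation pipeline once when the module is imported.
_segmenter = pipeline(
    "image-segmentation",
    model="isjackwild/segformer-b0-finetuned-segments-skin-hair-clothing",
)


def segment_hair(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a binary (0/255) hair mask with the same height/width as frame_bgr."""
    # The pipeline expects an RGB PIL image; OpenCV frames are BGR numpy arrays.
    image = Image.fromarray(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    results = _segmenter(image)  # list of {"label": str, "mask": PIL.Image, ...}

    mask = np.zeros(frame_bgr.shape[:2], dtype=np.uint8)
    for result in results:
        if "hair" in result["label"].lower():  # assumed label name
            mask = np.maximum(mask, np.array(result["mask"], dtype=np.uint8))
    return mask
```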
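A minimal sketch of how create_face_and_hair_mask could combine the landmark-based face mask with the segmented hair. The landmark attribute name (landmark_2d_106, as used by insightface Face objects) and the edge-softening blur are assumptions:

```python
# Hypothetical sketch of create_face_and_hair_mask; attribute names are assumed.
import cv2
import numpy as np

from modules.hair_segmenter import segment_hair


def create_face_and_hair_mask(source_face, source_frame: np.ndarray) -> np.ndarray:
    """Combine a landmark-based face mask with the segmented hair mask."""
    mask = np.zeros(source_frame.shape[:2], dtype=np.uint8)

    # Face region: fill the convex hull of the 2D facial landmarks.
    landmarks = getattr(source_face, "landmark_2d_106", None)
    if landmarks is not None:
        hull = cv2.convexHull(np.round(landmarks).astype(np.int32))
        cv2.fillConvexPoly(mask, hull, 255)

    # Hair region: union with the binary mask from the segmentation model.
    hair_mask = segment_hair(source_frame)
    mask = cv2.bitwise_or(mask, hair_mask)

    # Soften edges so later blending does not show hard seams (optional).
    return cv2.GaussianBlur(mask, (15, 15), 0)
```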
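A sketch of the hair warping, color correction, and blending step that swap_face performs after the regular face swap. The helper name blend_source_hair and the Reinhard-style implementation of apply_color_transfer are illustrative assumptions; only the overall sequence (affine warp from landmark correspondences, color transfer, seamlessClone with a fallback) mirrors the description above:

```python
# Hypothetical sketch of the hair compositing step; names are illustrative only.
import cv2
import numpy as np


def apply_color_transfer(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Reinhard-style LAB mean/std transfer (one plausible implementation).

    A production version would compute the statistics only over the hair region
    instead of the whole image."""
    src = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype(np.float32)
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    tgt_mean, tgt_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    result = (src - src_mean) * (tgt_std / src_std) + tgt_mean
    return cv2.cvtColor(np.clip(result, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)


def blend_source_hair(
    swapped_frame: np.ndarray,      # target frame after the regular face swap
    source_frame_full: np.ndarray,  # full source image containing the hair
    hair_mask: np.ndarray,          # binary hair mask from segment_hair()
    source_landmarks: np.ndarray,   # (N, 2) source facial landmarks
    target_landmarks: np.ndarray,   # (N, 2) target facial landmarks
) -> np.ndarray:
    # 1. Estimate an affine transform mapping source landmarks onto target landmarks.
    matrix, _ = cv2.estimateAffinePartial2D(
        source_landmarks.astype(np.float32), target_landmarks.astype(np.float32)
    )
    if matrix is None:
        return swapped_frame  # fall back to the plain face swap

    # 2. Warp the source image and its hair mask into the target frame's geometry.
    h, w = swapped_frame.shape[:2]
    warped_hair = cv2.warpAffine(source_frame_full, matrix, (w, h))
    warped_mask = cv2.warpAffine(hair_mask, matrix, (w, h))
    warped_mask = np.where(warped_mask > 127, 255, 0).astype(np.uint8)
    if cv2.countNonZero(warped_mask) == 0:
        return swapped_frame  # no hair visible after warping

    # 3. Color-correct the warped hair toward the already-swapped target frame.
    warped_hair = apply_color_transfer(warped_hair, swapped_frame)

    # 4. Blend with seamlessClone, centred on the warped hair region; fall back
    #    to a plain alpha blend if Poisson blending fails (e.g. near borders).
    x, y, bw, bh = cv2.boundingRect(warped_mask)
    center = (x + bw // 2, y + bh // 2)
    try:
        return cv2.seamlessClone(
            warped_hair, swapped_frame, warped_mask, center, cv2.NORMAL_CLONE
        )
    except cv2.error:
        alpha = (warped_mask.astype(np.float32) / 255.0)[..., None]
        blended = warped_hair * alpha + swapped_frame * (1.0 - alpha)
        return blended.astype(np.uint8)
```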
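Finally, a greatly simplified sketch of how the webcam loop in modules/ui.py can load the full source image once, handle load failures, and pass source_frame_full to the frame processor on every iteration. The process_frame signature and modules.globals.source_path shown here are assumptions for illustration, not the actual UI code:

```python
# Greatly simplified, hypothetical webcam loop; the real create_webcam_preview
# also drives the CTk preview window and the configured frame processors.
import cv2

import modules.globals
from modules.processors.frame.face_swapper import process_frame


def create_webcam_preview() -> None:
    # Load the full source image once; bail out with a clear error if it fails.
    source_frame_full = cv2.imread(modules.globals.source_path)
    if source_frame_full is None:
        print(f"Error: could not read source image at {modules.globals.source_path}")
        return

    capture = cv2.VideoCapture(0)
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        # Pass the full source frame so hair segmentation and blending run per frame.
        output = process_frame(source_frame_full, frame)  # assumed signature
        cv2.imshow("Preview", output)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    capture.release()
    cv2.destroyAllWindows()
```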

This set of changes addresses your request for more realistic face swaps that include hair. Further testing and refinement of blending parameters may be beneficial for optimal results across all scenarios.

Summary by Sourcery

Implement hair swapping and realism enhancements by integrating a Segformer-based hair segmentation model, updating the face swap pipeline to warp, color-correct, and blend hair in addition to faces, and extending this functionality to images, videos, and live webcam feeds.

New Features:

  • Integrate a hair segmentation model to generate hair masks from source images
  • Enable hair swapping alongside face swapping by warping, color-correcting, and blending hair onto targets
  • Extend live webcam mode to support hair swapping using the full source image

Enhancements:

  • Refactor face swapping functions to accept the full source frame and integrate hair blending logic
  • Implement seamlessClone-based blending with affine warping and color transfer for improved realism
  • Update frame processors and UI to correctly load and pass the full source image for hair processing
  • Add create_face_and_hair_mask utility to combine facial landmark masks with segmented hair

Build:

  • Add transformers dependency to requirements.txt

rehanbgmi · May 21 '25 18:05