
Sharpness of the restored face

Open andyderuyter opened this issue 3 years ago • 14 comments

Is there a way for the fixed face to be sharper in the results? As you can see in the fixed result, the transition to the sharp hair on the top of her head is pretty harsh and the overall sharpness of the face is greatly reduced compared to the sharpness of the input image.

Input:

SinCity_Mulan_full_body_Disney_princess_from_Mulan_character_be_727fc4b2-be79-44be-9b3c-f7cd8f1877e5

Fixed with CodeFormer. This is with 0.5 fidelity and with background upscale on:

SinCity_Mulan_full_body_Disney_princess_from_Mulan_character_be_727fc4b2-be79-44be-9b3c-f7cd8f1877e5

Sharp area (top) going to unsharp area below (face fixed):

Screenshot at Sept 02 08-48-06

Thanks for looking into this :)

andyderuyter avatar Sep 02 '22 06:09 andyderuyter

I can confirm. The result is less sharp in the restored area.

MarcusAdams avatar Sep 02 '22 07:09 MarcusAdams

The input image has a large resolution of 1536x1024, which is beyond what our model supports: it outputs faces at a fixed 512x512. Face restoration models are originally designed to restore real low-quality faces, which usually have a resolution much lower than 512, so most models fix the input and output face resolution at 512, and CodeFormer does the same.

This is why the result is less sharp when running inference on an image with a resolution much larger than 512.

sczhou avatar Sep 02 '22 07:09 sczhou

Ok, so downscaling the image first to around 512px (max width) is an option then?

andyderuyter avatar Sep 02 '22 07:09 andyderuyter

I just tried with a resized image of the same picture. 512px wide. This is the result:

SinCity_Mulan_full_body_Disney_princess_from_Mulan_character_be_727fc4b2-be79-44be-9b3c-f7cd8f1877e5

The restored face remains unsharp.

andyderuyter avatar Sep 02 '22 08:09 andyderuyter
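For reference, a minimal pre-resize step could look like the following. This is just a sketch assuming Pillow is available; `downscale_to_512` is an illustrative helper, not part of CodeFormer:

```python
from PIL import Image

def downscale_to_512(img: Image.Image) -> Image.Image:
    """Scale so the longest side is at most 512 px, preserving aspect ratio."""
    scale = 512 / max(img.size)
    if scale >= 1:
        return img  # already small enough
    new_size = (round(img.width * scale), round(img.height * scale))
    return img.resize(new_size, Image.LANCZOS)

# A 1536x1024 input (as in this thread) comes out at 512x341
small = downscale_to_512(Image.new("RGB", (1536, 1024)))
print(small.size)  # (512, 341)
```

As the follow-up below shows, though, downscaling alone doesn't solve the sharpness mismatch.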

The whole dataset was 512x512, so you'll only ever get an output that resolves a certain level of detail/sharpness as a result of that. I'm not sure if the devs ever plan on releasing a higher-resolution model, but that would potentially require substantially more VRAM. Bear in mind that we are using CodeFormer outside of its original intended purpose when using it for AI art; the optimisations made were a reaction to good community feedback on results. Would I want sharper results too? Sure, but you have to be realistic about the tools at hand at the same time.

kalkal11 avatar Sep 02 '22 11:09 kalkal11

Don't know if it's possible, as I'm not fluent in Python... But how about some sharpening levels (kind of like the fidelity slider) that are applied after the face restoration (only on the part that is restored) and before that restored part is pasted back onto the image?

PS: I do appreciate the answers and feedback, thanks for that! :)

andyderuyter avatar Sep 02 '22 12:09 andyderuyter
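For what it's worth, the idea above can be prototyped outside of CodeFormer: apply an unsharp mask to only the restored crop before compositing it back. A rough sketch with Pillow; the box coordinates and strength values are placeholders for illustration, not CodeFormer internals:

```python
from PIL import Image, ImageFilter

def sharpen_patch(full, box, radius=2, percent=120, threshold=3):
    """Unsharp-mask only the region inside `box`, then paste it back."""
    patch = full.crop(box).filter(
        ImageFilter.UnsharpMask(radius=radius, percent=percent, threshold=threshold)
    )
    out = full.copy()
    out.paste(patch, box[:2])  # paste at the box's top-left corner
    return out

# Example: sharpen a 256x256 face region inside a larger frame
frame = Image.new("RGB", (512, 512), "gray")
result = sharpen_patch(frame, (128, 128, 384, 384))
```

A hard paste like this can leave a visible seam at the box edge; feathering the paste with an alpha mask would hide the transition.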

Hi all @andyderuyter @MarcusAdams @caacoe, I added the face upsampling `--face_upsample` option for high-resolution AI-created faces. Please have a try! e.g., `python inference_codeformer.py --w 0.7 --test_path inputs/user_upload --bg_upsampler realesrgan --face_upsample`

sczhou avatar Sep 04 '22 08:09 sczhou

The result of using `--face_upsample`:

0000-up

sczhou avatar Sep 04 '22 08:09 sczhou

@sczhou , nice work. I look forward to trying it out! Thank you so much!

MarcusAdams-v006200 avatar Sep 04 '22 18:09 MarcusAdams-v006200

@sczhou well that was certainly a 'hold my beer' moment. Thank you!

kalkal11 avatar Sep 04 '22 20:09 kalkal11

@sczhou Thanks, this was much needed and the AI community will also be thankful as well!

andyderuyter avatar Sep 04 '22 21:09 andyderuyter

@sczhou I'm not seeing a difference. I tried --face_upsample with weight 1.0 and weight 7.0, with both --upscale 1 and --upscale 2, but I can't discern a difference between the two images. I even tried with reducing the size of the images first. I wonder if the right code got checked in. These are weight 0.7 with --upscale 2, first no --face_upsample, then with: Small_CodeFormer_0 7_2x Small_CodeFormer_0 7_2x_upsampled

MarcusAdams avatar Sep 05 '22 00:09 MarcusAdams

> @sczhou I'm not seeing a difference. I tried --face_upsample with weight 1.0 and weight 7.0, with both --upscale 1 and --upscale 2, but I can't discern a difference between the two images. I even tried with reducing the size of the images first. I wonder if the right code got checked in.

~~Hi, please make sure you use `--face_upsample` and `--bg_upsampler realesrgan` together in the command, since the face upsampler is initialized with the same realesrgan as the background upsampler.~~


Update:

  • --face_upsample can be used solely now.
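With that change, the earlier command (the weight and path are just the example values from this thread) reduces to:

```shell
python inference_codeformer.py --w 0.7 --test_path inputs/user_upload --face_upsample
```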

sczhou avatar Sep 05 '22 05:09 sczhou

> `--face_upsample` can be used solely now.

@sczhou deserves a Nobel Peace Prize for this. It changes human history.

Most AI art today is in high resolution. Being able to upscale only the face without affecting the background resolution is crucial and critical.

It's December 2023 now. Why haven't the other players, namely GFPGAN, GPEN, RestoreFormer, come up with this brilliant idea?

Why is @sczhou the only person in the AI community offering this cutting-edge piece of tech?

TechVillain avatar Dec 11 '23 23:12 TechVillain