
Feature Request: offload memory usage to storage

Open · aont opened this issue 2 years ago · 0 comments

I used Real-ESRGAN on Google Colab and wanted to upscale an image 16x. I tried the tile option, but it inevitably crashed partway through. Examining the code, I found that paste_faces_to_input_image in facexlib's face_restoration_helper.py was allocating a huge amount of memory. So I decided to offload the memory usage to storage, and implemented this with NumPy's memmap and numexpr. This greatly reduced memory consumption and made 16x upscaling of the image possible on Google Colab.

Would you be interested in incorporating this feature?
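
For reference, here is a minimal standalone sketch of the idea (not the actual patch below): the upscaled background image lives in a disk-backed np.memmap, and numexpr evaluates the blending expression chunk-wise, writing the result directly into that buffer instead of materialising large float temporaries in RAM. The sizes, scratch path, and random stand-in data are made up for illustration only.

```python
import os

import numexpr as ne
import numpy as np

# Illustrative sizes and scratch path (the patch itself hardcodes a path under /content on Colab).
h_up, w_up = 2048, 2048
memmap_path = "upsample_img.dat"

# Keep the upscaled background image in a disk-backed buffer instead of RAM.
upsample_img = np.memmap(memmap_path, shape=(h_up, w_up, 3), dtype="float64", mode="w+")
upsample_img[:] = np.random.rand(h_up, w_up, 3) * 255  # stand-in for the resized input image

# Stand-ins for a restored face warped back to full resolution and its soft blending mask,
# with the same shapes the helper uses: (H, W, 3) face, (H, W, 1) mask.
pasted_face = np.random.rand(h_up, w_up, 3) * 255
inv_soft_mask = np.random.rand(h_up, w_up, 1)

# numexpr evaluates the blend in chunks and writes straight into the memmap,
# so the large intermediate arrays of the plain NumPy expression never materialise in RAM.
ne.evaluate("inv_soft_mask * pasted_face + (1 - inv_soft_mask) * upsample_img",
            out=upsample_img)

# Convert to the final 8-bit image in memory, then drop the scratch file.
result = np.asarray(upsample_img).astype(np.uint8)
del upsample_img
os.unlink(memmap_path)
print(result.shape, result.dtype)
```

The actual change I made to paste_faces_to_input_image is below.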

--- face_restoration_helper_a.py
+++ face_restoration_helper_b.py
@@ -279,6 +279,7 @@
         self.restored_faces.append(face)
 
     def paste_faces_to_input_image(self, save_path=None, upsample_img=None):
+        import numexpr as ne
         h, w, _ = self.input_img.shape
         h_up, w_up = int(h * self.upscale_factor), int(w * self.upscale_factor)
 
@@ -288,6 +289,12 @@
         else:
             upsample_img = cv2.resize(upsample_img, (w_up, h_up), interpolation=cv2.INTER_LANCZOS4)
 
+        upsample_img_orig = upsample_img
+        upsample_img_memmap_fn = "/content/upsample_img.npy"
+        upsample_img = np.memmap(upsample_img_memmap_fn, shape=upsample_img.shape, dtype="float64", mode="w+")
+        upsample_img[:] = upsample_img_orig
+        del upsample_img_orig
+
         assert len(self.restored_faces) == len(
             self.inverse_affine_matrices), ('length of restored_faces and affine_matrices are different.')
         for restored_face, inverse_affine in zip(self.restored_faces, self.inverse_affine_matrices):
@@ -352,12 +359,14 @@
                 upsample_img = inv_soft_mask * pasted_face + (1 - inv_soft_mask) * upsample_img[:, :, 0:3]
                 upsample_img = np.concatenate((upsample_img, alpha), axis=2)
             else:
-                upsample_img = inv_soft_mask * pasted_face + (1 - inv_soft_mask) * upsample_img
+                ne.evaluate("inv_soft_mask * pasted_face + (1 - inv_soft_mask) * upsample_img", out=upsample_img)
 
         if np.max(upsample_img) > 256:  # 16-bit image
             upsample_img = upsample_img.astype(np.uint16)
         else:
             upsample_img = upsample_img.astype(np.uint8)
+        import os
+        os.unlink(upsample_img_memmap_fn)
         if save_path is not None:
             path = os.path.splitext(save_path)[0]
             save_path = f'{path}.{self.save_ext}'

aont · Jan 06 '23 13:01