CodeFormer
Codeformer load times increasing exponentially when re-using model
I'm trying to integrate CodeFormer with Stable Horde. The image processes and upscales within a couple of seconds the first time around, but the processing time keeps increasing roughly exponentially on repeated calls unless the model is unloaded from VRAM and reloaded.
Script used:

```python
import time

import PIL.Image  # note: a bare `import PIL` does not reliably expose PIL.Image
import torch

from codeformer import CodeFormer
from nataili.util.logger import logger

model_name = "CodeFormers"
logger.init(f"Model: {model_name}", status="Loading")
model = CodeFormer(
    weights=None,
    upscale=1,
).to(torch.device("cuda"))
logger.init_ok(f"Loading {model_name}", status="Success")

image = PIL.Image.open("./01.png").convert("RGB")

for iter in range(10):
    tick = time.time()
    results = model(image)
    logger.init_ok(f"Job Completed. Took {time.time() - tick} seconds", status="Success")
```
Results:

```
INIT       | Success    | Loading CodeFormers
INIT       | Success    | Job Completed. Took 2.1888020038604736 seconds
INIT       | Success    | Job Completed. Took 0.8321239948272705 seconds
INIT       | Success    | Job Completed. Took 1.8098182678222656 seconds
INIT       | Success    | Job Completed. Took 3.298784017562866 seconds
INIT       | Success    | Job Completed. Took 5.511521100997925 seconds
INIT       | Success    | Job Completed. Took 8.553786277770996 seconds
INIT       | Success    | Job Completed. Took 12.261128425598145 seconds
INIT       | Success    | Job Completed. Took 17.00784397125244 seconds
INIT       | Success    | Job Completed. Took 23.05900287628174 seconds
INIT       | Success    | Job Completed. Took 30.801554679870605 seconds
```
Workaround found: moving the `model = CodeFormer(...)` construction inside the loop, so the model is re-created on every iteration, keeps each call fast.
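For what it's worth, the fact that re-creating the model resets the timing suggests some per-call state accumulates inside the long-lived model object. I haven't confirmed this in the CodeFormer source, so treat it as a hypothesis; the `LeakyModel` class below is a made-up stand-in, not CodeFormer code. A minimal sketch of that failure mode and why the workaround fixes it:

```python
class LeakyModel:
    """Toy stand-in for a model object that accumulates per-call state.

    Purely illustrative -- NOT the CodeFormer implementation. It just
    sketches the behaviour the workaround hints at: something inside
    the long-lived object grows with every call, so each call does
    more work than the last.
    """

    def __init__(self):
        self.history = []  # state that grows on every call

    def __call__(self, x):
        self.history.append(x)
        # Work proportional to accumulated state: after n calls this
        # loop touches n items n times, so per-call cost keeps growing.
        return sum(len(self.history) for _ in self.history)


# Reusing one instance: per-call cost grows with the call count.
model = LeakyModel()
costs_reused = [model("img") for _ in range(5)]

# The workaround from this issue: re-create the model each iteration,
# throwing away the accumulated state so every call stays cheap.
costs_fresh = [LeakyModel()("img") for _ in range(5)]

print(costs_reused)  # growing: [1, 4, 9, 16, 25]
print(costs_fresh)   # flat:    [1, 1, 1, 1, 1]
```

Re-instantiating every iteration works, but it pays the model-construction cost each time; finding and clearing whatever state actually grows inside the real model would be the cleaner fix.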