Retrieval-based-Voice-Conversion-WebUI

Reducing VRAM Usage on Inference?

Open MikuAuahDark opened this issue 2 years ago • 2 comments

Hello,

Is it possible to tune down the quality of the inference for less VRAM usage? I'm running an RTX 3060 6GB, and while I can train with 4 batches on V2 without problems, I can't run inference on certain audio files due to an out-of-memory error. I don't think it's an f0 prediction issue, because the out-of-memory error still occurs with the pm, harvest, and crepe f0 methods.

Platform is Windows 11, running natively without WSL2.

MikuAuahDark avatar Jun 29 '23 00:06 MikuAuahDark

https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/config.py#L129 You can try modifying self.gpu_mem <= 4 to self.gpu_mem <= 7 to force low VRAM mode.
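A minimal sketch of what that threshold change does, assuming the structure of the linked config.py: the detected GPU memory is compared against a cutoff, and cards at or below it get smaller inference window/padding parameters, which trade a bit of context for a much smaller peak memory footprint. The parameter names (x_pad, x_query, x_center, x_max) come from config.py, but the exact values here are illustrative:

```python
def select_inference_params(gpu_mem_gb: int) -> dict:
    """Pick inference chunking parameters based on available VRAM.

    Sketch of the logic around config.py#L129; raising the threshold
    from 4 to 7 makes a 6 GB card fall into the low-VRAM branch.
    """
    if gpu_mem_gb <= 7:  # originally `self.gpu_mem <= 4`
        # Low-VRAM mode: smaller padding and shorter processing windows
        # (illustrative values, not the repository's exact numbers).
        return {"x_pad": 1, "x_query": 5, "x_center": 30, "x_max": 32}
    # Default mode: larger windows for slightly better audio continuity.
    return {"x_pad": 3, "x_query": 10, "x_center": 60, "x_max": 65}


# A 6 GB card now selects the low-VRAM parameters:
params = select_inference_params(6)
print(params["x_max"])
```

The quality cost is small because these parameters only control how the input audio is split and padded for processing, not the model itself, which matches the observation below that the output has no noticeable quality loss.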

RVC-Boss avatar Jun 29 '23 02:06 RVC-Boss

Thanks. The resulting audio has no significant noticeable quality loss either.

Are there any plans to make that a configurable option?

MikuAuahDark avatar Jun 29 '23 13:06 MikuAuahDark