Higher CPU usage with 1.5.3.9a
Issue Type: Bug Report
vc client version number: 1.5.3.9a
OS: Windows 11
GPU: RTX 2080 TI
Clear setting: no
Sample model: no
Input chunk num: no
Wait for a while: The GUI successfully launched.
read tutorial: no
Extract files to a new folder: no
Voice Changer type: RVC
Model type: Crepe, Harvest
Situation
I've tried the latest version, but there seems to be 5-10% more CPU usage with the exact same settings. I'm using a 320 chunk, Crepe, and 131,072 extra.
Really?
So, open the .bat and edit:
--content_vec_500_onnx_on true -> --content_vec_500_onnx_on false
That may have helped a bit? Currently sitting at 25% average, bouncing between 18-25%. On version 1.5.3.8a, with my settings, it's around 13-17.5% CPU usage.
Sorry but I have no idea.
You're right, that is indeed the case.
1.5.3.8a: (screenshot)
1.5.3.9a: (screenshot)
@l68728
Did you change the command in the .bat?
--content_vec_500_onnx_on true -> --content_vec_500_onnx_on false
I noticed something else, which hopefully should narrow down the issue a bit more for you. The "res" value seems to instantly jump to 1k, then 5k, 10k, 50k, 100k; not sure if it's a memory leak or something in one of the F0 Dets. I've tried with and without vec_500_onnx and it still happened. This seems to have started, or at least I noticed it, when I exported the model as ONNX and used that ONNX model. But even after switching back to the non-ONNX file, it still occurred.
If the issue occurs even with ONNX switched off, and only Harvest or Crepe is used (not Crepe Full or Crepe Tiny), I don't immediately recall any fixes from v.1.5.3.8a to v.1.5.3.9a that would cause that. So I really have no clue about it.
Rises in CPU load or response time could be due to a variety of factors, such as updates to other applications or the OS, or the effects of a virus scan, among other things.
In the same environment, is the CPU load still lower when running v.1.5.3.8a compared to v.1.5.3.9a?
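If it helps to make that comparison reproducible, here is a minimal sketch for sampling the server process's CPU usage over a fixed window. It assumes psutil is installed and that the server process can be found by a name substring; the TARGET_NAME value is illustrative, not the client's actual process name.

```python
# Minimal sketch: sample a process's CPU usage so the v.1.5.3.8a vs
# v.1.5.3.9a comparison can be run the same way in both environments.
# Assumptions: psutil is installed; TARGET_NAME is illustrative.
import psutil

TARGET_NAME = "MMVCServerSIO"  # adjust to whatever the server process is called
SAMPLES = 30                   # thirty one-second samples

def find_process(name_substring: str) -> psutil.Process:
    for proc in psutil.process_iter(["name", "cmdline"]):
        cmdline = " ".join(proc.info["cmdline"] or [])
        if name_substring in (proc.info["name"] or "") or name_substring in cmdline:
            return proc
    raise RuntimeError(f"no process matching {name_substring!r} found")

proc = find_process(TARGET_NAME)
readings = [proc.cpu_percent(interval=1.0) for _ in range(SAMPLES)]
# cpu_percent() can exceed 100 on multi-core machines; divide by the core
# count to get a whole-machine percentage like Task Manager shows.
average = sum(readings) / len(readings) / psutil.cpu_count()
print(f"average CPU over {SAMPLES}s: {average:.1f}%")
```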
By the way, this is the difference. https://github.com/w-okada/voice-changer/compare/v.1.5.3.8a...v.1.5.3.9a
no clue..
It might have something to do with something I've noticed: in 1.5.3.9a, the NonF0 inference time is still affected by the selected f0 method. So it seems that f0 is computed even if the model does not use f0. Maybe some other calculations are also being done unnecessarily?
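To illustrate the point being made (this is not the project's actual code, just a sketch of the kind of guard being suggested): pitch extraction would be skipped entirely when the loaded model's f0 flag is off, so the selected f0 method could not affect a non-f0 model's inference time.

```python
# Illustration only, not the client's actual pipeline code: skip pitch
# extraction when the loaded model does not consume f0. `extract_f0`
# stands in for Harvest/Crepe/etc., `run_model` for the RVC inference call.
from typing import Callable, Optional
import numpy as np

def infer_chunk(
    audio: np.ndarray,
    model_uses_f0: bool,
    extract_f0: Callable[[np.ndarray], np.ndarray],
    run_model: Callable[[np.ndarray, Optional[np.ndarray]], np.ndarray],
) -> np.ndarray:
    # Only pay for the f0 detector when the model actually needs pitch.
    f0 = extract_f0(audio) if model_uses_f0 else None
    return run_model(audio, f0)
```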
@w-okada I found part of my issue. For some reason, in one of my NonF0 models the value for f0 in the JSON config was set to true, which explains why F0 was being computed. I don't know how the config got "corrupted" in this way, but I assume it had something to do with loading different models into the slot to test them. So I don't think it's a common issue.
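For anyone hitting the same thing, a rough way to spot it is to scan the slot configs and print their f0 flags. The directory layout and the exact field name below are assumptions based on this comment, not taken from the client's code, so point the path at wherever the slot JSON files actually live.

```python
# Rough sketch: report the "f0" flag of every JSON config under the model
# directory, to spot a non-f0 model whose config wrongly says f0 = true.
# The path and field name are assumptions based on the comment above.
import json
from pathlib import Path

MODEL_DIR = Path("model_dir")  # illustrative; point at the client's model folder

for cfg_path in MODEL_DIR.rglob("*.json"):
    try:
        cfg = json.loads(cfg_path.read_text(encoding="utf-8"))
    except (json.JSONDecodeError, UnicodeDecodeError):
        continue  # not a plain-text JSON config, skip it
    if isinstance(cfg, dict) and "f0" in cfg:
        print(f"{cfg_path}: f0 = {cfg['f0']}")
```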
No clue, and a new version has been released. Try the latest version; if the problem remains, open a new issue. Sorry.