Describe the bug
I trained the model on Google Colab, but inference fails on my computer. This did not happen the last time I trained.
Traceback (most recent call last):
File "inference_main.py", line 101, in
main()
File "inference_main.py", line 85, in main
out_audio, out_sr = svc_model.infer(spk, tran, raw_path,
File "C:\python\vits4.0\inference\infer_tool.py", line 177, in infer
audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float()
File "C:\python\vits4.0\models.py", line 409, in infer
x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1,2)
File "C:\python\vits4.0\python38\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "C:\python\vits4.0\python38\lib\site-packages\torch\nn\modules\conv.py", line 313, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\python\vits4.0\python38\lib\site-packages\torch\nn\modules\conv.py", line 309, in _conv_forward
return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [192, 768, 5], expected input[1, 256, 1803] to have 768 channels, but got 256 channels instead
To Reproduce
I noticed that the trained model has changed.
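The last line of the traceback says the generator's first conv (`pre`) expects 768-channel content features while inference fed it 256-channel features, i.e. the checkpoint and the config/speech encoder in use disagree about the content-feature dimension. A minimal sketch to check what the checkpoint itself expects, assuming the stock so-vits-svc checkpoint layout (weights under a "model" key) and a config.json with a "model.ssl_dim" field; the paths below are placeholders:

```python
import json
import torch

CKPT = "logs/44k/G_10000.pth"    # placeholder, use your checkpoint path
CONFIG = "configs/config.json"   # placeholder, use the config you run inference with

# so-vits-svc generator checkpoints usually keep the weights under a "model" key (assumption)
ckpt = torch.load(CKPT, map_location="cpu")
state = ckpt.get("model", ckpt)

# "pre" is the conv that consumes the content features; its second dimension
# (in_channels) shows whether the trained model expects 256 or 768 channels
for name, tensor in state.items():
    if name.endswith("pre.weight"):
        print(name, tuple(tensor.shape))   # e.g. (192, 768, 5) -> expects 768-channel features

# compare with what the inference config declares (key path is an assumption)
with open(CONFIG, encoding="utf-8") as f:
    cfg = json.load(f)
print("config ssl_dim:", cfg.get("model", {}).get("ssl_dim"))
```

If the two numbers disagree, the model was trained against a different content encoder than the one used at inference time.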

Additional context
No response
Version
4.0.1
Platform
Windows 10
Code of Conduct
- [X] I agree to follow this project's Code of Conduct.
No Duplicate
- [X] I have checked existing issues to avoid duplicates.
I get the same error, but when I try to train.
I have the same error, please help.
Traceback (most recent call last):
File "C:\Users\Edwin\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\Edwin\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in run_code
exec(code, run_globals)
File "E:\dev\so-vits-svc-fork\easy-installation\venv\Scripts\svcf.exe_main.py", line 7, in
File "E:\dev\so-vits-svc-fork\easy-installation\venv\lib\site-packages\click\core.py", line 1130, in call
return self.main(*args, **kwargs)
File "E:\dev\so-vits-svc-fork\easy-installation\venv\lib\site-packages\click\core.py", line 1055, in main
rv = self.invoke(ctx)
File "E:\dev\so-vits-svc-fork\easy-installation\venv\lib\site-packages\click\core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "E:\dev\so-vits-svc-fork\easy-installation\venv\lib\site-packages\click\core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "E:\dev\so-vits-svc-fork\easy-installation\venv\lib\site-packages\click\core.py", line 760, in invoke
return _callback(*args, **kwargs)
File "E:\dev\so-vits-svc-fork\easy-installation\venv\lib\site-packages\so_vits_svc_fork_main.py", line 438, in vc
realtime(
File "E:\dev\so-vits-svc-fork\easy-installation\venv\lib\site-packages\so_vits_svc_fork\inference\main.py", line 208, in realtime
svc_model.infer(
File "E:\dev\so-vits-svc-fork\easy-installation\venv\lib\site-packages\so_vits_svc_fork\inference\core.py", line 228, in infer
audio = self.net_g.infer(
File "E:\dev\so-vits-svc-fork\easy-installation\venv\lib\site-packages\so_vits_svc_fork\modules\synthesizers.py", line 213, in infer
x = self.pre(c) * x_mask + self.emb_uv(uv.long()).transpose(1, 2)
File "E:\dev\so-vits-svc-fork\easy-installation\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\dev\so-vits-svc-fork\easy-installation\venv\lib\site-packages\torch\nn\modules\conv.py", line 313, in forward return self._conv_forward(input, self.weight, self.bias)
File "E:\dev\so-vits-svc-fork\easy-installation\venv\lib\site-packages\torch\nn\modules\conv.py", line 309, in _conv_forward
return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [192, 768, 5], expected input[1, 256, 86] to have 768 channels, but got 256 channels instead
Solved here: so-vits-svc produce model, can not in use with so-vits-svc-fork #836
I still get this warning:
UserWarning: Shape mismatch: ['emb_g.weight: torch.Size([1, 256]) -> torch.Size([1, 768])', 'pre.weight: torch.Size([192, 256, 5]) -> torch.Size([192, 768, 5])', 'dec.cond.weight: torch.Size([512, 256, 1]) -> torch.Size([512, 768, 1])', 'enc_q.enc.cond_layer.weight_v: torch.Size([6144, 256, 1]) -> torch.Size([6144, 768, 1])', 'flow.flows.0.enc.cond_layer.weight_v: torch.Size([1536, 256, 1]) -> torch.Size([1536, 768, 1])', 'flow.flows.2.enc.cond_layer.weight_v: torch.Size([1536, 256, 1]) -> torch.Size([1536, 768, 1])', 'flow.flows.4.enc.cond_layer.weight_v: torch.Size([1536, 256, 1]) -> torch.Size([1536, 768, 1])', 'flow.flows.6.enc.cond_layer.weight_v: torch.Size([1536, 256, 1]) -> torch.Size([1536, 768, 1])', 'f0_decoder.cond.weight: torch.Size([192, 256, 1]) -> torch.Size([192, 768, 1])']
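Each entry in that warning is a tensor whose conditioning dimension in the checkpoint (256) differs from what the currently constructed model expects (768), which suggests a 256-channel (older 4.0-style) checkpoint being loaded into a 768-channel setup rather than a corrupted file. A small sketch to confirm by printing those shapes straight from the checkpoint (again assuming the weights sit under a "model" key; the filename is a placeholder):

```python
import torch

ckpt = torch.load("G_xxxx.pth", map_location="cpu")  # placeholder filename
state = ckpt.get("model", ckpt)                       # "model" key is an assumption

# keys named in the warning; their shapes show whether the checkpoint itself
# carries 256- or 768-channel conditioning weights
for name in ("emb_g.weight", "pre.weight", "dec.cond.weight", "f0_decoder.cond.weight"):
    if name in state:
        print(name, tuple(state[name].shape))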