Step-by-step beginner tutorial (continuously updated with community / unofficial tutorials)
(Author editing in this thread) Community video tutorial by 奶糖: https://www.bilibili.com/video/BV1dq4y137pH
Also sharing: https://github.com/babysor/MockingBird/issues/122
Playback is nothing but static noise...
C:\Users\Administrator\Desktop\Realtime-Voice-Clone-Chinese-main\Realtime-Voice-Clone-Chinese-main>python synthesizer_train.py mandarin E:\datat\rain_set\train
Arguments:
run_id: mandarin
syn_dir: E:\datat\rain_set\train
models_dir: synthesizer/saved_models/
save_every: 1000
backup_every: 25000
force_restart: False
hparams:
Checkpoint path: synthesizer\saved_models\mandarin\mandarin.pt
Loading training data from: E:\datat\rain_set\train\train.txt
Using model: Tacotron
Using device: cpu
Initialising Tacotron Model...
Trainable Parameters: 30.872M
Loading weights at synthesizer\saved_models\mandarin\mandarin.pt
Tacotron weights loaded from step 0
Using inputs from:
E:\datat\rain_set\train\train.txt
E:\datat\rain_set\train\mels
E:\datat\rain_set\train\embeds
Traceback (most recent call last):
  File "synthesizer_train.py", line 35, in <module>
    train(**vars(args))
  File "C:\Users\Administrator\Desktop\Realtime-Voice-Clone-Chinese-main\Realtime-Voice-Clone-Chinese-main\synthesizer\train.py", line 111, in train
    dataset = SynthesizerDataset(metadata_fpath, mel_dir, embed_dir, hparams)
  File "C:\Users\Administrator\Desktop\Realtime-Voice-Clone-Chinese-main\Realtime-Voice-Clone-Chinese-main\synthesizer\synthesizer_dataset.py", line 12, in __init__
    with metadata_fpath.open("r", encoding="utf-8") as metadata_file:
  File "C:\ProgramData\Anaconda3\lib\pathlib.py", line 1221, in open
    return io.open(self, mode, buffering, encoding, errors, newline,
  File "C:\ProgramData\Anaconda3\lib\pathlib.py", line 1077, in _opener
    return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'E:\datat\rain_set\train\train.txt'
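That FileNotFoundError means train.txt was never produced at that path — the synthesizer preprocessing step is what writes train.txt, mels/ and embeds/ into the directory later passed to synthesizer_train.py. A hedged pre-flight check (the path in the example is illustrative, not the reporter's actual layout):

```python
# Pre-flight check before running synthesizer_train.py: the preprocessing
# step must already have written train.txt, mels/ and embeds/ into syn_dir.
from pathlib import Path

def missing_preprocess_outputs(syn_dir):
    """Return the expected preprocessing outputs absent from syn_dir."""
    root = Path(syn_dir)
    expected = [root / "train.txt", root / "mels", root / "embeds"]
    return [str(p) for p in expected if not p.exists()]

# Example with a hypothetical path: an untouched directory reports all three.
problems = missing_preprocess_outputs("E:/data/SV2TTS/synthesizer")
```

If the list is non-empty, re-run preprocessing (or fix the path typo) before training.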
C:\Users\Administrator\Desktop\Realtime-Voice-Clone-Chinese-main\Realtime-Voice-Clone-Chinese-main>python demo_toolbox.py -d E:\data\train_set\train
2021-08-19 17:56:17.809226: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-08-19 17:56:17.809396: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Arguments:
datasets_root: E:\data\train_set\train
enc_models_dir: encoder\saved_models
syn_models_dir: synthesizer\saved_models
voc_models_dir: vocoder\saved_models
cpu: False
seed: None
no_mp3_support: False
Warning: you do not have any of the recognized datasets in E:\data\train_set\train. The recognized datasets are: LibriSpeech/dev-clean LibriSpeech/dev-other LibriSpeech/test-clean LibriSpeech/test-other LibriSpeech/train-clean-100 LibriSpeech/train-clean-360 LibriSpeech/train-other-500 LibriTTS/dev-clean LibriTTS/dev-other LibriTTS/test-clean LibriTTS/test-other LibriTTS/train-clean-100 LibriTTS/train-clean-360 LibriTTS/train-other-500 LJSpeech-1.1 VoxCeleb1/wav VoxCeleb1/test_wav VoxCeleb2/dev/aac VoxCeleb2/test/aac VCTK-Corpus/wav48 aidatatang_200zh/corpus/dev aidatatang_200zh/corpus/test Feel free to add your own. You can still use the toolbox by recording samples yourself.
Loaded encoder "pretrained.pt" trained to step 1564501
Synthesizer using device: cpu
Trainable Parameters: 30.872M
Traceback (most recent call last):
  File "C:\Users\Administrator\Desktop\Realtime-Voice-Clone-Chinese-main\Realtime-Voice-Clone-Chinese-main\toolbox\__init__.py", line 122, in <lambda>
    func = lambda: self.synthesize() or self.vocode()
  File "C:\Users\Administrator\Desktop\Realtime-Voice-Clone-Chinese-main\Realtime-Voice-Clone-Chinese-main\toolbox\__init__.py", line 229, in synthesize
    specs = self.synthesizer.synthesize_spectrograms(texts, embeds)
  File "C:\Users\Administrator\Desktop\Realtime-Voice-Clone-Chinese-main\Realtime-Voice-Clone-Chinese-main\synthesizer\inference.py", line 86, in synthesize_spectrograms
    self.load()
  File "C:\Users\Administrator\Desktop\Realtime-Voice-Clone-Chinese-main\Realtime-Voice-Clone-Chinese-main\synthesizer\inference.py", line 64, in load
    self._model.load(self.model_fpath)
  File "C:\Users\Administrator\Desktop\Realtime-Voice-Clone-Chinese-main\Realtime-Voice-Clone-Chinese-main\synthesizer\models\tacotron.py", line 497, in load
    self.load_state_dict(checkpoint["model_state"])
  File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1223, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Tacotron: size mismatch for encoder.embedding.weight: copying a param with shape torch.Size([75, 512]) from checkpoint, the shape in current model is torch.Size([70, 512]).
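That size mismatch means the checkpoint was trained with a different text symbol set (an embedding for 75 symbols) than the code currently defines (70) — i.e. the pretrained model and the code version don't match. A hedged sketch of the kind of comparison involved; in the real code the shapes come from torch tensors (`torch.load(...)["model_state"]` vs `model.state_dict()`), here they are plain tuples:

```python
def shape_mismatches(ckpt_shapes, model_shapes):
    """Return parameters whose shapes differ between a checkpoint and a model."""
    return {
        name: (ckpt_shapes[name], model_shapes[name])
        for name in ckpt_shapes
        if name in model_shapes and ckpt_shapes[name] != model_shapes[name]
    }

# Shapes exactly as reported in the RuntimeError above:
bad = shape_mismatches(
    {"encoder.embedding.weight": (75, 512)},  # checkpoint (75 symbols)
    {"encoder.embedding.weight": (70, 512)},  # current model (70 symbols)
)
# bad -> {'encoder.embedding.weight': ((75, 512), (70, 512))}
```

Any non-empty result means the checkpoint and code come from incompatible versions; the fix is to use a matching pair, not to force the load.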
| Generating 1/1
Done.
Don't be shy — share where you're stuck and I'll keep optimizing.
Please put together a detailed tutorial, boss 👍
Hmm... it looks like you haven't even gotten the synthesizer training to start.
Same here — for example, where do we download the datasets?
The closed issue #14 has the same question, with download links posted.
Dataset extracted to: E:\data\aidatatang_200zh\aidatatang_200zh\corpus\train
Is something wrong at the synthesizer_preprocess_audio.py step?
C:\Users\Administrator\Desktop\Realtime-Voice-Clone-Chinese-main\Realtime-Voice-Clone-Chinese-main>python synthesizer_preprocess_audio.py E:\data\aidatatang_200zh\aidatatang_200zh
Arguments:
datasets_root: E:\data\aidatatang_200zh\aidatatang_200zh
out_dir: E:\data\aidatatang_200zh\aidatatang_200zh\SV2TTS\synthesizer
n_processes: None
skip_existing: False
hparams:
no_alignments: False
dataset: aidatatang_200zh
Using data from:
E:\data\aidatatang_200zh\aidatatang_200zh\aidatatang_200zh\corpus\train
Traceback (most recent call last):
  File "synthesizer_preprocess_audio.py", line 63, in <module>
    preprocess_dataset(**vars(args))
  File "C:\Users\Administrator\Desktop\Realtime-Voice-Clone-Chinese-main\Realtime-Voice-Clone-Chinese-main\synthesizer\preprocess.py", line 32, in preprocess_dataset
    assert all(input_dir.exists() for input_dir in input_dirs)
AssertionError
python synthesizer_preprocess_audio.py E:\data\aidatatang_200zh — don't add the extra directory level.
That solved it, thanks boss!
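The fix above as a path rule, in a hedged sketch: the preprocessor appends aidatatang_200zh/corpus/train to whatever datasets_root you pass, so passing the aidatatang_200zh folder itself doubles that level and trips the AssertionError:

```python
from pathlib import Path

def expected_train_dir(datasets_root):
    # Mirrors the directory the AssertionError above is checking for (sketch).
    return Path(datasets_root) / "aidatatang_200zh" / "corpus" / "train"

good = expected_train_dir(r"E:\data")                  # one level, exists
bad = expected_train_dir(r"E:\data\aidatatang_200zh")  # doubled level, missing
```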
This step is way too slow...
C:\Users\lxd\Desktop\Realtime-Voice-Clone-Chinese-main\Realtime-Voice-Clone-Chinese-main>python synthesizer_train.py mandarin D:\data\aidatatang_200zh\SV2TTS\synthesizer
Arguments:
run_id: mandarin
syn_dir: D:\data\aidatatang_200zh\SV2TTS\synthesizer
models_dir: synthesizer/saved_models/
save_every: 1000
backup_every: 25000
force_restart: False
hparams:
Checkpoint path: synthesizer\saved_models\mandarin\mandarin.pt
Loading training data from: D:\data\aidatatang_200zh\SV2TTS\synthesizer\train.txt
Using model: Tacotron
Using device: cpu
Initialising Tacotron Model...
Trainable Parameters: 30.872M
Loading weights at synthesizer\saved_models\mandarin\mandarin.pt
Tacotron weights loaded from step 0
Using inputs from:
D:\data\aidatatang_200zh\SV2TTS\synthesizer\train.txt
D:\data\aidatatang_200zh\SV2TTS\synthesizer\mels
D:\data\aidatatang_200zh\SV2TTS\synthesizer\embeds
Found 122482 samples
+----------------+------------+---------------+------------------+
| Steps with r=2 | Batch Size | Learning Rate | Outputs/Step (r) |
+----------------+------------+---------------+------------------+
|   20k Steps    |     12     |     0.001     |        2         |
+----------------+------------+---------------+------------------+
| Epoch: 1/2 (500/10207) | Loss: 0.9025 | 0.065 steps/s | Step: 0k | Input at step 500: wo3 yao4 gei3 wang2 ming2 da3 dian4 hua4
| Epoch: 1/2 (1000/10207) | Loss: 0.8266 | 0.071 steps/s | Step: 1k | Input at step 1000: na4 me wo3 jiu4 chong2 xin1 ren4 shi2 ni3
| Epoch: 1/2 (1500/10207) | Loss: 0.7602 | 0.074 steps/s | Step: 1k | Input at step 1500: mei3 tian1 dou1 na4 me wan3 shui4 jiao4
| Epoch: 1/2 (2000/10207) | Loss: 0.7415 | 0.075 steps/s | Step: 2k | Input at step 2000: da3 dian4 hua4 gei3 deng4 han4 ling2
| Epoch: 1/2 (2500/10207) | Loss: 0.6921 | 0.068 steps/s | Step: 2k | Input at step 2500: zhen1 xiang4 yong3 yuan3 zhi3 you3 yi2 ge4
| Epoch: 1/2 (3000/10207) | Loss: 0.6741 | 0.072 steps/s | Step: 3k | Input at step 3000: xia4 men2 wai4 guo2 yu3 xue2 xiao4 chu1 er4 nian2 ji2 chen2 xiao3 qi2 jia1 de zhu4 zhi3
| Epoch: 1/2 (3500/10207) | Loss: 0.6499 | 0.070 steps/s | Step: 3k | Input at step 3500: ru2 guo3 wo3 he2 ni3 zai4 yi4 qi3
| Epoch: 1/2 (4000/10207) | Loss: 0.6679 | 0.073 steps/s | Step: 4k | Input at step 4000: fu4 jin4 de ping2 an1 yin2 hang2
| Epoch: 1/2 (4500/10207) | Loss: 0.6349 | 0.069 steps/s | Step: 4k | Input at step 4500: ming2 zi4 shi4 hui3 guo4 cheng2 nuo4 shu1
| Epoch: 1/2 (5000/10207) | Loss: 0.6392 | 0.073 steps/s | Step: 5k | Input at step 5000: wo3 shen2 me shi2 hou4 cai2 neng2 chong1 man3 dian4
| Epoch: 1/2 (5500/10207) | Loss: 0.6293 | 0.073 steps/s | Step: 5k | Input at step 5500: wo3 da3 ni3 hao3 bu4 hao3 ma ge2 shi4 chong2 fu4
| Epoch: 1/2 (6000/10207) | Loss: 0.6715 | 0.077 steps/s | Step: 6k | Input at step 6000: ci3 ji4 hao3 wu2 liao2 da3 yi1 dian4 ying3 ming2
| Epoch: 1/2 (6500/10207) | Loss: 0.6446 | 0.075 steps/s | Step: 6k | Input at step 6500: wo3 gei3 ni3 fa1 de ni3 shou1 dao4 le ma
| Epoch: 1/2 (7000/10207) | Loss: 0.6022 | 0.068 steps/s | Step: 7k | Input at step 7000: ning4 que1 wu2 lan4 zhi3 wei4 yi3 hou4 de du2 yi1 wu2 er4
| Epoch: 1/2 (7500/10207) | Loss: 0.6178 | 0.067 steps/s | Step: 7k | Input at step 7500: mei2 you3 wang3 luo4 ni3 hai2 hui4 liao2 tian1 ma
| Epoch: 1/2 (8000/10207) | Loss: 0.6041 | 0.068 steps/s | Step: 8k | Input at step 8000: wo3 bu4 fa1 le wo3 yao4 shui4 jiao4 le
| Epoch: 1/2 (8500/10207) | Loss: 0.6078 | 0.072 steps/s | Step: 8k | Input at step 8500: ni3 cai1 lai2 cai1 qu4 ye3 cai1 bu4 ming2 bai2
| Epoch: 1/2 (9000/10207) | Loss: 0.6055 | 0.072 steps/s | Step: 9k | Input at step 9000: ni3 wen4 le wo3 tou2 dou1 da4 le
| Epoch: 1/2 (9500/10207) | Loss: 0.5816 | 0.069 steps/s | Step: 9k | Input at step 9500: xia4 ban1 mei2 you3 mei2 chu1 qu4 guang4
| Epoch: 1/2 (10000/10207) | Loss: 0.5664 | 0.068 steps/s | Step: 10k | Input at step 10000: ni3 jin1 tian1 bu2 shi4 bu4 shang4 ban1 ma
| Epoch: 1/2 (10207/10207) | Loss: 0.5879 | 0.071 steps/s | Step: 10k |
| Epoch: 2/2 (293/10207) | Loss: 0.5840 | 0.070 steps/s | Step: 10k | Input at step 10500: ai4 qing2 xiao3 shuo1 ma2 que4 gao3 ding4 hua1 mei3 nan2
| Epoch: 2/2 (322/10207) | Loss: 0.5892 | 0.070 steps/s | Step: 10k |
@zhuzaileiting You're training on CPU; GPU speed is roughly 1.3-2 steps/s.
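Back-of-envelope numbers behind the exchange above (the 1.3-2 steps/s GPU figure is the maintainer's estimate, taken here as ~1.5; the rest comes from the training log):

```python
import math

samples, batch_size = 122482, 12                     # from the log above
steps_per_epoch = math.ceil(samples / batch_size)    # 10207, matching the log

total_steps = 20_000                # first r=2 schedule shown in the log
cpu_hours = total_steps / 0.07 / 3600   # ~0.07 steps/s on CPU -> roughly 79 h
gpu_hours = total_steps / 1.5 / 3600    # ~1.5 steps/s on GPU  -> roughly 3.7 h
```

So finishing even the first 20k-step schedule on CPU takes days; this is why a GPU is strongly recommended for training.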
...I do have a GPU — how do I configure it so the GPU is used globally?
Arguments:
datasets_root: D:\Data\aidatatang_200zh
out_dir: D:\Data\aidatatang_200zh\SV2TTS\synthesizer
n_processes: None
skip_existing: False
hparams:
no_alignments: False
dataset: aidatatang_200zh
Using data from:
D:\Data\aidatatang_200zh\aidatatang_200zh\corpus\train
Traceback (most recent call last):
File "synthesizer_preprocess_audio.py", line 63, in
The path looks fine to me, though.
Is this what the train folder is supposed to look like?
Is there a group chat where we can discuss how to run this?
Valid for 7 days.
The QR code has expired.
The group QR code has expired — please post a new one.
See above.
Thank you!
The QR code has expired again 😭
@chloe5685
Recruiting one volunteer, multilingual writer preferred. I'll teach the setup and tuning; the write-up will then be published to the community as a tutorial.
The group is at 200 members — could a member invite me in? My WeChat is Sahvyhsu.
Telegram would be better.
Would you mind creating one?
For non-trainers: https://github.com/babysor/MockingBird/wiki
For trainers: https://vaj2fgg8yn.feishu.cn/docs/doccn7kAbr3SJz0KM0SIDJ0Xnhd#
D:\Downloads\MockingBird-main>python demo_toolbox.py -d "D:\Downloads\aidatatang_200zh"
2021-10-01 14:32:20.420728: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2021-10-01 14:32:20.420998: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Arguments:
datasets_root: D:\Downloads\aidatatang_200zh
enc_models_dir: encoder\saved_models
syn_models_dir: synthesizer\saved_models
voc_models_dir: vocoder\saved_models
cpu: False
seed: None
no_mp3_support: False
Traceback (most recent call last):
File "D:\Downloads\MockingBird-main\demo_toolbox.py", line 43, in
Could a group member invite me in? My WeChat is Sinohoney0002.
Could a group member invite me in? My WeChat is myggzhsdd1.
python demo_toolbox.py -d E:\Fun\datasets
Arguments:
datasets_root: E:\Fun\datasets
enc_models_dir: encoder\saved_models
syn_models_dir: synthesizer\saved_models
voc_models_dir: vocoder\saved_models
cpu: False
seed: None
no_mp3_support: False
Warning: you do not have any of the recognized datasets in E:\Fun\datasets. The recognized datasets are: LibriSpeech/dev-clean LibriSpeech/dev-other LibriSpeech/test-clean LibriSpeech/test-other LibriSpeech/train-clean-100 LibriSpeech/train-clean-360 LibriSpeech/train-other-500 LibriTTS/dev-clean LibriTTS/dev-other LibriTTS/test-clean LibriTTS/test-other LibriTTS/train-clean-100 LibriTTS/train-clean-360 LibriTTS/train-other-500 LJSpeech-1.1 VoxCeleb1/wav VoxCeleb1/test_wav VoxCeleb2/dev/aac VoxCeleb2/test/aac VCTK-Corpus/wav48 aidatatang_200zh/corpus/dev aidatatang_200zh/corpus/test aishell3/test/wav magicdata/train Feel free to add your own. You can still use the toolbox by recording samples yourself.
What went wrong here? After opening the toolbox, the two dataset rows are greyed out. Please advise QAQ
@zhuzaileiting You're training on CPU; GPU speed is roughly 1.3-2 steps/s.
Hi — following your doc I raised the batch size to 36. VRAM usage is now at 80%, but GPU utilization is still only 12%. Which parameter should I tune next?
Could a group member invite me in? My WeChat is luo_dan_lwts.
Could a group member invite me in? My WeChat is Mr-Sandman___.
Created a second group to make discussion easier.
I'm on a 3090 set to batch size 48 and get 0.86 steps/s — nowhere near 1.3-2.
Step speed isn't comparable across different batch sizes.
It also depends on the dataset size, right?
Yes.
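One hedged way to compare the reports above: convert to samples per second, since a step at a larger batch size moves more data (assuming the 1.3-2 steps/s reference applies at the default batch size of 12 shown in the training log):

```python
def samples_per_sec(batch_size, steps_per_sec):
    """Throughput metric that stays comparable when batch size changes."""
    return batch_size * steps_per_sec

reference = samples_per_sec(12, 1.5)  # ~1.5 steps/s at the default batch of 12
rtx3090 = samples_per_sec(48, 0.86)   # the 3090 report above, batch size 48
# The 3090 processes more audio per second despite the lower steps/s figure.
```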
Could you post the group again? It expired.
The QR code is no longer valid.
Tips for improving output quality, part 1 — article by vega: https://zhuanlan.zhihu.com/p/425692267
Don't be shy — share where you're stuck and I'll keep optimizing.
Hi author, have you considered publishing a docker image? It would solve all the setup problems at once...
Could you post the group again? It expired.
Same request.
The GPU part needs thought — there are CUDA-enabled docker images now. Could you help me research it?

I've built an inference-only container image with the pretrained models baked in; it passed my test run.
docker run -p 8080:8080 -it jiada/mocking_bird
Do you have details on how it was tested? If convenient, could you open a PR?
I gave it a try; a couple of issues:
- Because of browser restrictions, microphone access requires HTTPS. I'm not good at Python, so without touching the image I set up nginx on the host on an arbitrary port (666), configured an SSL certificate, and proxied it to 8080.
- The second one is probably a real problem: running demo_toolbox inside the container errors out. Is demo_toolbox Windows-only? (I'm on a Mac.) The error is shown below:
# pwd
/workspace
# python demo_toolbox.py
Traceback (most recent call last):
File "demo_toolbox.py", line 2, in <module>
from toolbox import Toolbox
File "/workspace/toolbox/__init__.py", line 1, in <module>
from toolbox.ui import UI
File "/workspace/toolbox/ui.py", line 11, in <module>
import sounddevice as sd
File "/opt/conda/lib/python3.7/site-packages/sounddevice.py", line 71, in <module>
raise OSError('PortAudio library not found')
OSError: PortAudio library not found
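A hedged note on that last error: sounddevice needs the native PortAudio library, which headless container images usually lack (on Debian-based images it is typically provided by the libportaudio2 package). Since inference itself doesn't need audio playback, one workaround pattern is making the import optional — an illustrative sketch, not how the toolbox is actually structured:

```python
# Make audio playback optional so inference still works without PortAudio.
try:
    import sounddevice as sd      # raises OSError when PortAudio is missing
    HAVE_PLAYBACK = True
except (ImportError, OSError):    # module absent, or native library absent
    sd = None
    HAVE_PLAYBACK = False

def play(wav, sample_rate):
    """Play audio if we can; silently skip in headless environments."""
    if HAVE_PLAYBACK:
        sd.play(wav, sample_rate)
```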
I'm hitting the same problem — how do I fix it?
C:\mockingbird\MockingBird-main>python demo_toolbox.py
Traceback (most recent call last):
File "demo_toolbox.py", line 2, in
Thinking about it, running GUI software inside docker probably needs something like this reference:
https://www.cloudsavvyit.com/10520/how-to-run-gui-applications-in-a-docker-container/
Please post an updated group QR code.
Waiting for a new one.
Is there still an invite code?
I made a video: https://www.bilibili.com/video/BV15Z4y197Ra — fairly detailed; the only thing it probably misses is the ffmpeg installation.
Hello, could you resend the group invite code? @babysor
I'm not entirely clear on which files go where.
Please update the QR code.
Could you update the QR code? My runs keep crashing and I can't find the cause.
The QR code still hasn't been updated.
Would you mind your video being linked from the README?

Could you share a new group number? Thanks 😘 @babysor
Could we get a new group QR code, a QQ group number, or a Telegram handle?
Thank you! Please get in touch. — 牛文龙, 博源金融
Please update the group QR code, thank you!
D:\M\MockingBird-main>python pre.py D:\数据集 -d aidatatang_200zh -n 7
Using data from:
D:\数据集\aidatatang_200zh\corpus\train
Traceback (most recent call last):
File "D:\M\MockingBird-main\pre.py", line 74, in
Please update the QR code, thanks.
+1
I get this error after loading an audio file…
Feel free to add your own. You can still use the toolbox by recording samples yourself.
Traceback (most recent call last):
  File "D:\mokingbird\MockingBird-main\MockingBird-main\toolbox\__init__.py", line 103, in <lambda>
    func = lambda: self.load_from_browser(self.ui.browse_file())
  File "D:\mokingbird\MockingBird-main\MockingBird-main\toolbox\__init__.py", line 170, in load_from_browser
    wav = Synthesizer.load_preprocess_wav(fpath)
  File "D:\mokingbird\MockingBird-main\MockingBird-main\synthesizer\inference.py", line 146, in load_preprocess_wav
    wav = librosa.load(str(fpath), hparams.sample_rate)[0]
TypeError: load() takes 1 positional argument but 2 were given
Solved — run pip install librosa==0.8.1 on the command line.
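Background on that fix: in newer librosa the sample rate of librosa.load became keyword-only, which is exactly the "takes 1 positional argument but 2 were given" above. Pinning librosa==0.8.1 restores the old positional signature; the alternative is passing it by keyword, i.e. librosa.load(str(fpath), sr=hparams.sample_rate). A stub mirroring the signature change (hypothetical function, not librosa itself):

```python
# Stub with the new-style signature: the sample rate is keyword-only.
def load(path, *, sr=22050):
    return path, sr

try:
    load("a.wav", 16000)       # old positional call -> TypeError, as reported
    failed = False
except TypeError:
    failed = True

path, sr = load("a.wav", sr=16000)   # keyword call works on both styles
```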
Could you update the QR code please? Thanks.
Bosses!! Could someone help me figure out this error? I've already put the models into the corresponding folders, but it still fails.
D:\迅雷下载\MockingBird-main>python demo_toolbox.py
Arguments:
datasets_root: None
enc_models_dir: encoder\saved_models
syn_models_dir: synthesizer\saved_models
voc_models_dir: vocoder\saved_models
cpu: False
seed: None
no_mp3_support: False
Warning: you did not pass a root directory for datasets as argument. The recognized datasets are: LibriSpeech/dev-clean LibriSpeech/dev-other LibriSpeech/test-clean LibriSpeech/test-other LibriSpeech/train-clean-100 LibriSpeech/train-clean-360 LibriSpeech/train-other-500 LibriTTS/dev-clean LibriTTS/dev-other LibriTTS/test-clean LibriTTS/test-other LibriTTS/train-clean-100 LibriTTS/train-clean-360 LibriTTS/train-other-500 LJSpeech-1.1 VoxCeleb1/wav VoxCeleb1/test_wav VoxCeleb2/dev/aac VoxCeleb2/test/aac VCTK-Corpus/wav48 aidatatang_200zh/corpus/dev aidatatang_200zh/corpus/test aishell3/test/wav magicdata/train Feel free to add your own. You can still use the toolbox by recording samples yourself.
The QR code needs updating.
Asking for a new group QR code.
Boss, the requirements installed without any errors, but running the program fails:
Traceback (most recent call last):
File "E:\MockingBird\demo_toolbox.py", line 2, in
pip install torchcomplex
Boss, the requirements installed without any errors, but running the program fails:
Traceback (most recent call last):
  File "E:\MockingBird\demo_toolbox.py", line 2, in <module>
    from toolbox import Toolbox
  File "E:\MockingBird\toolbox\__init__.py", line 6, in <module>
    import ppg_extractor as extractor
  File "E:\MockingBird\ppg_extractor\__init__.py", line 6, in <module>
    from .frontend import DefaultFrontend
  File "E:\MockingBird\ppg_extractor\frontend.py", line 5, in <module>
    from torch_complex.tensor import ComplexTensor
ModuleNotFoundError: No module named 'torch_complex'
Bosses, a question: my own dataset is too small, so I switched to another dataset mid-training, but I'm getting this error:
warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
Traceback (most recent call last):
File "E:\数据集制作\MockingBird-main\synthesizer_train.py", line 37, in