aurae
You can first build and install warp-ctc from the source at https://github.com/baidu-research/warp-ctc. Once that succeeds, go back to this project's source tree, `cd ../pytorch_binding`, change `warp_ctc_path = "../build"` in setup.py to point at Baidu's build directory, and then run `python setup.py install`. However, the file libwarpctc.dylib is not found, even though the Example code runs fine. Has anyone solved this problem?
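The steps described above can be sketched as the following shell session (the sibling-directory layout and the `<this-repo>` placeholder are assumptions; adjust paths to your checkout). On macOS, exporting `DYLD_LIBRARY_PATH` is one common way to help the loader find libwarpctc.dylib at runtime:

```shell
# Assumed layout: Baidu's warp-ctc cloned as a sibling of this repo.
git clone https://github.com/baidu-research/warp-ctc.git
cd warp-ctc && mkdir -p build && cd build
cmake .. && make              # produces libwarpctc.dylib on macOS (.so on Linux)

cd ../../<this-repo>/pytorch_binding
# Edit setup.py so that: warp_ctc_path = "../../warp-ctc/build"
python setup.py install

# If libwarpctc.dylib is still not found at runtime, point the loader at it:
export DYLD_LIBRARY_PATH="$PWD/../../warp-ctc/build:$DYLD_LIBRARY_PATH"
```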
【Team name】: 还真是这么回事 【No.】: 2 【Paper】: Neural Architecture Design for GPU-Efficient Networks 【Status】: Registered 【Repo link】: https://github.com/bigcash/GENets-Paddle
【Team name】: 还真是那么回事 【No.】: 2 【Paper】: Neural Architecture Design for GPU-Efficient Networks 【Status】: Submitted 【Repo link】: https://github.com/bigcash/GENets-Paddle
@lawlict From dataloader.py you can see that the test loader uses a batch size of 1, so adding a `break` in evaluation.py should be fine.
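A minimal sketch of that idea (the loader here is a stand-in list, not the project's actual DataLoader): because each batch holds exactly one utterance, breaking after the first iteration processes exactly one sample.

```python
# Hypothetical stand-in for the project's test loader: batch size 1,
# so each yielded batch holds exactly one utterance.
test_loader = [["utt1"], ["utt2"], ["utt3"]]

decoded = []
for batch in test_loader:
    decoded.extend(batch)  # decode the single utterance in this batch
    break                  # batch size is 1, so one iteration is one sample

print(decoded)  # only the first utterance was processed
```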
I'm hitting the same error. How do I fix it?
My suggestion: I think you should freeze the encoder and fine-tune the CTC head and the decoder.
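A minimal sketch of that selective-freezing idea, assuming a model whose parameter names are prefixed with `encoder.`, `ctc.`, and `decoder.` (the prefixes are an assumption about the model layout). The sketch works on plain names so it stays self-contained; in PyTorch you would apply the resulting plan by setting each parameter's `requires_grad`:

```python
def freeze_plan(param_names):
    """Return {name: trainable}: freeze the encoder, fine-tune CTC and decoder."""
    return {name: not name.startswith("encoder.") for name in param_names}

# In PyTorch (assumed model layout) the plan would be applied as:
#   for name, p in model.named_parameters():
#       p.requires_grad = plan[name]
plan = freeze_plan(["encoder.layers.0.weight", "ctc.weight", "decoder.embed.weight"])
```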
You just need a bigger dataset... To expand the word units.txt, you can try freezing all the weights but updating only the weights of your new units.
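One way to realize "freeze everything except the new units" is to zero the gradient rows that belong to the original vocabulary before the optimizer step (the old/new split below is an illustration; in PyTorch this masking is typically done with a gradient hook on the output or embedding layer):

```python
def mask_old_unit_grads(grad_rows, old_vocab_size):
    """Zero the gradient rows of the original units so that only rows
    appended for new units (indices >= old_vocab_size) receive updates."""
    return [
        [0.0] * len(row) if i < old_vocab_size else row
        for i, row in enumerate(grad_rows)
    ]

# Toy gradient for a 4-unit output layer where units 0-2 came from the old
# units.txt and unit 3 was newly added.
grad = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
masked = mask_old_unit_grads(grad, old_vocab_size=3)
# Only the last row (the new unit) keeps its gradient.
```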
> > you just need a bigger dataset... to expand word units.txt, you can try freeze all weight, but update the weight of your new units only. > > Hi,...
> > the use_amp option > > > > > > I am encountering the same error with the librispeech/s0 recipe (but using a custom dataset). I have tried filtering...
> On my side, with v2 and v3 models converted to faster-whisper, VAD doesn't seem to work successfully either. > > Name: whisperx Version: 3.1.2 > > Name: faster-whisper Version: 1.0.1 > > Test video: https://www.youtube.com/watch?v=we8vNy6DYMI > > v2 still occasionally produces garbled text; with v3, even with vad set, it is still 30s...