FunASR
There is no training example for the CT-Transformer punctuation model — help wanted!
❓ Questions and Help
Before asking:
- search the issues.
- search the docs.
What is your question?
Code
What have you tried?
What's your environment?
- OS (e.g., Linux):
- FunASR Version (e.g., 1.0.0):
- ModelScope Version (e.g., 1.11.0):
- PyTorch Version (e.g., 2.0.0):
- How you installed funasr (pip, source):
- Python version:
- GPU (e.g., V100M32):
- CUDA/cuDNN version (e.g., cuda11.7):
- Docker version (e.g., funasr-runtime-sdk-cpu-0.4.1):
- Any other relevant information:
Ongoing
The "CT-Transformer标点-中英文-通用-large" model does not actually use the look-ahead window of length L described in the paper. If `text_lengths` is passed as each utterance's full length, the code only applies the mask built from those lengths, so the self-attention (san_m) ends up attending globally over the whole sequence. Is there a config or model that matches the paper's setup? Also, the model splits the input into chunks of 20 in the forward pass — is 20 the optimal value?
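To make the distinction concrete, here is a minimal sketch (not FunASR's actual code; `lookahead_mask` is a hypothetical helper) of the mask the paper-style setup would use. Each query position i may attend only up to i + L keys; with L large enough it degenerates into the global mask that a plain per-utterance length mask produces:

```python
def lookahead_mask(seq_len, lookahead):
    """Build a (seq_len x seq_len) attention mask.

    Entry [i][j] is 1 when query position i may attend to key
    position j, i.e. j <= i + lookahead.  lookahead=0 is strictly
    causal; lookahead >= seq_len - 1 is equivalent to full (global)
    self-attention, which is what a length-only mask yields.
    """
    return [[1 if j <= i + lookahead else 0 for j in range(seq_len)]
            for i in range(seq_len)]

# With L=1, position 0 can see positions 0 and 1 but not beyond:
print(lookahead_mask(4, 1)[0])  # [1, 1, 0, 0]
```

If the shipped model only multiplies in the length mask, every row of the effective mask is all ones within the utterance, which matches the "global san_m" behavior described above.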
I also want to fine-tune the punctuation model. Have you managed to implement it?
I implemented it roughly, but the results are poor; something is probably wrong. I'll wait for them to open-source the training code.