Binbin Zhang
what's the error message?
any update now?
Any results on the accuracy?
> any WER comparison about the AC and WFST?
how many context phrases are used in your testing?
2.44? Our best result, shown in the README, is 2.66.
We just fixed a bug, please see https://github.com/wenet-e2e/wenet/pull/847. Due to the bug, BPE was not used to tokenize the annotation.
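For reference, with the fix the transcripts are tokenized by the BPE model again. A rough sketch of what that looks like with sentencepiece (the model path here is a placeholder, not the exact recipe file):

```python
import sentencepiece as spm

# Placeholder model path; a real recipe would point at its trained BPE model.
sp = spm.SentencePieceProcessor(model_file="bpe.model")

# With the bug, this step was skipped and the annotation was split
# character by character instead of into BPE pieces.
pieces = sp.encode("turn on the lights", out_type=str)
print(pieces)  # e.g. ['▁turn', '▁on', '▁the', '▁light', 's']
```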
Fixed. Please see https://github.com/wenet-e2e/wenet/pull/848.
language_type is only used for the post-processing of blank. For European languages, you should set it to 1.
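To make that concrete, here is a minimal sketch (not the actual wenet runtime code) of what the language_type switch typically changes in the blank/space post-processing; the '▁' word-boundary marker is assumed from sentencepiece-style BPE:

```python
def post_process(tokens, language_type=0):
    """Join decoded tokens into the final transcript.

    language_type == 0: Mandarin/English mix, characters are concatenated
                        directly without inserting spaces.
    language_type == 1: European languages, subword pieces are joined and
                        the assumed '▁' word-boundary marker becomes a space.
    """
    text = "".join(tokens)
    if language_type == 1:
        text = text.replace("▁", " ").strip()
    return text

print(post_process(["你", "好"], language_type=0))              # -> 你好
print(post_process(["▁hello", "▁wor", "ld"], language_type=1))  # -> hello world
```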
We are not sure. I think you can just ignore the warning and continue the training. The final WER should be comparable to that of the old code.