zhaoyukoon
The motivation of this issue is to remove the dependency on Kaldi and OpenFst. There should be no difference between the WFST and trie approaches in terms of WER or latency. [DeepSpeech](https://github.com/PaddlePaddle/DeepSpeech/blob/master/decoders/swig/ctc_beam_search_decoder.cpp) has...
> the way to search LM score need implement by yourself

Actually, only about 100 lines of code are needed to implement the LM score calculation. I agree that WFST is more...
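For reference, here is a minimal sketch of what such an LM score calculation could look like, assuming an ARPA-style n-gram model has already been parsed into two hash maps (log probabilities and backoff weights). This is not the DeepSpeech code or the code from this thread; the class name `NgramScorer` and the OOV penalty are hypothetical:

```cpp
// Minimal sketch of a hash-map-based n-gram LM scorer with backoff.
#include <string>
#include <unordered_map>
#include <vector>

class NgramScorer {
 public:
  // Register one entry from an ARPA file: log10 probability and backoff weight.
  void AddNgram(const std::string& ngram, float logp, float backoff = 0.0f) {
    logp_[ngram] = logp;
    backoff_[ngram] = backoff;
  }

  // log10 P(word | history) with standard backoff: if the full n-gram is
  // unseen, add the backoff weight of the context and retry with a shorter history.
  float Score(std::vector<std::string> history, const std::string& word) const {
    float backoff_sum = 0.0f;
    while (true) {
      auto hit = logp_.find(Key(history, word));
      if (hit != logp_.end()) return backoff_sum + hit->second;
      if (history.empty()) return backoff_sum + kOovLogp;
      auto bit = backoff_.find(Key(history));
      if (bit != backoff_.end()) backoff_sum += bit->second;
      history.erase(history.begin());  // back off to a shorter context
    }
  }

 private:
  // Join history words (and optionally a final word) into an n-gram key.
  static std::string Key(const std::vector<std::string>& toks,
                         const std::string& last = "") {
    std::string key;
    for (const auto& t : toks) key += (key.empty() ? "" : " ") + t;
    if (!last.empty()) key += (key.empty() ? "" : " ") + last;
    return key;
  }

  static constexpr float kOovLogp = -10.0f;  // penalty for unseen unigrams (assumed)
  std::unordered_map<std::string, float> logp_;
  std::unordered_map<std::string, float> backoff_;
};
```

A trie over the n-gram keys would make the lookups faster, but even this hash-map version covers the whole scoring logic in well under 100 lines.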
> @zhaoyukoon Hi, is this code based on beam search or prefix beam search?

Beam search, not prefix beam search. Actually, it is the same as CTC WFST beam search, except...
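To make the distinction concrete, here is a rough sketch of path-level beam search over CTC posteriors: each hypothesis is a full frame-by-frame label path, and paths that collapse to the same output are not merged, which is what prefix beam search would do. Everything here is illustrative, not the actual decoder from this thread:

```cpp
// Sketch of plain (path-level) CTC beam search.
#include <algorithm>
#include <utility>
#include <vector>

// log_probs[t][c]: log posterior of class c at frame t (c == 0 is blank).
// Returns the top beam_size frame-label paths with their scores.
std::vector<std::pair<std::vector<int>, float>> CtcBeamSearch(
    const std::vector<std::vector<float>>& log_probs, int beam_size) {
  std::vector<std::pair<std::vector<int>, float>> beams = {{{}, 0.0f}};
  for (const auto& frame : log_probs) {
    std::vector<std::pair<std::vector<int>, float>> next;
    for (const auto& [path, score] : beams) {
      for (int c = 0; c < static_cast<int>(frame.size()); ++c) {
        auto extended = path;
        extended.push_back(c);
        next.emplace_back(std::move(extended), score + frame[c]);
      }
    }
    // Keep only the top-k scoring paths; unlike prefix beam search, two
    // paths that collapse to the same label sequence stay separate.
    size_t keep = std::min<size_t>(beam_size, next.size());
    std::partial_sort(next.begin(), next.begin() + keep, next.end(),
                      [](const auto& a, const auto& b) { return a.second > b.second; });
    next.resize(keep);
    beams = std::move(next);
  }
  return beams;  // collapse repeats and blanks afterwards to get label sequences
}
```

The prefix variant differs in the merging step: hypotheses are keyed by their collapsed label prefix and their probabilities are summed, rather than kept as separate paths in the beam.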
I met a similar problem and fixed it by installing the latest PyTorch nightly via `pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121`. Besides, setting `MAX_JOBS=4` reduces memory usage during the build.