Piotr Żelasko
Thanks, that seems to be working!

```
$ python3 -m k2.version
Collecting environment information...

k2 version: 1.11
Build type: Release
Git SHA1:
Git date:
Cuda used to build k2:
cuDNN...
```
It also works on a different system with an even older compiler / stdlib (g++ (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)), which is great. Thanks a lot!
Just for clarification: does "full-chunk decoding" mean "offline decoding"?
OK I can check this out
In the LF-MMI training, in epoch 2 I see an error:

```
Traceback (most recent call last):
  File "mmi_bigram_train.py", line 428, in <module>
    main()
  File "mmi_bigram_train.py", line 386, in main
    global_batch_idx_valid=global_batch_idx_valid)...
```
Anyway, from the first 2 epochs, it seems like scaling the denominator down does not help. I'm attaching a tensorboard screenshot; it's weirdly rendered, likely because each subsequent epoch resets...
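To make the "scaling the denominator down" experiment concrete, here is a minimal sketch of how a denominator scale is typically applied in an LF-MMI objective. The names `num_logprob`, `den_logprob`, and `lf_mmi_loss` are illustrative, not the actual API of `mmi_bigram_train.py`:

```python
def lf_mmi_loss(num_logprob: float, den_logprob: float, den_scale: float = 1.0) -> float:
    """LF-MMI objective: numerator (reference-graph) log-likelihood minus a
    scaled denominator (all-paths) log-likelihood, negated so lower is better."""
    return -(num_logprob - den_scale * den_logprob)

# With den_scale < 1.0 the denominator term contributes less to the loss:
full = lf_mmi_loss(-10.0, -12.0, den_scale=1.0)    # -(-10 + 12.0) = -2.0
scaled = lf_mmi_loss(-10.0, -12.0, den_scale=0.8)  # -(-10 +  9.6) =  0.4
```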
Thanks. So that's what the "relative" option is for ;) Updated screens: Re cuDNN: you're right, I was probably too quick to assume a k2 error. I'll try to figure...
Unfortunately, not using conda is a luxury that some of us (who don't have sudo privileges/can't mess with their compute infrastructure) cannot afford... Maybe it's possible to do a hybrid...
It's not only about the Python version; it also allows you to install native dependencies (including CUDA, MKL, cuDNN, even gcc) in the versions you want, in a very simple way...
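For anyone in the same no-sudo situation, a user-local conda install plus an environment that pins the native toolchain might look like the sketch below. The package names follow conda-forge conventions and the versions are purely illustrative, not a tested recipe for this repo:

```shell
# Install Miniconda into the home directory (no root needed)
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p "$HOME/miniconda3"
source "$HOME/miniconda3/etc/profile.d/conda.sh"

# Create an environment that pins Python and the native dependencies
conda create -n k2 -c conda-forge python=3.8 cudatoolkit=10.2 cudnn mkl gxx_linux-64
conda activate k2
```

Everything lands under `$HOME`, so no system packages are touched.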
Some results:

den_scale = 0.8
```
2021-01-08 12:22:01,328 INFO [mmi_bigram_decode.py:297] %WER 12.41% [6525 / 52576, 872 ins, 732 del, 4921 sub ]
```
den_scale = 0.9
```
2021-01-08 17:26:31,018 INFO...
```
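As a sanity check on how these numbers relate: the WER is the sum of insertions, deletions, and substitutions divided by the number of reference words. A quick sketch reproducing the den_scale = 0.8 line (the helper name `wer` is illustrative):

```python
def wer(ins: int, dels: int, subs: int, ref_words: int) -> float:
    """Word error rate as a percentage: (ins + del + sub) / reference words."""
    return 100.0 * (ins + dels + subs) / ref_words

# den_scale = 0.8 result: 872 ins, 732 del, 4921 sub over 52576 reference words
print(round(wer(872, 732, 4921, 52576), 2))  # 12.41
```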