Tanel Alumäe
No, there are no plans to support CUDA. I just cannot see it as a very practical approach (one would need one GPU per worker, which would be expensive, and...
I don't think it's possible at the moment. In fact, I've struggled with this problem too. I think one might need to make largish changes to the code to...
Are you using the "big-lm-const-arpa" property? The lattice generated with the smaller LM is rescored with the "big LM" when decoding finishes, and that sometimes changes the result noticeably.
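For reference, these are decoder properties in the worker's YAML config. A rough sketch (the paths are placeholders, and I'm writing the second property name from memory, so double-check it against the kaldinnet2onlinedecoder plugin):

    decoder:
      # big LM in const-arpa format, used to rescore the final lattice
      big-lm-const-arpa: models/G.carpa
      # small LM used during decoding, given as an FST so its scores can be
      # subtracted before the big LM scores are added (property name from memory)
      lm-fst: models/G.fst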
Sorry, I'm not able to support Cygwin. I have no idea if it's possible to get it to work.
I am not able to reproduce this. From what I see, everything is correct: when the client disconnects, the worker waits until it's done decoding, and then reconnects to the server...
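To spell out what I mean, here is a simplified sketch of the intended behaviour (not the actual worker code; the class and function names are made up):

    import time


    class StubConnection:
        """Stand-in for the worker's connection to the master server (made up)."""

        def audio_chunks(self):
            # In the real worker this yields audio forwarded from the client and
            # stops (or raises) when the client disconnects.
            yield b"chunk-1"
            yield b"chunk-2"


    class StubDecoder:
        """Stand-in for the decoding pipeline (made up)."""

        def process(self, chunk):
            print("decoding %d bytes" % len(chunk))

        def finish(self):
            print("flushing the decoder and waiting for the final result")


    def run_worker():
        # Even if the client disconnects mid-utterance, the worker first finishes
        # decoding whatever it has already received, and only after that goes back
        # to the server to advertise itself as available again.
        while True:
            connection = StubConnection()   # connect/register with the master server
            decoder = StubDecoder()
            try:
                for chunk in connection.audio_chunks():
                    decoder.process(chunk)
            finally:
                decoder.finish()            # wait until decoding is fully done
            time.sleep(1)                   # then reconnect on the next iteration
            break                           # (stub only: stop after one round)


    if __name__ == "__main__":
        run_worker()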
Are you using the latest code from trunk, and have you made any changes? I'm asking this because it seems that you have changed the counter to 80 (because you...
I was finally able to reproduce this after upgrading my ws4py package; I'm working on the fix now.
This is caused by what I claim is a bug in ws4py 3.5.0. The problem is that ws4py finishes the websocket handling thread before the websocket has actually done its...
It's a bit embarrassing but I'm afraid it's not compatible with Python 3. It's definitely on my TODO list as I also want to move to Python 3, but I...