Online Decoding
Hi, I have trained a model with a unidirectional LSTM and local attention.
How can I use this model for online decoding? Where should I make changes in the existing code?
I can see web_server and run_single in TFEngine.py, but I am unable to make sense of how to achieve online decoding.
Some guidance on how to achieve it would be really helpful.
Sorry, I forgot to answer here.
We currently do not have a published setup for this, so if you want it right now, it requires some work on your side. You need to change your network (the encoder) so that it can work with streaming input, and you need to implement some way to feed the data in incrementally.
We might later add something like this to RETURNN to make it simpler, but there is nothing concrete planned yet, and no timeline.
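
For illustration only, here is a minimal, hypothetical sketch of the kind of encoder change meant above: a unidirectional LSTM that carries its state across incoming feature chunks. This is plain TensorFlow/Keras, not RETURNN code, and the names, dimensions, and chunking scheme are all assumptions.

```python
# Hypothetical sketch: a streaming (chunk-wise) unidirectional LSTM encoder.
# Plain TensorFlow 2 / Keras, not RETURNN-specific; all names/sizes are assumptions.
import numpy as np
import tensorflow as tf

FEATURE_DIM = 40   # e.g. log-mel / MFCC dimension (assumption)
ENCODER_DIM = 512  # LSTM hidden size (assumption)

# return_sequences gives per-frame encoder outputs for the attention decoder;
# return_state lets us carry (h, c) over to the next chunk.
lstm = tf.keras.layers.LSTM(ENCODER_DIM, return_sequences=True, return_state=True)

def encode_stream(chunks):
    """Encode an iterable of feature chunks, each of shape (time, FEATURE_DIM)."""
    state = None  # None means zero initial state for the first chunk
    for chunk in chunks:
        x = tf.convert_to_tensor(chunk[np.newaxis, ...], dtype=tf.float32)  # add batch dim
        outputs, h, c = lstm(x, initial_state=state)
        state = [h, c]  # reuse the final state as the initial state of the next chunk
        yield outputs[0]  # per-frame encoder outputs for this chunk

# Usage: simulate an audio frontend delivering 50-frame chunks.
dummy_chunks = (np.random.randn(50, FEATURE_DIM).astype("float32") for _ in range(4))
for enc in encode_stream(dummy_chunks):
    print(enc.shape)  # (50, ENCODER_DIM)
```

The decoder side (local attention over the encoder outputs produced so far) would then consume these chunk outputs as they arrive; how to wire that into TFEngine.py (e.g. around web_server or run_single) is exactly the part that currently requires your own work.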