Yubei

32 comments of Yubei

I traced back to [the place where the inputs are passed in](https://github.com/tensorflow/lingvo/blob/eb50d8dca0c35007df1d57b1a2151a134a660d7a/lingvo/tasks/asr/encoder.py#L324) and to [batch.src_inputs](https://github.com/tensorflow/lingvo/blob/eb50d8dca0c35007df1d57b1a2151a134a660d7a/lingvo/tasks/asr/encoder.py#L319). However, the same thing happens there:

```
batch.src_inputs: Tensor("ExpandDims_1:0", dtype=float32, device=/job:local/replica:0/task:0/device:CPU:0)
where inputs pass into specaugment: Tensor("ExpandDims_1:0", dtype=float32, device=/job:local/replica:0/task:0/device:CPU:0)
```
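
To show what I mean, here is a minimal TF1-style sketch (the placeholder is just a hypothetical stand-in for `batch.src_inputs`, not the real lingvo input pipeline): printing a graph-mode tensor only shows its name/dtype, the static shape can be partly unknown, and the actual sizes are only available from the dynamic shape at run time.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for batch.src_inputs; the static shape is left
# partly unknown on purpose, like in the real input pipeline.
src_inputs = tf.placeholder(tf.float32, shape=[None, None, 80, 1])

print(src_inputs)        # Tensor("Placeholder:0", shape=(?, ?, 80, 1), dtype=float32)
print(src_inputs.shape)  # static shape; unknown dimensions show up as ?

dynamic_shape = tf.shape(src_inputs)  # rank-1 int32 tensor, resolved at run time

with tf.Session() as sess:
    feed = np.zeros([4, 100, 80, 1], dtype=np.float32)
    print(sess.run(dynamic_shape, feed_dict={src_inputs: feed}))  # [  4 100  80   1]
```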

Sorry, I don't quite understand what you said. Could you elaborate on this? Thank you~

I use this:

```
# line 312 (original): _, series_length, num_freq, _ = py_utils.GetShape(inputs)
series_length = inputs[1]  # line 313
num_freq = inputs[2]       # line 314
augmented_inputs = self._AugmentationNetwork(series_length, num_freq,  # lines 315-316
                                             inputs, paddings)
```

However,...
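
In case it helps to see the difference, here is a small sketch with a hypothetical placeholder standing in for the encoder's `inputs`: `inputs[1]` slices the tensor itself, while the dimension sizes have to come from `tf.shape(inputs)` (which, as far as I understand, is roughly what `py_utils.GetShape` falls back to when the static shape is undefined).

```python
import tensorflow as tf

# Hypothetical stand-in for the encoder's `inputs`, shaped
# [batch, time, num_freq, channels] with the leading dimensions unknown statically.
inputs = tf.placeholder(tf.float32, shape=[None, None, 80, 1])

# inputs[1] slices out the second example in the batch; it is a [time, 80, 1]
# tensor of features, not a dimension size.
second_example = inputs[1]

# Dimension sizes must be read from the dynamic shape when the static one is unset:
dyn_shape = tf.shape(inputs)
series_length = dyn_shape[1]  # scalar int32 tensor: the time dimension
num_freq = dyn_shape[2]       # scalar int32 tensor: the frequency dimension
```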

Hello, this is Xiao Yubei. I have received your email and will reply to you as soon as I have read it. Thank you!

I have tested 8-beam beam search and it gives no improvement. I used the transcripts of the LibriSpeech training sets to train the LM. Could you tell me what is the n-best rescoring you...
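
Is it something like the following? This is only my guess at the idea, not a lingvo API; the `lm_logprob` function and the weights are hypothetical placeholders.

```python
# Sketch of n-best rescoring: the decoder emits its top-N hypotheses with
# acoustic-model scores, and an external LM re-ranks them.

def rescore_nbest(nbest, lm_logprob, lm_weight=0.5, length_penalty=0.1):
    """nbest: list of (hypothesis_text, am_score) pairs from beam search."""
    best_hyp, best_score = None, float("-inf")
    for text, am_score in nbest:
        words = text.split()
        total = (am_score
                 + lm_weight * lm_logprob(words)   # external LM score
                 + length_penalty * len(words))    # optional length reward
        if total > best_score:
            best_hyp, best_score = text, total
    return best_hyp
```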

@by2101 Is the log_pplx reported by the RNNLM the PPL? The following two pictures show "log_pplx" and "log_pplx_per_word".

![image](https://user-images.githubusercontent.com/21981293/61936516-7f3f9b00-afbf-11e9-9579-bc7177ae9e80.png)
![image](https://user-images.githubusercontent.com/21981293/61936539-8a92c680-afbf-11e9-80ba-0f13955cde2d.png)

But my loss curve looks strange.

![image](https://user-images.githubusercontent.com/21981293/61936563-95e5f200-afbf-11e9-950a-c1646a4b7c17.png)
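
If log_pplx is the average negative log-likelihood per word in nats (I am not sure that is exactly how it is defined here), then I assume the perplexity would just be its exponential, e.g.:

```python
import math

log_pplx = 4.5             # illustrative value, not read from the plots above
ppl = math.exp(log_pplx)   # ~90.0
print(ppl)
```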

It is really big, but looking at the curve I feel that the language model has converged. Did your language model converge during training? What was your final PPL?

I used "xent_output.log_probs" to compute. It is already [log_softmax](https://github.com/tensorflow/lingvo/blob/af2e3468aa42d17d69d4019bce0f06a7c5d7fbf2/lingvo/core/layers.py#L2623). > I think it should be log_softmax rather than log logits.

Can your 3-gram or other LM fusion improve ASR?

Hahaha, yes, I think so. I would like to ask: how much better is your 3-gram's PPL?