MagicSource
@csukuangfj I have tried the .pt and the model inference result is correct in PyTorch, which at least indicates that the model and weights have no problem, but the ncnn result was wrong.
@csukuangfj Then which part am I missing? I cannot reproduce it with icefall either. Was the export wrong?
I just realized I should expand the batch dim for a single file. But I got a new error after fixing this:
```
assert x.size(0) == lengths.max().item()
AssertionError
```
Why does this assert happen?
According to the comment of the Transducer function, x_lens is (N,), which is (1,) here since my batch size is 1. Why do I get the above error? The log shows `torch.Size([119, 1, 512]) tensor([-1])`...
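For reference, a minimal sketch of how the single-utterance input could be shaped. Shapes are assumed from the log above (119 frames, not the exact icefall API), and `feats` is a placeholder feature tensor: the point is that x_lens must hold the real frame count, so an x_lens of `tensor([-1])` can never satisfy an assert against the time axis.

```python
import torch

# One utterance with T frames of F-dim features (values are dummies).
T, F = 119, 80
feats = torch.randn(T, F)            # (T, F) for a single file

x = feats.unsqueeze(0)               # add batch dim -> (N, T, F), N == 1
x_lens = torch.tensor([x.size(1)])   # (N,) valid-frame counts, here [119]

# The failing assert compares the time axis with lengths.max();
# with x_lens == tensor([-1]) it can never hold.
assert x.size(1) == x_lens.max().item()
print(x.shape, x_lens)
```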
@csukuangfj Do you know why I get the above error?
@csukuangfj I put all the code here: https://github.com/jinfagang/aural/blob/master/demo_file.py I am not used to the Kaldi-like file organization, so I restructured it a little. The model should be the same, the weight...
@csukuangfj I copied the code from the ncnn sherpa demo. Is it that ncnn doesn't care about batch, so you squeezed all of them? I noticed that this should also be...
@csukuangfj Hi, I just changed it to:
```
for t in range(T):
    encoder_out_t = encoder_out[:, t, :].unsqueeze(1)
    print_shape(encoder_out_t)
    joiner_out = model.run_joiner(encoder_out_t, decoder_out)
    # print(joiner_out.shape)  # [500]
    y = joiner_out.argmax(dim=0).tolist()
    if y != blank_id:
        ...
```
Since decoder_out is `[decoder_out]: torch.Size([1, 2, 512]) cpu torch.float32`, decoder.ndim should equal encoder.ndim, so why should it be in [2, 4]?
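One possible reading of that check (a guess, not confirmed from the icefall source): 2-D inputs would be per-frame `(N, C)` pairs used during greedy decoding, while 4-D inputs would be pruned-lattice tensors used in training, so a 3-D `(N, T, C)` tensor matches neither case. A toy joiner with the same assertion, purely for illustration:

```python
import torch

def toy_joiner(encoder_out: torch.Tensor, decoder_out: torch.Tensor) -> torch.Tensor:
    # Illustrative stand-in for the real joiner: both inputs must have
    # the same rank, and only ranks 2 and 4 are accepted.
    assert encoder_out.ndim == decoder_out.ndim
    assert encoder_out.ndim in (2, 4)
    return torch.tanh(encoder_out + decoder_out)

# Per-frame (N, C) inputs pass the check; a 3-D (N, T, C) pair would not.
out = toy_joiner(torch.zeros(1, 512), torch.zeros(1, 512))
print(out.shape)
```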
@csukuangfj Hi, I can now run the decoder forward, but this for loop does not seem to work:
```
def greedy_search(model, encoder_out: torch.Tensor):
    print_shape(encoder_out)
    assert encoder_out.ndim == 3
    T = encoder_out.size(1)
    context_size = 2
    ...
```
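For comparison, here is a self-contained sketch of the greedy loop over encoder frames, modeled on the snippet above. The joiner here is a toy `nn.Linear` standing in for `model.run_joiner` (which is part of the demo code, not reproduced here), and there is no decoder state update, so this only illustrates the frame loop and the blank-skipping logic:

```python
import torch

def greedy_search(encoder_out: torch.Tensor, joiner, blank_id: int = 0) -> list:
    # encoder_out: (N, T, C) with N == 1
    assert encoder_out.ndim == 3
    T = encoder_out.size(1)
    hyp = []
    for t in range(T):
        frame = encoder_out[:, t, :]       # (1, C) for frame t
        logits = joiner(frame)             # (1, vocab) from the toy joiner
        y = int(logits.argmax(dim=-1))     # best token id for this frame
        if y != blank_id:                  # skip blanks, keep real tokens
            hyp.append(y)
    return hyp

torch.manual_seed(0)
enc = torch.randn(1, 5, 4)                 # dummy encoder output: 5 frames
joiner = torch.nn.Linear(4, 3)             # toy joiner: vocab size 3
hyp = greedy_search(enc, joiner)
print(hyp)
```

In the real search the decoder output must also be refreshed with the last `context_size` emitted tokens whenever a non-blank is produced; this sketch deliberately omits that step.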