Rajarshee Mitra
Hi, in my case the attention distribution, the indices at the encoder side, and the vocab distribution are all tensors. Their shapes are: ```attn_dist``` -> ```[batch_size, num_decoder_steps, num_encoder_steps]``` contains the...
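Since the post is truncated, here is a minimal TF 1.x sketch of how tensors with those shapes might fit together in a pointer/copy-style combination; the sizes, the `encoder_ids` tensor, and the one-hot projection onto the vocabulary are all assumptions for illustration, not the original author's code:

```python
import tensorflow as tf

# Hypothetical sizes for illustration only.
batch_size, num_decoder_steps, num_encoder_steps, vocab_size = 4, 7, 12, 50

# attn_dist: attention weights over encoder positions for each decoder step,
# shape [batch_size, num_decoder_steps, num_encoder_steps].
attn_dist = tf.nn.softmax(
    tf.random_normal([batch_size, num_decoder_steps, num_encoder_steps]), axis=-1)

# encoder_ids (assumed): vocab index of each source token,
# shape [batch_size, num_encoder_steps].
encoder_ids = tf.random_uniform(
    [batch_size, num_encoder_steps], maxval=vocab_size, dtype=tf.int32)

# vocab_dist: generation distribution over the vocabulary for each decoder
# step, shape [batch_size, num_decoder_steps, vocab_size].
vocab_dist = tf.nn.softmax(
    tf.random_normal([batch_size, num_decoder_steps, vocab_size]), axis=-1)

# One way to project attention mass onto the vocabulary:
# tf.one_hot(encoder_ids, vocab_size) has shape
# [batch_size, num_encoder_steps, vocab_size], so the batched matmul sums
# attention over repeated source tokens and yields a copy distribution of
# shape [batch_size, num_decoder_steps, vocab_size], addable to vocab_dist.
copy_dist = tf.matmul(attn_dist, tf.one_hot(encoder_ids, vocab_size))
```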
Hi, I installed onnx into my conda environment using: ```conda install -c conda-forge onnx``` This installs the following packages:
```
libprotobuf  conda-forge/linux-64::libprotobuf-3.10.1-h8b12597_0
onnx         conda-forge/linux-64::onnx-1.6.0-py36he1b5a44_0
protobuf     conda-forge/linux-64::protobuf-3.10.1-py36he1b5a44_0
```
However, following the...
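A quick sanity check after that install, assuming the conda environment is activated, is to import the package and confirm the version matches what conda pulled in:

```python
# Verify the conda-forge install: this should print 1.6.0,
# matching the onnx-1.6.0 package listed above.
import onnx
print(onnx.__version__)
```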
After I pass an explicit output layer like [here](https://github.com/tensorflow/nmt/blob/master/nmt/model.py#L426), I see that the decoder output after ```dynamic_decode``` is the output distribution of size |V|, where V is the vocab. How...
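For context, a minimal TF 1.x sketch of why that happens: when an explicit `output_layer` is passed to `BasicDecoder`, as in the linked nmt code, `dynamic_decode` applies the projection inside the decode loop, so `rnn_output` holds per-step logits over the vocabulary. The sizes and cell choice here are assumptions for illustration:

```python
import tensorflow as tf
from tensorflow.contrib import seq2seq

# Hypothetical sizes for illustration only.
batch_size, max_time, embed_dim, num_units, vocab_size = 4, 7, 32, 64, 50

decoder_inputs = tf.random_normal([batch_size, max_time, embed_dim])
sequence_lengths = tf.fill([batch_size], max_time)

cell = tf.nn.rnn_cell.LSTMCell(num_units)
initial_state = cell.zero_state(batch_size, tf.float32)

# The explicit projection onto the vocabulary, analogous to the linked code.
output_layer = tf.layers.Dense(vocab_size, use_bias=False)

helper = seq2seq.TrainingHelper(decoder_inputs, sequence_lengths)
decoder = seq2seq.BasicDecoder(cell, helper, initial_state,
                               output_layer=output_layer)
outputs, final_state, _ = seq2seq.dynamic_decode(decoder)

# Because output_layer is applied inside the decoder, outputs.rnn_output
# has shape [batch_size, num_decoder_steps, vocab_size], i.e. size |V|
# per step, rather than the raw cell output of size num_units.
```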