Shamane Siri
Here we are using **embedding_attention_seq2seq**, which automatically embeds our vocab set. I need to visualize the embeddings in TensorBoard with word names rather than number tags. Is there anything...
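To make the question concrete, here is roughly what I'm hoping for, via TensorBoard's embedding projector with a metadata file mapping each row to a word. This is just a sketch under my own assumptions: `vocab` and `embedding_var` are placeholders standing in for the real vocab list and the embedding variable created inside **embedding_attention_seq2seq**.

```python
import os
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

# Placeholders for illustration only.
vocab = ["<pad>", "<go>", "hello", "world"]
embedding_var = tf.get_variable("embedding", [len(vocab), 8])

# Write one word per line; row i of the metadata labels row i of the embedding.
log_dir = "logs/"
with open(os.path.join(log_dir, "metadata.tsv"), "w") as f:
    for word in vocab:
        f.write(word + "\n")

config = projector.ProjectorConfig()
emb = config.embeddings.add()
emb.tensor_name = embedding_var.name
emb.metadata_path = "metadata.tsv"  # names instead of number tags
projector.visualize_embeddings(tf.summary.FileWriter(log_dir), config)
```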
While using **tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq**, can we still implement a bidirectional encoder?
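For context, this is the workaround I'm imagining if the answer is no: building the bidirectional encoder by hand with `tf.nn.bidirectional_dynamic_rnn` instead of the unidirectional one baked into **embedding_rnn_seq2seq**. All the names and sizes below are my own assumptions, not from the article.

```python
import tensorflow as tf

vocab_size, emb_dim, num_units = 1000, 64, 128
encoder_inputs = tf.placeholder(tf.int32, [None, None])  # batch x time ids

# Embed by hand, since we are no longer using the built-in encoder.
embedding = tf.get_variable("enc_embedding", [vocab_size, emb_dim])
emb_inputs = tf.nn.embedding_lookup(embedding, encoder_inputs)

fw_cell = tf.nn.rnn_cell.GRUCell(num_units)
bw_cell = tf.nn.rnn_cell.GRUCell(num_units)
outputs, states = tf.nn.bidirectional_dynamic_rnn(
    fw_cell, bw_cell, emb_inputs, dtype=tf.float32)

encoder_outputs = tf.concat(outputs, -1)  # fw/bw concat, for attention
encoder_state = tf.concat(states, -1)     # would seed the decoder state
```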
Will it increase accuracy, since padding sequences too heavily can lose information?
The [documentation](https://pytorch-forecasting.readthedocs.io/en/latest/models.html) says the TFT can also be used for classification tasks. How can I proceed with this?
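Concretely, I'm wondering whether something like the following is the intended route: a categorical target column plus a **CrossEntropy** loss. The dataframe and column names here are invented for illustration.

```python
from pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer
from pytorch_forecasting.metrics import CrossEntropy

# `df` is a pandas DataFrame with made-up columns: `time_idx`,
# `series_id`, and a categorical `label` column to classify.
dataset = TimeSeriesDataSet(
    df,
    time_idx="time_idx",
    target="label",              # categorical target -> classification
    group_ids=["series_id"],
    max_encoder_length=24,
    max_prediction_length=1,
)
model = TemporalFusionTransformer.from_dataset(dataset, loss=CrossEntropy())
```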
Here, in **def train_net(model, params, weights, path, trainFrames, i):**, why did you use the parameter **i**?
This [paper (Show, Attend and Tell: Neural Image Caption Generation with Visual Attention)](https://arxiv.org/pdf/1502.03044.pdf) mentions hard attention. But then we cannot use standard backprop because of the non...
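To spell out what I mean by the non-differentiability: sampling the attention location is a discrete operation with no gradient path, which is why the paper optimises a variational lower bound with a REINFORCE-style (score-function) estimator. A toy PyTorch illustration of that estimator, not the paper's actual training code:

```python
import torch

scores = torch.randn(5, requires_grad=True)    # attention logits
dist = torch.distributions.Categorical(logits=scores)
loc = dist.sample()                            # hard choice: no gradient path
reward = -(loc.float() - 2.0) ** 2             # stand-in for the model's reward
loss = -dist.log_prob(loc) * reward.detach()   # REINFORCE surrogate loss
loss.backward()                                # gradients still reach `scores`
```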
Similar to the **transform_spec** function, is there a way to pass in a custom collate function?
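For what it's worth, here is the shape of the answer I'm looking for. I'm assuming this is Petastorm, whose PyTorch `DataLoader` appears to accept a `collate_fn` argument; the dataset path and collate logic below are made up.

```python
from petastorm import make_reader
from petastorm.pytorch import DataLoader

def my_collate(rows):
    # hypothetical custom batching logic over a list of row namedtuples
    return rows

with make_reader("file:///tmp/my_dataset") as reader:
    loader = DataLoader(reader, batch_size=32, collate_fn=my_collate)
    for batch in loader:
        break
```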