Bing Liu
Hi @Peydon, the code here implements the attention BiRNN model described in the paper. The encoder-decoder model can be implemented with a few straightforward changes to this code.
Thanks for trying out this sample code. When doing model tuning, I typically first look at the training and validation curves to see whether the model overfits or...
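As an illustration only (not part of the released code), here is a minimal sketch of plotting hypothetical per-epoch training and validation losses to spot overfitting:

```python
import matplotlib.pyplot as plt

# Hypothetical per-epoch losses; in a real run these would be collected
# from the training loop's logging.
train_loss = [1.9, 1.2, 0.8, 0.55, 0.40, 0.30, 0.24, 0.20]
valid_loss = [2.0, 1.4, 1.0, 0.85, 0.80, 0.82, 0.88, 0.95]

epochs = range(1, len(train_loss) + 1)
plt.plot(epochs, train_loss, label="training loss")
plt.plot(epochs, valid_loss, label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
# A validation curve that turns upward while the training curve keeps
# falling is the classic sign of overfitting.
plt.show()
```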
@hariom-yadaw Slot labels for words that do not appear in the training set might be inferred from the structure of the sequence, e.g. a pattern like "flight from A to B". Pre-trained...
Hi @pfllo, thanks for trying out the code! The sample code here demonstrates how we introduced attention to the tagging task. As mentioned in "generate_embedding_RNN_output.py", this published code does not...
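To illustrate the attention idea for tagging, here is a minimal numpy sketch; the dot-product scoring, names, and shapes are simplified assumptions and differ from the released TensorFlow implementation, which learns a feed-forward scoring function:

```python
import numpy as np

def attention_context(tag_state, encoder_states):
    """Illustrative dot-product attention: weight each encoder state by its
    similarity to the current tagging state and return the context vector."""
    scores = encoder_states @ tag_state            # (seq_len,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over time steps
    return weights @ encoder_states                # weighted sum: (hidden_dim,)

# Toy example: 5 time steps, hidden size 4.
rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(5, 4))
tag_state = rng.normal(size=(4,))
context = attention_context(tag_state, encoder_states)
print(context.shape)  # (4,)
```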
Thanks for your interest in our work. I believe the intent classification accuracy is unlikely to improve much by modeling label dependencies. The above results posted...
We used cross-validation for hyper-parameter tuning. During final model evaluation, we used the full training set (4978 training samples) from the original ATIS train/test splits. During data preprocessing, we...
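As a rough illustration of the tuning setup (not the exact scripts we used), a k-fold split over the training utterances could look like the sketch below, with train_samples standing in for the preprocessed examples:

```python
from sklearn.model_selection import KFold

# Hypothetical 5-fold cross-validation over the ATIS training set for
# hyper-parameter tuning; each fold trains a candidate configuration on
# train_idx and scores it on dev_idx.
train_samples = list(range(4978))
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, dev_idx) in enumerate(kf.split(train_samples)):
    print(f"fold {fold}: {len(train_idx)} train / {len(dev_idx)} dev")
```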
This is a good point. We simply used the first intent label as the true label during data preprocessing, for both model training and testing. There are in total 15...
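For illustration, the first-intent rule can be as simple as the snippet below; it assumes the common ATIS convention of joining multiple intents with '#', which may differ from the exact preprocessing scripts:

```python
def first_intent(raw_label):
    """Keep only the first intent when an utterance carries several.
    Assumes intents are joined with '#' as in
    'atis_flight#atis_airfare' -> 'atis_flight'."""
    return raw_label.split('#')[0]

print(first_intent('atis_flight#atis_airfare'))  # atis_flight
print(first_intent('atis_ground_service'))       # atis_ground_service
```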
Hi @dianamurgulet, the "create_model" function finds the model checkpoint in FLAGS.train_dir with ckpt = tf.train.get_checkpoint_state(FLAGS.train_dir). If you have a pre-trained model, just pass the model directory to this get_checkpoint_state function...
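A minimal sketch of that restore logic, assuming a TF 1.x Saver and a hypothetical maybe_restore helper (not the repo's actual create_model signature):

```python
import tensorflow as tf  # TF 1.x style, matching the released code

def maybe_restore(session, saver, model_dir):
    """Restore parameters from model_dir if a checkpoint exists there,
    otherwise initialize fresh variables. model_dir stands in for
    FLAGS.train_dir or the directory holding a pre-trained model."""
    ckpt = tf.train.get_checkpoint_state(model_dir)
    if ckpt and ckpt.model_checkpoint_path:
        saver.restore(session, ckpt.model_checkpoint_path)
    else:
        session.run(tf.global_variables_initializer())
```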
@bringtree Thanks for pointing this out. I have just pushed a fix.