Ben Bolte
It seems like there's an encoding issue with the dataset. Maybe [this] answer on StackOverflow helps?
I think `_eval_sets` should have a leading underscore, as in:

```
evaluator._eval_sets = dict([('dev', evaluator.load('test1'))])
evaluator.load_epoch(model, 54)
evaluator.get_mrr(model, evaluate_all=True)
```

The way I wrote it is so that class variables have...
I put that model in the `seq2seq` folder. It was kind of an experimental model where I generated answers for each question using an RNN model like Karpathy's char-rnn, then...
Hmm that seems pretty interesting, let me know how it goes. Embeddings can be trained in different ways, or you can use the ones I provided. You can make your...
Keras will train its own word embeddings, it just works better if you start with word2vec embeddings (you can choose whether or not they are trained as well). I think...
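To make that concrete, here's a minimal numpy sketch of the idea; the vocabulary, the seed, and the 100-dimensional size are just illustrative. In Keras-1-era code the same thing is done by passing the matrix into the layer, e.g. `Embedding(vocab_size, dim, weights=[w2v_matrix], trainable=...)`, where `trainable` controls whether the pretrained vectors keep updating during training:

```python
import numpy as np

# Hypothetical tiny vocabulary; real code maps every corpus word to an index.
vocab = {"<pad>": 0, "insurance": 1, "claim": 2, "policy": 3}
embed_dim = 100  # matches the 100-dim word2vec vectors discussed above

# Pretend these rows came from word2vec; row i is the vector for word index i.
rng = np.random.default_rng(0)
w2v_matrix = rng.normal(size=(len(vocab), embed_dim)).astype("float32")

def embed(token_ids, table):
    """An embedding layer is just a row lookup into its weight matrix."""
    return table[np.asarray(token_ids)]

sentence = [vocab["insurance"], vocab["claim"]]
vectors = embed(sentence, w2v_matrix)
print(vectors.shape)  # one 100-dim vector per token
```

Starting from the word2vec matrix instead of a random one just changes the initialization; the lookup itself is identical either way.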
Did you download the dataset from [here](https://github.com/codekansas/insurance_qa_python)? I'm not sure which resource it could be. Could you reproduce the error message? Yep, the `word2vec_100_dim.h5` was the output of using Gensim's...
I went ahead and added the word embeddings I've been using to GitHub.
`syn0` is the equivalent of the Keras embedding layer's weight matrix, I believe; that's what I've been using. It's really these lines:

```
weights = np.load('word2vec_100_dim.embeddings')
language_model = model.prediction_model.layers[2]
language_model.layers[2].set_weights([weights])
```
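In case it helps, `set_weights` on an embedding layer expects a list containing a single matrix of shape `(vocab_size, embed_dim)`, which is exactly the shape of gensim's `syn0` array (one row per word, in vocabulary-index order; newer gensim versions call it `model.wv.vectors`). A quick numpy sketch with illustrative sizes:

```python
import numpy as np

# Stand-in for the saved word2vec matrix (gensim's syn0): one row per
# word index, in the model's vocabulary order. Sizes are illustrative.
vocab_size, embed_dim = 5000, 100
weights = np.zeros((vocab_size, embed_dim), dtype="float32")

# set_weights takes a list with one array per weight tensor; an embedding
# layer has a single weight tensor, so a sanity check before calling it
# might look like this:
expected_shape = (vocab_size, embed_dim)
assert weights.shape == expected_shape
layer_weights = [weights]  # what you'd pass to set_weights
```

If the shapes don't match, the embedding layer was probably built with a different vocabulary size or dimension than the saved matrix.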
It might be different depending on your version
I get the sense that it has something to do with fine-tuning the hyperparameters. Or maybe they used better pre-trained embeddings... The best result I've gotten so far was...