Di Jin

60 comments of Di Jin

I have one question: where do you get the pre-trained embedding file? Is it trained on English Wikipedia? Thanks!

I see. According to my experiments with different embedding sources, the pre-trained embeddings have an influence on the performance. I am going to train word2vec with 300 and 400 dimensions on English Wikipedia...

Any updates from you on the BERT related experiments? Thank you for sharing your experience!

It is totally possible. You just need to change the dataset processing code to accommodate your own dataset.

1. The number of queries is the number of times the classifier is queried to obtain the output probability vectors.
2. In order to replicate the numbers in our paper,...
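The query-counting idea in point 1 can be sketched as a small wrapper around the victim classifier: every call that returns a probability vector bumps a counter. The class and function names here are illustrative, not the repository's actual API.

```python
class QueryCountingClassifier:
    """Hypothetical wrapper that counts how often the victim model is queried."""

    def __init__(self, predict_fn):
        self.predict_fn = predict_fn  # maps one text to a probability vector
        self.num_queries = 0

    def predict(self, texts):
        # each text sent to the classifier counts as one query
        self.num_queries += len(texts)
        return [self.predict_fn(t) for t in texts]


# toy stand-in classifier: scores by whether the text contains "good"
toy = QueryCountingClassifier(
    lambda t: [0.9, 0.1] if "good" in t else [0.1, 0.9]
)
toy.predict(["good movie", "bad movie", "good food"])
print(toy.num_queries)  # 3
```

An attack loop built on such a wrapper can then report `num_queries` at the end, which is how a query budget is typically tallied in black-box attack papers.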

Hi, I have tested both methods: removing the word and replacing it with "", and the difference is not obvious. is in the vocab so I don't think it can...
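For concreteness, the two ablation methods compared above can be sketched as follows; this is a minimal illustration of the idea, not the repository's code.

```python
def ablate_remove(words, idx):
    # method 1: drop the word entirely, shortening the sequence
    return words[:idx] + words[idx + 1:]


def ablate_blank(words, idx):
    # method 2: replace the word with an empty string, keeping positions
    return words[:idx] + [""] + words[idx + 1:]


words = "the movie was great".split()
print(ablate_remove(words, 3))  # ['the', 'movie', 'was']
print(ablate_blank(words, 3))   # ['the', 'movie', 'was', '']
```

Both variants are then fed back to the classifier to measure how much the prediction changes, which is why the two often give similar importance scores.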

The README file explains how to obtain the embeddings: run the following code to pre-compute the cosine similarity scores between word pairs based on the counter-fitting word embeddings [https://drive.google.com/file/d/1bayGomljWb6HeYDMTDKXrh0HackKtSlx/view]....

Hi, this error occurred while you were loading the model parameters. Could you re-check the number of labels in the model configuration (currently you are using 2)? It should...
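The reason a wrong label count causes a loading error is that the classifier head's weight shape depends on the number of labels, so a checkpoint saved with one label count cannot be copied into a head built for another. A hypothetical sketch with placeholder weights, not the actual model code:

```python
def build_classifier_head(hidden_size, num_labels):
    # placeholder weight matrix of shape (num_labels, hidden_size)
    return [[0.0] * hidden_size for _ in range(num_labels)]


def shapes_match(a, b):
    # a checkpoint can only be loaded if both dimensions agree
    return len(a) == len(b) and len(a[0]) == len(b[0])


saved_head = build_classifier_head(768, 3)  # checkpoint trained on 3 classes
new_head = build_classifier_head(768, 2)    # config mistakenly set to 2 labels

print(shapes_match(saved_head, new_head))  # False -> size mismatch on load
```

Setting the configured label count to match the dataset the checkpoint was trained on resolves this kind of error.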

As mentioned in the requirements.txt file, the Python version should be 3.6.

When I wrote this code, I used Python 3.6, so I am sure it works with that version.