Edoardo Barba

Results: 12 comments of Edoardo Barba

It should be pretty easy, since lemma + synset = sense, basically. You can build the index from the WordNet interface of NLTK.

Basically, we partitioned the "ALL" dataset from the Raganato et al. 2017 framework using the Part-of-Speech tags of the instances.
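A hedged sketch of that partitioning step, assuming the XML layout of the Raganato et al. 2017 evaluation framework (the tiny inline document below is a hand-made illustration, not real data):

```python
# Sketch: group the instance ids of an evaluation XML file by their PoS tag.
import xml.etree.ElementTree as ET
from collections import defaultdict

def partition_by_pos(root):
    """Map each PoS tag to the list of <instance> ids carrying it."""
    by_pos = defaultdict(list)
    for inst in root.iter("instance"):
        by_pos[inst.get("pos")].append(inst.get("id"))
    return dict(by_pos)

# Illustrative miniature of the framework's XML format.
demo = ET.fromstring(
    '<corpus><text id="d0"><sentence id="d0.s0">'
    '<wf lemma="the" pos="DET">The</wf>'
    '<instance id="d0.s0.t0" lemma="dog" pos="NOUN">dog</instance>'
    '</sentence></text></corpus>'
)
print(partition_by_pos(demo))  # {'NOUN': ['d0.s0.t0']}
```

In practice you would call `partition_by_pos` on the parsed ALL dataset and then write one sub-dataset per PoS tag.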

You can use the evaluation script. It is explained in the README. I hope I got the question right this time 😄

Ok, let's see if I can be clearer xD. 1. **You** have to partition the ALL dataset by yourself using the PoS of each instance (If I find the...

Hey, as you can read in the README, you can download the datasets (or find the URLs to download them from) in the setup.sh script.

You don't need to generate the precise ids: if you want a lemma to be predicted, you give it the tag "instance"; otherwise, the tag "wf". The only...
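A minimal sketch of building such an input file, assuming the Raganato et al. 2017 XML conventions; the ids here (`d000.s000.t000`) just imitate the style seen in the released datasets and, as said above, do not need to be the precise official ones:

```python
# Sketch: emit a one-sentence input file where only "dog" should be predicted.
import xml.etree.ElementTree as ET

corpus = ET.Element("corpus", lang="en")
text = ET.SubElement(corpus, "text", id="d000")
sentence = ET.SubElement(text, "sentence", id="d000.s000")

# Words you do NOT want disambiguated get the "wf" tag...
wf = ET.SubElement(sentence, "wf", lemma="the", pos="DET")
wf.text = "The"

# ...while words you DO want predicted get the "instance" tag, with an id.
inst = ET.SubElement(sentence, "instance", id="d000.s000.t000",
                     lemma="dog", pos="NOUN")
inst.text = "dog"

xml_string = ET.tostring(corpus, encoding="unicode")
print(xml_string)
```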

Nope, but you must be sure that the word (actually the lemma) with its POS is in the WordNet inventory.

Yes, just one instance. There are examples in the datasets if you want to be sure.

Hey, sorry for the late response. Did you use the `setup.sh` script to set up your environment?