Eric Kafe
@sylee957, with an infinite grammar, you can use the _depth_ parameter to limit the recursion depth:
```
gen = generate(G, depth=10)
print(next(gen))
```
['a', 'a', 'a', 'a', 'a', 'a', 'a',...
@sylee957 actually, given a finite value for _depth_ lower than the recursion limit, the _generate_ function does reach all the allowable productions. Since it concatenates lists and not strings, it...
For example:
```
n = 5
gen = generate(G, depth=n)
for _i in range(n):
    print(next(gen))
```
['a', 'a', 'a', 'a']
['a', 'a', 'a']
['a', 'a']
['a']
[]
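For readers without the grammar G at hand, the behaviour above can be reproduced with a minimal standalone sketch. This is not NLTK's actual implementation; the toy grammar `S -> 'a' S | ''` and the function name `expand` are assumptions for illustration only:

```python
def expand(depth):
    """Depth-limited expansion of the toy grammar S -> 'a' S | '' (sketch only)."""
    if depth > 1:
        # Recurse on S, spending one level of depth per expansion.
        for rest in expand(depth - 1):
            yield ['a'] + rest
    # The empty production terminates every branch.
    yield []

for sentence in expand(5):
    print(sentence)
```

With `depth=5` this prints the same five decreasing lists as above, ending with the empty list.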
So the real problem seems to be that, at line 29, _generate.py_ sets a default depth that is much higher than the recursion limit:
```
if depth is None:
    depth...
```
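One possible fix, sketched here under the assumption that the default should simply stay safely below the interpreter's limit (the divisor is an arbitrary safety margin, not a tested value, and `default_depth` is a hypothetical helper, not NLTK code):

```python
import sys

def default_depth(depth=None):
    # Sketch: pick a default depth bounded by the current recursion limit,
    # instead of a fixed value that may exceed it.
    if depth is None:
        depth = sys.getrecursionlimit() // 3  # arbitrary safety margin (assumption)
    return depth
```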
NLTK's stopwords lists come from the Snowball project, but someone added aberrant forms like "ayantes" to the French list. An easy solution could be to just go back to [the...
Unzipping corpora is not a fix, but a temporary workaround for occasional cases in which a corpus reader does not properly handle zipped data. As it is increasingly common to...
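For reference, reading a corpus file directly from its zip archive needs only the standard library, so a reader can in principle support zipped data without extracting anything to disk. A generic sketch, unrelated to any particular NLTK corpus reader:

```python
import io
import zipfile

# Build a small in-memory zip standing in for a zipped corpus.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("corpus/words.txt", "alpha\nbeta\n")

# Read the file straight from the archive, without unzipping to disk.
with zipfile.ZipFile(buf) as zf:
    with zf.open("corpus/words.txt") as f:
        words = f.read().decode("utf-8").split()

print(words)  # ['alpha', 'beta']
```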
Thanks @goodmami, I have corrected the formulation, since I don't want to imply that something is wrong with Wn. On the other hand, there is a problem in Wn, due...
Thanks @goodmami, yes, your explanations help a lot indeed. Concerning my specific problem, i.e. obtaining translations for synsets that should have one according to the CILI, but had none when...
As @goodmami wrote:
> what if you translate in the other direction where the single ILI is "split" into two?

Yes, the inverse problem is that currently, when translating in...
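The loss can be illustrated with a toy mapping (purely illustrative identifiers; this is not the wn library's API): when two synsets are merged into one ILI, the retired identifier no longer links to anything on the target side, so a plain lookup loses the translation.

```python
# Toy translation through ILI identifiers (illustrative data, not the wn library's API).
en_to_ili = {"en-1": "i100", "en-2": "i101"}  # i101 was retired when two synsets merged
ili_to_fr = {"i100": "fr-1"}                  # the French wordnet only maps live ILIs

def translate(en_synset):
    """Map an English synset to a French one through its ILI, or None if the link is lost."""
    return ili_to_fr.get(en_to_ili.get(en_synset))

print(translate("en-1"))  # fr-1
print(translate("en-2"))  # None: the merge broke the ILI link, so the translation is lost
```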
There are relatively few (around 30) merged synsets between each English Wordnet version, so losing 30 synsets in translation may not seem like a huge problem. However, it is not solved...