Adriane Boyd
The difference is that it's no longer looking up word forms in a table, so it's not just based on an entry keyed by the POS or the word form...
That does sound like a bug, but I can't reproduce it from the info above. When you disable the parser and then try to use `noun_chunks`, you should get this...
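For reference, a minimal sketch of the expected behavior (assuming spaCy v3 is installed): with no dependency parse available, accessing `doc.noun_chunks` raises a `ValueError` rather than failing silently. A blank pipeline is used here instead of disabling the parser on a trained model, but the effect on `noun_chunks` is the same.

```python
import spacy

# Tokenizer-only pipeline: no parser, so no dependency annotation.
nlp = spacy.blank("en")
doc = nlp("The quick brown fox jumps over the lazy dog.")

try:
    list(doc.noun_chunks)
except ValueError as err:
    # noun_chunks requires the dependency parse, so this is expected.
    print(type(err).__name__)
```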
Ah, I had assumed that nothing had changed with noun chunks, and tested with v3.3.0, which does work. This is related to #10003 and has to do with rules in...
We really appreciate the feedback about the experimental components! I don't think it's intentional to support sentences with multiple roots. The spaCy `Doc` object can only support one head per...
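To illustrate the constraint, here is a toy sketch (not spaCy's internals) of the single-head representation: each token stores exactly one head index, and a root is a token that is its own head. A sentence with two roots simply can't be encoded as a single well-formed tree in this scheme.

```python
def find_roots(heads):
    """Return indices of tokens that are their own head (the roots).

    `heads` is a list where heads[i] is the index of token i's head,
    mirroring how each spaCy token has exactly one head.
    """
    return [i for i, h in enumerate(heads) if h == i]

# "She eats pizza": "eats" (index 1) heads both other tokens.
print(find_roots([1, 1, 1]))        # one root -> a valid single tree
print(find_roots([0, 0, 3, 3]))     # two roots -> not one tree
```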
It's useful to have this example in the issue tracker in case anyone runs into the same problem, but I don't think we want to add more complicated handling when...
Edited: sorry, this was intended to be a comment on the PR instead of the issue. I worry that this may be too breaking for v3. In general I do...
The lookup tables, while sometimes better than nothing, are pretty terrible. They don't take any context into account and are very unpredictable / brittle. Many adjectives ending in `-e` are...
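A tiny hypothetical lookup table makes the brittleness concrete: with one lemma per form and no context, an ambiguous form like "left" gets the same lemma whether it's the verb ("she left") or the adjective ("the left side").

```python
# Hypothetical miniature lookup table, one entry per surface form.
LOOKUP = {"left": "leave", "ate": "eat", "better": "good"}

def lookup_lemmatize(tokens):
    """Context-free lemmatization: unknown forms pass through unchanged."""
    return [LOOKUP.get(tok, tok) for tok in tokens]

print(lookup_lemmatize(["she", "left"]))          # verb: correct
print(lookup_lemmatize(["the", "left", "side"]))  # adjective: still "leave"
```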
A PR for this would be great! You might want to get in touch with Guadalupe Romero (@guadi1994), who has started working on this for Spanish and German. The rule-based...
Yes, we're hoping to be able to include the edit tree lemmatizer in an upcoming release (probably v3.3). There are still cases where a lookup table can make sense, so...
Can you pinpoint which value in `nlp.meta` is `NaN` by printing/inspecting it right before the error? It would be right before this line: https://github.com/explosion/spaCy/blob/deb143fa709461ea6b8fddd17006908f7bea7f55/spacy/language.py#L1979
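If it helps, here is a hypothetical helper (not part of spaCy) for walking a nested dict like `nlp.meta` and reporting the path of every `NaN` value:

```python
import math

def find_nans(obj, path=""):
    """Yield dotted paths to every NaN float in a nested dict/list."""
    if isinstance(obj, dict):
        for key, val in obj.items():
            yield from find_nans(val, f"{path}.{key}" if path else str(key))
    elif isinstance(obj, (list, tuple)):
        for i, val in enumerate(obj):
            yield from find_nans(val, f"{path}[{i}]")
    elif isinstance(obj, float) and math.isnan(obj):
        yield path

# Toy stand-in for nlp.meta with one bad value:
meta = {"performance": {"ents_f": float("nan"), "tag_acc": 0.97}}
print(list(find_nans(meta)))  # -> ['performance.ents_f']
```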