PyTorch-NLP

Basic Utilities for PyTorch Natural Language Processing (NLP)

24 PyTorch-NLP issues

There is a small typo in torchnlp/encoders/text/subword_text_tokenizer.py: it should read `expressed` rather than `experessed`. Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

Issues #105 and #120: pass the language as an explicit argument rather than through kwargs, setting the default language value to 'en_core_web_sm'.
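A minimal usage sketch of the change described above (the `language` argument and its 'en_core_web_sm' default are taken from the issue text and not verified against a released version):

```python
from torchnlp.encoders.text import SpacyEncoder

# With the change described above, the spaCy model is selected via an explicit
# `language` argument and falls back to 'en_core_web_sm' when omitted.
encoder = SpacyEncoder(["This is a sentence."], language='en_core_web_sm')
print(encoder.encode("This is a sentence."))
```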

I wanted to install this package in Anaconda with "conda install torchnlp", but it failed with a "PackagesNotFoundError". How can I install it in Anaconda?
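PyTorch-NLP is not on the default conda channels, so a common workaround (assuming the PyPI package name `pytorch-nlp` from the project README) is to install it with pip inside the activated conda environment:

```shell
# Run inside the activated conda environment; installs from PyPI rather than conda.
pip install pytorch-nlp
```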

## Expected Behavior
- I tried to follow the wmt14 dataset example from the PyTorch-NLP documentation (https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html).
- The WMT dataset downloads successfully.
## Actual Behavior
- wmt_dataset [DOWNLOAD_FAILED] occurs.
##...
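For reference, a minimal reproduction of the documented download call (adapted from the linked docs page; directory and filename defaults may differ between versions):

```python
from torchnlp.datasets import wmt_dataset

# Per the linked documentation, this downloads the WMT English-German data
# and returns the training split.
train = wmt_dataset(train=True)
print(train[:2])
```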

On v0.5.0, this code fails:
```python
from torchnlp.encoders.text import SpacyEncoder
encoder = SpacyEncoder([], language="fr")
```
with:
```shell
TypeError                                 Traceback (most recent call last)
in
      1 from torchnlp.encoders.text import SpacyEncoder
      2 ...
```

## Expected Behavior
```python
from torchnlp.encoders.text import SpacyEncoder
encoder = SpacyEncoder(["This ain't funny.", "Don't?"], language='en')
```
## Actual Behavior
```shell
TypeError: __init__() got an unexpected keyword argument 'language'
```

Hello, I was hoping to get some pointers on the query below. I want to apply gating to my inputs using attention such that only the important ones go forward...
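As a starting point, here is a minimal, hypothetical sketch of attention-style gating (not part of PyTorch-NLP): a learned score in [0, 1] scales each input vector so that unimportant positions are suppressed before the next layer.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Scores each position and scales it, letting only 'important' inputs pass through."""

    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, inputs):
        # inputs: (batch, seq_len, dim); sigmoid gives an independent gate per position,
        # whereas a softmax over seq_len would force the positions to compete.
        gate = torch.sigmoid(self.score(inputs))  # (batch, seq_len, 1)
        return inputs * gate

gated = AttentionGate(dim=8)(torch.randn(2, 5, 8))  # same shape as the input
```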

Tokenizer Reference: https://github.com/eladhoffer/seq2seq.pytorch/blob/master/seq2seq/tools/tokenizer.py

Labels: enhancement, help wanted, good first issue

Line: https://github.com/PetrochukM/PyTorch-NLP/blob/be1a08edbd898d30273b7062c9c16fcf7f788787/torchnlp/download.py#L126. Instead of `check_files`, we could check that a particular path exists. That would let us check for particular directories as well.
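A minimal sketch of that suggestion, using a hypothetical helper name (the real function and call sites in `download.py` may differ):

```python
import os

def _check_paths(*paths):
    """Return True only if every expected path exists, whether it is a file or a directory."""
    return all(os.path.exists(path) for path in paths)

# Usage sketch: verify both the downloaded archive and the directory it extracts to.
download_ok = _check_paths('data/wmt16_en_de.tar.gz', 'data/wmt16_en_de')
```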

What if I want to use my own pretrained fastText model (or even the Common Crawl model instead of the standard wiki one)? E.g. look at what they publish now: https://fasttext.cc/docs/en/crawl-vectors.html. The current FastText impl ```...
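As a library-agnostic workaround sketch (assuming a plain-text `.vec` file such as the Common Crawl vectors linked above; this is not the current PyTorch-NLP API), custom vectors can be read into tensors manually:

```python
import torch

def load_vec_file(path, limit=None):
    """Load a fastText .vec text file into a {token: tensor} dict.

    The first line holds '<vocab_size> <dim>'; each following line is a token
    followed by its vector components, separated by spaces.
    """
    vectors = {}
    with open(path, encoding='utf-8') as f:
        next(f)  # skip the header line
        for i, line in enumerate(f):
            if limit is not None and i >= limit:
                break
            token, *values = line.rstrip().split(' ')
            vectors[token] = torch.tensor([float(v) for v in values])
    return vectors

# Hypothetical usage with a downloaded Common Crawl model (file name is an assumption).
# embeddings = load_vec_file('cc.en.300.vec', limit=50000)
```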

Labels: enhancement, help wanted, good first issue