
Offline installation

Open w3aryb0arpig opened this issue 3 years ago • 14 comments

I am trying to install EasyNMT on a semi-offline system. Python libraries may be installed, but accessing other URLs from this system is not permitted. Can you therefore advise on a way to manually install the Facebook models (m2m100_418M and m2m100_1.2B) so that EasyNMT can see them? I can see they can be manually downloaded from huggingface.co. If I download them on a second system, where should I then save them on the semi-offline Windows system on which I intend to test EasyNMT? Thanks

w3aryb0arpig avatar Oct 20 '21 11:10 w3aryb0arpig

Hi @w3aryb0arpig

Download all the files from the hub. I think the following code should then work:

from easynmt import EasyNMT, models
model = EasyNMT(translator=models.AutoModel("path/to/model"))

nreimers avatar Oct 20 '21 12:10 nreimers

@nreimers This works a treat, but I hit another issue with fasttext being called by the script:

    print("=> detected language:", MODEL.language_detection(sentence), "\n")
  File "C:\Program Files\Python38\lib\site-packages\easynmt\EasyNMT.py", line 413, in language_detection
    raise Exception("No method for automatic language detection was found. Please install at least one of the following: fasttext (pip install fasttext), langid (pip install langid), or langdetect (pip install langdetect)")
Exception: No method for automatic language detection was found. Please install at least one of the following: fasttext (pip install fasttext), langid (pip install langid), or langdetect (pip install langdetect)

I actually have separate code in my script that uses fasttext successfully, so it is installed. I think the issue is that, because this is an offline system, the lid.176.ftz file isn't in the path where the EasyNMT.py script is looking for it.

Looking at EasyNMT.py, the relevant segment appears to be:


        if self._fasttext_lang_id is None:
            import fasttext
            fasttext.FastText.eprint = lambda x: None   #Silence useless warning: https://github.com/facebookresearch/fastText/issues/1067
            model_path = os.path.join(self._cache_folder, 'lid.176.ftz')

So it would appear that _cache_folder is where the file should be stored. I can't work out where this _cache_folder is, though. I did find the code below but couldn't work out what it related to:


        if cache_folder is None:
            if 'EASYNMT_CACHE' in os.environ:
                cache_folder = os.environ['EASYNMT_CACHE']
            else:
                cache_folder = os.path.join(torch.hub._get_torch_home(), 'easynmt_v2')
        self._cache_folder = cache_folder

Any ideas @nreimers on where I can manually store lid.176.ftz on a Windows system?

w3aryb0arpig avatar Oct 20 '21 17:10 w3aryb0arpig

You can set the environment variable EASYNMT_CACHE to point to any folder you like.

Otherwise, when you call translate and pass source_lang, no automatic language detection happens. So you could wrap your translate function so that it does the language detection itself (if needed) and passes the result to model.translate.
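
That suggestion can be sketched roughly like this (the function name is a placeholder, not part of EasyNMT's API; `detect` is any callable returning an ISO language code, e.g. a wrapper around langid.classify):

```python
def translate_offline(model, text, target_lang, detect):
    """Detect the source language ourselves and pass it explicitly,
    so EasyNMT's internal detection (and its need for lid.176.ftz)
    is never triggered. `detect` is any callable text -> ISO code."""
    return model.translate(text, target_lang=target_lang,
                           source_lang=detect(text))
```

With source_lang always supplied, the language-identification model file is never needed.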

nreimers avatar Oct 20 '21 18:10 nreimers

Further to the above, I managed to address this by using the cache_folder parameter:

MODEL = EasyNMT(translator=models.AutoModel("models/facebook/m2m100_418M"), cache_folder='cached')

w3aryb0arpig avatar Oct 20 '21 19:10 w3aryb0arpig

Is there a way to download all models at once from the hub via any CLI, to a specific path?

Tortoise17 avatar Oct 29 '21 11:10 Tortoise17

@Tortoise17 You can use a standard git clone to get the model files (git-lfs is needed for the large weight files):

git clone https://huggingface.co/facebook/m2m100_1.2B

will download all files from https://huggingface.co/facebook/m2m100_1.2B

nreimers avatar Oct 29 '21 15:10 nreimers

Thank you. I hope it has no limitations on the amount of text used (except the size of the text chunk per CLI input).

Tortoise17 avatar Oct 29 '21 15:10 Tortoise17

Sadly I don't understand what you mean

nreimers avatar Oct 29 '21 17:10 nreimers

Is there any limit on using the model for translation when the transformer is used with an internet connection? By limit I mean characters per month or per day, or a per-day limit on CLI use?

Tortoise17 avatar Oct 29 '21 19:10 Tortoise17

No, there is no such limit

nreimers avatar Oct 29 '21 20:10 nreimers

> @Tortoise17 You can use a standard git clone to get the model files: `git clone https://huggingface.co/facebook/m2m100_1.2B`

Thank you. And how can I download all the Opus-MT models via the CLI?

Tortoise17 avatar Oct 31 '21 18:10 Tortoise17

You have to download each individually
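
The individual downloads can be scripted. This is a sketch: the `Helsinki-NLP/opus-mt-{src}-{tgt}` repo naming is the Hugging Face convention for these models, and git plus git-lfs must be installed for the weight files to be fetched:

```python
import subprocess

def opus_mt_url(src, tgt):
    """Build the Hugging Face repo URL for one Opus-MT language pair."""
    return f"https://huggingface.co/Helsinki-NLP/opus-mt-{src}-{tgt}"

def clone_opus_mt(pairs, dest="models"):
    """git-clone each requested (src, tgt) pair into dest/
    (requires git and git-lfs on the PATH)."""
    for src, tgt in pairs:
        subprocess.run(
            ["git", "clone", opus_mt_url(src, tgt),
             f"{dest}/opus-mt-{src}-{tgt}"],
            check=True)
```

For example, `clone_opus_mt([("en", "de"), ("de", "en")])` would fetch the two English/German models one at a time.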

nreimers avatar Nov 01 '21 08:11 nreimers

I am trying to keep the model on a local disk, as I have no space in the native cache location. I am trying the following; can you guide me on how to save this model locally and then load it from there?

I cloned the repo with git-lfs installed, then:

from easynmt import EasyNMT, models
model = EasyNMT(translator=models.AutoModel("/home/translate/m2m100_418M"), cache_folder='/home/translate/cached')

First it demanded an easynmt.json in the m2m100_418M folder.

If you can guide me once for the opus-mt and m2m100_418M models, I assume the procedure should be the same for both.

I am confused about where the mistake is.

Tortoise17 avatar Nov 01 '21 09:11 Tortoise17

> @nreimers this works a treat but I hit another issue with fasttext being called by the script [...] Any ideas @nreimers where I can store the lid.176.ftz on a Windows system manually?

Hi, I was getting the same issue. You can store the .ftz file in the same cache folder, after which there are two possible outcomes:

  1. Your model works
  2. You get the same error

For me it was completely random, so what I did was hardcode the path in model_path = os.path.join(self._cache_folder, 'lid.176.ftz'). That is obviously not the right way, but I also had a restricted machine and couldn't find a workaround with my limited knowledge.

Hope this helps
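
Instead of hardcoding, the default location can be reproduced from the resolution code quoted earlier in this thread. This is a sketch: the real code calls torch.hub._get_torch_home(), which is approximated here by the TORCH_HOME environment variable with torch's documented default of ~/.cache/torch:

```python
import os

def easynmt_cache_folder(environ=None):
    """Mirror EasyNMT's cache-folder resolution: EASYNMT_CACHE wins,
    otherwise <torch home>/easynmt_v2. lid.176.ftz belongs in the
    returned folder. (Sketch; torch's home is approximated via
    TORCH_HOME falling back to ~/.cache/torch.)"""
    environ = os.environ if environ is None else environ
    if 'EASYNMT_CACHE' in environ:
        return environ['EASYNMT_CACHE']
    torch_home = environ.get(
        'TORCH_HOME',
        os.path.join(os.path.expanduser('~'), '.cache', 'torch'))
    return os.path.join(torch_home, 'easynmt_v2')
```

Dropping lid.176.ftz into whatever folder this returns (or simply setting EASYNMT_CACHE, as suggested above) avoids editing the library source.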

ArchanGhosh avatar Dec 21 '21 05:12 ArchanGhosh