
Using stopes with an unseen language

sete-nay opened this issue 3 years ago • 5 comments

Hi, I'm trying to clean and preprocess bitext for finetuning NLLB on a new, unseen language. The source language is part of LASER3; the target language is not included. Will it work if I replace laser3 with a BPE encoder pre-trained on my target language? Thank you!

python -m stopes.pipelines.bitext.global_mining_pipeline src_lang=fuv tgt_lang=zul demo_dir=.../stopes-repo/demo +preset=demo output_dir=. embed_text=laser3

sete-nay avatar Nov 02 '22 12:11 sete-nay

The laser3 encoder projects your text into a language-independent embedding space. Mining works by aligning projections of the src_lang sentences in that space with projections of the tgt_lang sentences. This works because both sides are projected into the same language-independent space, so we can compute a distance between the embeddings of each sentence.

If you use a different encoder, it will probably not project into a compatible space.
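To make the alignment idea above concrete, here is a toy sketch: random vectors stand in for the LASER3 embeddings (the dimensions and counts are illustrative only, not what the real pipeline uses), and a cosine-similarity matrix between the two sides yields the best-matching candidate for each source sentence. The actual pipeline does this at much larger scale with approximate nearest-neighbour search.

```python
import numpy as np

# Toy stand-ins for LASER3 sentence embeddings: in reality these would be
# vectors produced by the *same* encoder for both languages.
rng = np.random.default_rng(0)
src_emb = rng.normal(size=(3, 8))   # 3 source-language sentences
tgt_emb = rng.normal(size=(5, 8))   # 5 target-language sentences

# Normalize rows so that dot products are cosine similarities.
src_emb /= np.linalg.norm(src_emb, axis=1, keepdims=True)
tgt_emb /= np.linalg.norm(tgt_emb, axis=1, keepdims=True)

# Similarity of every source sentence to every target sentence.
sim = src_emb @ tgt_emb.T           # shape (3, 5)

# For each source sentence, the index of its best-matching target candidate.
best_tgt = sim.argmax(axis=1)
```

If the two sides were embedded with different, incompatible encoders, these similarities would be meaningless, which is why swapping in an unrelated BPE encoder for one side will not work.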

Mortimerp9 avatar Nov 02 '22 12:11 Mortimerp9

Thanks, will try it with laser3. What should I indicate in tgt_lang for the unseen language?

sete-nay avatar Nov 02 '22 13:11 sete-nay

What should I indicate in tgt_lang for the unseen language?

You can assign any name you want to the new language. If this name is abc, then you will need to indicate tgt_lang=abc in the entry command.

Also, you need to make sure that the mining config correctly specifies how to find the source files for that language. If you are using the demo config (+preset=demo in your command, which corresponds to this configuration), you will need the following two files:

  1. $demo_dir/abc.gz with the source text in your language.
  2. $demo_dir/abc.nl with the number of lines of the file above.
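The two files above can be produced with a few lines of Python (the directory name, the language code abc, and the sample sentences are placeholders; substitute your own data):

```python
import gzip
from pathlib import Path

demo_dir = Path("demo")          # placeholder for your $demo_dir
demo_dir.mkdir(exist_ok=True)

# A few monolingual sentences in the new language, here named "abc".
sentences = ["sentence one", "sentence two", "sentence three"]

# 1. $demo_dir/abc.gz: gzip-compressed text, one sentence per line.
with gzip.open(demo_dir / "abc.gz", "wt", encoding="utf-8") as f:
    f.write("\n".join(sentences) + "\n")

# 2. $demo_dir/abc.nl: the number of lines in the file above.
(demo_dir / "abc.nl").write_text(f"{len(sentences)}\n")
```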

Finally, you will need to add the path to your custom encoder (and its vocabulary, if it is also custom) to the lang_configs part of the demo config.
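As a purely hypothetical sketch of that lang_configs entry (the exact field names depend on the stopes version, so mirror the entries already present in the demo config rather than copying these keys verbatim):

```yaml
# Hypothetical sketch -- check the demo config for the real schema.
lang_configs:
  abc:
    encoder_model: /path/to/custom_encoder.pt
    spm_model: /path/to/custom_vocab.spm
```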

avidale avatar Nov 02 '22 14:11 avidale


Hi @sete-nay, out of curiosity, what is your tgt_lang? LASER3 and LASER2 together cover over 200 languages. If the target language isn't covered by LASER3, it may be included in LASER2; you can find the list of supported languages for LASER2 here. If it's not in either of them, you could even try to create your own LASER3 encoder and mine using that. The training code to do so is here.

heffernankevin avatar Nov 02 '22 14:11 heffernankevin

Hi @heffernankevin, my tgt_lang is Circassian (Kabardian), which unfortunately is not part of LASER2 or LASER3. Thanks for the hint; I will look into LASER encoder training, or otherwise just use a simpler tool. My goal is to create a parallel corpus that can be used for finetuning NLLB or another multilingual model on Circassian.

sete-nay avatar Nov 02 '22 14:11 sete-nay