
Prepare new data for NLLB-200

ibtiRaj opened this issue 2 years ago • 9 comments

Hi, I'm trying to fine-tune the NLLB-200 model on new bilingual data, so I need to prepare my data with the prepare_data pipeline: https://github.com/facebookresearch/stopes/tree/main/stopes/pipelines/prepare_data. Here are my config files:

[image: config file]

[image: config file]

[image: config file]

My output directory is the following:

[image: output directory listing]

But I encountered a problem when fine-tuning NLLB-200:

File "/home/admin/khadija/fairseq/slurm_snapshot_code/2023-02-08T14_51_26.242208/fairseq/data/dictionary.py", line 238, in add_from_file
    with open(PathManager.get_local_path(f), "r", encoding="utf-8") as fd:
FileNotFoundError: [Errno 2] No such file or directory: '/home/admin/khadija/prepare_data_output/data_bin/shard000/dict.ary_Arab.txt'
srun: error: slurmnode1: tasks 0-2: Exited with exit code 1
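A quick way to diagnose a `FileNotFoundError` like this is to verify, before launching fairseq, that every shard directory actually contains a `dict.<lang>.txt` for each language. A minimal sketch (the paths, languages, and shard count below are placeholders, not values from the thread):

```python
import os

def check_shard_dicts(data_bin, langs, num_shards):
    """Return the dict.<lang>.txt paths that are missing from the
    shard directories fairseq will read at training time."""
    missing = []
    for i in range(num_shards):
        shard = os.path.join(data_bin, f"shard{i:03d}")
        for lang in langs:
            path = os.path.join(shard, f"dict.{lang}.txt")
            if not os.path.isfile(path):
                missing.append(path)
    return missing

# Hypothetical usage; substitute your own output directory and languages:
# check_shard_dicts("/home/admin/khadija/prepare_data_output/data_bin",
#                   ["ary_Arab", "eng_Latn"], num_shards=1)
```

If the list is non-empty, the problem is in the data preparation step rather than in fairseq itself.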

Is Fairseq compatible with the new version of Stopes? @Mortimerp9 @kauterry @gwenzek Can you help me please?

ibtiRaj avatar Feb 09 '23 11:02 ibtiRaj

@ibtiRaj have you solved your problem?

robotsp avatar Feb 20 '23 08:02 robotsp

@robotsp No, I didn't, I'm sorry.

ibtiRaj avatar Feb 20 '23 09:02 ibtiRaj

> @robotsp No, I didn't, I'm sorry.

No worries. BTW, may I ask: are the model file and vocab file in your configs the same as the original ones from NLLB? I just downloaded one from https://github.com/facebookresearch/fairseq/tree/nllb, but its vocabulary size is 255997, which is different from your 256200. I wonder why? @ibtiRaj

robotsp avatar Feb 20 '23 09:02 robotsp

@robotsp Yes, you are right, the vocabulary size is 255997, but when I run the fine-tuning I get a vocabulary size mismatch error:

[image: vocabulary size mismatch error]

That's why I thought of adding 200 tokens to the original vocabulary.
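For context: a fairseq dictionary file is plain text with one `token count` pair per line, and fairseq prepends four special symbols (`<s>`, `<pad>`, `</s>`, `<unk>`) when loading it, so the effective vocabulary is larger than the file's line count. One common workaround for a size mismatch is to pad the dictionary file with placeholder entries until it matches the checkpoint's embedding size. A hedged sketch (the correct `target_size` depends on your checkpoint; verify it against the mismatch error before using this):

```python
def pad_fairseq_dict(dict_in, dict_out, target_size, num_specials=4):
    """Append placeholder entries so that the number of dictionary
    entries plus fairseq's built-in specials equals target_size."""
    with open(dict_in, encoding="utf-8") as f:
        lines = [line for line in f if line.strip()]
    needed = target_size - num_specials - len(lines)
    if needed < 0:
        raise ValueError("dictionary is already larger than target_size")
    with open(dict_out, "w", encoding="utf-8") as f:
        f.writelines(lines)
        for i in range(needed):
            # fairseq names its own padding entries madeupwordNNNN
            f.write(f"madeupword{i:04d} 1\n")
```

This only makes the shapes agree; the padded rows are never produced by the tokenizer, which is why appending dummy tokens is generally harmless.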

ibtiRaj avatar Feb 21 '23 09:02 ibtiRaj

Hi @ibtiRaj! The stopes/pipelines/prepare_data pipeline has been completely refactored. Could you pull the latest version of the code and change your config format to be compatible with the new code? Here is the README explaining how to write a prepare_data config: https://github.com/facebookresearch/stopes/tree/main/stopes/pipelines/prepare_data

Once you re-prepare your data with the latest code and the changed config, let me know if you still face any issues.

kauterry avatar Feb 21 '23 18:02 kauterry

@kauterry Would you please have a look at https://github.com/facebookresearch/fairseq/issues/4989? I prepared my data with the latest stopes code and the updated config, but came across a new issue: "Can't instantiate abstract class TrainModule with abstract methods requirements".

robotsp avatar Feb 22 '23 04:02 robotsp

Hi @kauterry , thank you for your answer.

When I prepare my data with the new version of stopes, I always get two errors:

  • The first one is the same as in this issue https://github.com/facebookresearch/fairseq/issues/4989. I solved this error by using the old version of stopes.
  • The second is the following :

[image: second error]

What do you think?

And what about the mismatch error: is it true that 200 new tokens can be added to the original vocabulary?

ibtiRaj avatar Feb 22 '23 08:02 ibtiRaj

I can't find the nllb module in fairseq/examples of version 0.12.1, which is the version recommended by the new version of Stopes (https://github.com/facebookresearch/stopes/tree/main). But when I reinstalled the nllb version of fairseq, conflicts between hydra-core and fairseq occurred. I think this is the root cause. Do you know why? @kauterry @ibtiRaj
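When chasing this kind of dependency conflict, it helps to record which versions are actually installed in the environment, since fairseq and stopes may pin incompatible hydra-core releases. A small sketch using only the standard library:

```python
# Report installed versions of the packages involved in the conflict.
import importlib.metadata as md

def report_versions(packages=("fairseq", "hydra-core", "omegaconf", "stopes")):
    """Map each package name to its installed version, or a marker
    string if it is absent from the current environment."""
    out = {}
    for pkg in packages:
        try:
            out[pkg] = md.version(pkg)
        except md.PackageNotFoundError:
            out[pkg] = "not installed"
    return out

for name, ver in report_versions().items():
    print(f"{name}: {ver}")
```

Pasting this output into an issue report makes version conflicts much easier for maintainers to reproduce.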

robotsp avatar Feb 22 '23 12:02 robotsp

Hi @robotsp, I solved the problem by following the NLLB installation guide here: https://github.com/facebookresearch/fairseq/blob/nllb/INSTALL.md.

ibtiRaj avatar Feb 22 '23 14:02 ibtiRaj