Add support for huggingface ASR models in `hg` recipes
What does this PR do? Please describe:
This PR builds upon #740 by integrating Hugging Face ASR models with the `hg` evaluation CLI interface. I have added a `ModelConfig` and used hydra-core to dynamically load the relevant classes.
Note: This PR depends on some changes that were made in #749. This PR's contributions start from commit d80d154354b4d9fbbacac9955ab3360018c45fdd.
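For reference, here is a minimal sketch of the idea, assuming a dataclass-style `ModelConfig` and hydra's `get_class` for the dynamic class loading; the field names mirror the CLI overrides in the example below, while `load_model_and_processor` is a hypothetical helper name, not necessarily what this PR uses:

```python
# A minimal sketch, not the exact code in this PR: fields mirror the CLI
# overrides below; `load_model_and_processor` is a hypothetical helper.
from dataclasses import dataclass

from hydra.utils import get_class


@dataclass
class ModelConfig:
    """Selects a Hugging Face model/processor pair and its dtype."""

    model_name: str = "openai/whisper-tiny.en"
    model_class: str = "WhisperForConditionalGeneration"
    tokenizer_name: str = "openai/whisper-tiny.en"
    tokenizer_class: str = "WhisperProcessor"
    dtype: str = "torch.float32"


def load_model_and_processor(cfg: ModelConfig):
    # Resolve the class names inside the `transformers` namespace, then
    # download the pretrained weights/processor from the Hub.
    model_cls = get_class(f"transformers.{cfg.model_class}")
    processor_cls = get_class(f"transformers.{cfg.tokenizer_class}")

    model = model_cls.from_pretrained(cfg.model_name)
    processor = processor_cls.from_pretrained(cfg.tokenizer_name)

    return model, processor
```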
For example:
- Loading a Whisper model:

```sh
$ fairseq2 hg asr /tmp/fairseq2 --config max_samples=2 \
    model_config.model_name=openai/whisper-tiny.en \
    model_config.dtype=torch.float32 \
    model_config.model_class=WhisperForConditionalGeneration \
    model_config.tokenizer_class=WhisperProcessor \
    model_config.tokenizer_name=openai/whisper-tiny.en
...
[08/16/24 20:28:09] INFO fairseq2.recipes.hg.evaluator - Eval Metrics - BLEU: 0.577036 | Elapsed Time: 1s | Wall Time: 2s | brevity_penalty: 1.0 | length_ratio: 1.15 | precisions: [0.7391304347826086, 0.6363636363636364, 0.5238095238095238, 0.45] | reference_length: 40 | translation_length: 46
                    INFO fairseq2.recipes.hg.evaluator - Evaluation complete in 2 seconds
```
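For context, the extra fields in the log line (`brevity_penalty`, `length_ratio`, `precisions`, `reference_length`, `translation_length`) match the output of the Hugging Face `evaluate` BLEU metric; assuming the evaluator reports a comparable metric, the same fields can be reproduced standalone:

```python
# Standalone illustration of the BLEU fields reported above (assumption:
# the hg evaluator reports a metric compatible with Hugging Face `evaluate`).
import evaluate

bleu = evaluate.load("bleu")

predictions = ["the cat sat on the mat"]
references = [["the cat sat on a mat"]]

result = bleu.compute(predictions=predictions, references=references)
print(result)
# -> {'bleu': ..., 'precisions': [...], 'brevity_penalty': ...,
#     'length_ratio': ..., 'translation_length': ..., 'reference_length': ...}
```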
Does your PR introduce any breaking changes? If yes, please list them: None.
Check list:
- [x] Was the content of this PR discussed and approved via a GitHub issue? (no need for typos or documentation improvements)
- [x] Did you read the contributor guideline?
- [x] Did you make sure that your PR does only one thing instead of bundling different changes together?
- [x] Did you make sure to update the documentation with your changes? (if necessary)
- [x] Did you write any new necessary tests?
- [x] Did you verify new and existing tests pass locally with your changes?
- [x] Did you update the CHANGELOG? (no need for typos, documentation, or minor internal changes)
Looks like this PR also includes the code changes of #749. One fundamental question though: if we are using an HG dataset and evaluating HG models in this recipe, what is the benefit of using fairseq2 instead of just HG eval?
This work is intended to create a CLI evaluation interface (originally named `eval`, later renamed to `hg`) for evaluating both fairseq2 models (primarily) and Hugging Face models using Hugging Face datasets and metrics. The interface is designed for downstream model evaluation rather than for ongoing model training.
Support for more models, tasks, and metrics (such as BLASER) will come in future PRs.