
translation evaluation error


Hey, I'm trying to evaluate bloom-1b7 on a translation task with this command:

    python main.py --model_api_name hf-causal --model_args pretrained=bigscience/bloom-1b7 --task_name flores_101_mt_fewshot_en2bn --device cuda:1

But I got this error:

Traceback (most recent call last):
  File "/mnt/cephfs2/projects/dialogue/LLM/lm-evaluation-harness-ml/main.py", line 215, in <module>
    main()
  File "/mnt/cephfs2/projects/dialogue/LLM/lm-evaluation-harness-ml/main.py", line 197, in main
    results = evaluator.cli_evaluate(**evaluate_args)
  File "/mnt/cephfs2/projects/dialogue/LLM/lm-evaluation-harness-ml/lm_eval/evaluator.py", line 77, in cli_evaluate
    tasks = lm_eval.tasks.get_task_list_from_args_string(
  File "/mnt/cephfs2/projects/dialogue/LLM/lm-evaluation-harness-ml/lm_eval/tasks/__init__.py", line 327, in get_task_list_from_args_string
    return get_task_list(task_name, template_names, **kwargs)
  File "/mnt/cephfs2/projects/dialogue/LLM/lm-evaluation-harness-ml/lm_eval/tasks/__init__.py", line 281, in get_task_list
    assert template_names, "Must specify at least one template name"
AssertionError: Must specify at least one template name
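
If I'm reading the fork's README correctly, template names are meant to be passed through a separate --template_names flag next to --task_name, so presumably the invocation should look something like the line below (the flag name is my reading of the README, and the template name is just a placeholder, since I don't know a valid one):

    python main.py --model_api_name hf-causal --model_args pretrained=bigscience/bloom-1b7 --task_name flores_101_mt_fewshot_en2bn --template_names '<some_template>' --device cuda:1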

So, to get a proper template name, I tried the lm_eval.list_templates() function provided in the README:

>>> import lm_eval
>>> lm_eval.list_templates("flores_101_mt_fewshot_en2bn")
WARNING:root:Tried instantiating `DatasetTemplates` for gsarti/flores_101/all, but no prompts found. Please ignore this warning if you are creating new prompts for this dataset.
[]

It returns an empty list. Where can I get a template for the flores_101_mt_fewshot_en2bn task?
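
For what it's worth, the warning above suggests the templates are resolved through promptsource's DatasetTemplates, so I also sketched a direct check (assuming the promptsource API this fork depends on; "gsarti/flores_101" with subset "all" is the dataset path named in the warning):

    from promptsource.templates import DatasetTemplates

    # Look up whatever prompts ship for the FLORES-101 dataset this task uses
    templates = DatasetTemplates("gsarti/flores_101", "all")
    print(templates.all_template_names)  # presumably empty, matching the warning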

laozhanghahaha · Apr 24 '23, 09:04