AttributeError: module 'lm_eval.tasks' has no attribute 'ALL_TASKS'
I ran into the following error:
Traceback (most recent call last):
  File "/mnt/sharedata/hdd/zhouxn/wanda/main.py", line 112, in <module>
    main()
  File "/mnt/sharedata/hdd/zhouxn/wanda/main.py", line 102, in main
    results = eval_zero_shot(args.model, model, tokenizer, task_list, num_shot, accelerate)
  File "/mnt/sharedata/hdd/zhouxn/wanda/lib/eval.py", line 142, in eval_zero_shot
    task_names = pattern_match(task_list, tasks.ALL_TASKS)
AttributeError: module 'lm_eval.tasks' has no attribute 'ALL_TASKS'
This happens when running eval_zero_shot. Could you share the version of the lm_eval library you used, or tell me how to fix this?
Hello, have you solved this yet? If so, could you share the fix? Thank you!
I think it is an lm_eval version problem. I modified the code as follows, but then ran into other problems:
import fnmatch

def eval_zero_shot(model_name, model, tokenizer,
                   task_list=["boolq", "rte", "hellaswag", "winogrande", "arc_challenge", "arc_easy", "openbookqa"],
                   num_fewshot=0, use_accelerate=False, add_special_tokens=False):
    from lm_eval import tasks, evaluator
    from lm_eval.tasks import TaskManager  # newer lm_eval keeps the task registry here
    tm = TaskManager()

    def pattern_match(patterns, source_list):
        # Expand glob patterns against the available task names
        task_names = set()
        for pattern in patterns:
            for matching in fnmatch.filter(source_list, pattern):
                task_names.add(matching)
        return list(task_names)

    task_names = pattern_match(task_list, tm.all_tasks)  # replaces the removed tasks.ALL_TASKS
    model_args = f"pretrained={model_name},cache_dir=./llm_weights"
    limit = None
    if "70b" in model_name or "65b" in model_name:
        limit = 2000
    if use_accelerate:
        model_args = f"pretrained={model_name},cache_dir=./llm_weights,use_accelerate=True"
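For anyone hitting the same error: as far as I can tell, newer versions of lm-eval-harness removed the module-level tasks.ALL_TASKS registry in favor of a TaskManager object, so one option is a small compatibility helper that tries the new API and falls back to the old one. This is only a sketch under that assumption (get_all_task_names is my own name, not part of either library); the pattern_match part is plain fnmatch globbing and works the same in both versions:

```python
import fnmatch

def pattern_match(patterns, source_list):
    # Expand glob patterns (e.g. "arc_*") against the available task
    # names, deduplicating matches via a set.
    matched = set()
    for pattern in patterns:
        matched.update(fnmatch.filter(source_list, pattern))
    return sorted(matched)

def get_all_task_names():
    # Newer lm_eval exposes the registry on TaskManager; older
    # releases exposed tasks.ALL_TASKS directly. Try new, fall
    # back to old so the same code runs against either version.
    try:
        from lm_eval.tasks import TaskManager  # new-style API
        return TaskManager().all_tasks
    except ImportError:
        from lm_eval import tasks  # old-style API
        return tasks.ALL_TASKS
```

With this in place, the call in eval_zero_shot becomes `task_names = pattern_match(task_list, get_all_task_names())` regardless of which harness version is installed.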