Upgrade typing_extensions version
System Info
PyTorch: 2.3, CUDA: 12.1
Information
- [ ] The official example scripts
- [ ] My own modified scripts
🐛 Describe the bug
I got an error when I ran the command generated by `python prepare_meta_eval.py --config_path ./eval_config.yaml`. The root cause is that typing_extensions is pinned to 4.8.0, while vllm requires typing_extensions >= 4.10.
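As a local workaround (a sketch only; the proper fix is relaxing the pin in llama-recipes' requirements, and this may conflict with the declared dependency), upgrading the package in the active environment should restore the missing symbol:

```
# Workaround sketch: force typing_extensions past the 4.8.0 pin.
# Note: this may conflict with llama-recipes' pinned requirement.
pip install --upgrade "typing_extensions>=4.10.0"
```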
Error logs
```
lm_eval --model vllm --model_args pretrained=meta-llama/Meta-Llama-3.1-8B-Instruct,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=4,max_model_len=8192,add_bos_token=True,seed=42 --tasks meta_instruct --batch_size auto --output_path eval_results --include_path /home/ubuntu/llama-recipes/tools/benchmarks/llm_eval_harness/meta_eval_reproduce/work_dir --seed 42 --log_samples
```

fails with:

```
cannot import name 'TypeIs' from 'typing_extensions'
```
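`TypeIs` was added in typing_extensions 4.10.0, so the import fails on the pinned 4.8.0. The failure can be reproduced without lm_eval:

```
# Minimal reproduction: succeeds on typing_extensions >= 4.10.0,
# raises ImportError on the pinned 4.8.0.
python -c "from typing_extensions import TypeIs"
```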
Expected behavior
The following command should run successfully:

```
lm_eval --model vllm --model_args pretrained=meta-llama/Meta-Llama-3.1-8B-Instruct,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=4,max_model_len=8192,add_bos_token=True,seed=42 --tasks meta_instruct --batch_size auto --output_path eval_results --include_path /home/ubuntu/llama-recipes/tools/benchmarks/llm_eval_harness/meta_eval_reproduce/work_dir --seed 42 --log_samples
```