meta-llama/Meta-Llama-3-8B-Instruct evaluation results are not consistent with Hugging Face's official results
Prerequisite
Type of issue
I am evaluating with officially supported tasks/models/datasets.
Environment
python -c "import opencompass.utils;import pprint;pprint.pprint(dict(opencompass.utils.collect_env()))"

{'CUDA available': True,
 'CUDA_HOME': '/usr/local/cuda',
 'GCC': 'gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0',
 'GPU 0,1,2,3': 'NVIDIA GeForce RTX 4090',
 'MMEngine': '0.10.4',
 'MUSA available': False,
 'NVCC': 'Cuda compilation tools, release 12.4, V12.4.131',
 'OpenCV': '4.10.0',
 'PyTorch': '2.2.2+cu121',
 'PyTorch compiling details': 'PyTorch built with:
   - GCC 9.3
   - C++ Version: 201703
   - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
   - Intel(R) MKL-DNN v3.3.2 (Git Hash 2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)
   - OpenMP 201511 (a.k.a. OpenMP 4.5)
   - LAPACK is enabled (usually provided by MKL)
   - NNPACK is enabled
   - CPU capability usage: AVX2
   - CUDA Runtime 12.1
   - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
   - CuDNN 8.9.2
   - Magma 2.6.1
   - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.2.2, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,',
 'Python': '3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0]',
 'TorchVision': '0.17.2+cu121',
 'numpy_random_seed': 2147483648,
 'opencompass': '0.2.5+',
 'sys.platform': 'linux'}
Reproduces the problem - code/configuration sample
The Meta-Llama-3-8B-Instruct evaluation results are not consistent with Hugging Face's official results, especially on gsm8k_gen and winogrande_gen.
The scripts are as follows:
CUDA_VISIBLE_DEVICES=1 python run.py --datasets winogrande_gen --hf-type chat --hf-path /home/ubuntu/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/c4a54320a52ed5f88b7a2f84496903ea4ff07b45/ --hf-num-gpus 1 --max-seq-len 2048 --max-out-len 256 -a lmdeploy -r --dump-eval-details --model-kwargs device_map='auto' trust_remote_code=True temperature=0.01 do_sample=True
CUDA_VISIBLE_DEVICES=1,2 python run.py --datasets gsm8k_gen --hf-type chat --hf-path /home/ubuntu/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/c4a54320a52ed5f88b7a2f84496903ea4ff07b45/ --hf-num-gpus 2 --max-seq-len 2048 --max-out-len 256 -a lmdeploy -r --dump-eval-details --model-kwargs device_map='auto' trust_remote_code=True
The results:
winogrande_gen: Meta-Llama-3-8B-Instruct, 63.69
gsm8k_gen: Meta-Llama-3-8B-Instruct, 79.00
Hugging Face's official results:
winogrande: Meta-Llama-3-8B-Instruct, 68
GSM8K: Meta-Llama-3-8B-Instruct, 74
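To make the size of the discrepancy explicit, here is a quick sketch that computes the gap between the two sets of numbers reported above (the scores are just the figures from this issue, nothing re-measured):

```python
# Scores as reported above: this OpenCompass run vs. Hugging Face's published numbers.
opencompass_scores = {"winogrande_gen": 63.69, "gsm8k_gen": 79.00}
huggingface_scores = {"winogrande_gen": 68.00, "gsm8k_gen": 74.00}

# Positive delta: OpenCompass scored higher; negative: lower than the official number.
deltas = {
    task: round(opencompass_scores[task] - huggingface_scores[task], 2)
    for task in opencompass_scores
}

for task, delta in deltas.items():
    print(f"{task}: OpenCompass {opencompass_scores[task]:.2f} "
          f"vs HF {huggingface_scores[task]:.2f} (delta {delta:+.2f})")
```

So the run is about 4 points below the official winogrande number and about 5 points above the official GSM8K number, i.e. the discrepancy goes in both directions rather than being a uniform offset.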
Reproduces the problem - command or script
See the commands above.
Reproduces the problem - error message
None.
Other information
No response