intel-extension-for-transformers
core dumped
I tried to use intel-extension-for-transformers to run inference on the Qwen-7B model and hit the error below. I can't fix it; please help analyze it.

```
......................................................................................
model_init_from_file: support_bestla_kv = 0
model_init_from_file: kv self size = 256.00 MB
Once upon a time, there existed a little
NE_ASSERT: /root/w0/workspace/neuralspeed-wheel-build/nlp_repo/neural_speed/core/ne_layers.c:2651: ne_nelements(a) == ne0 * ne1 * ne2
Aborted (core dumped)
```
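For reference, here is a minimal sketch of the kind of script that triggers this path, assuming the Neural Speed backend with 4-bit weight-only quantization (the component whose `ne_layers.c` assertion fires above). The model id `Qwen/Qwen-7B-Chat`, the prompt, and the `load_in_4bit` flag are assumptions, since the original script was not included in the report.

```python
# Hypothetical reproduction sketch -- the exact script was not posted.
from transformers import AutoTokenizer, TextStreamer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "Qwen/Qwen-7B-Chat"  # assumed model id
prompt = "Once upon a time, there existed a little girl,"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)

# load_in_4bit routes generation through the Neural Speed runtime,
# which prints the model_init_from_file lines seen in the log above.
model = AutoModelForCausalLM.from_pretrained(
    model_name, load_in_4bit=True, trust_remote_code=True
)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
```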