
Evaluating Mixtral-8×7B-Instruct-v0.1-offloading-demo on MMLU

Open AugustXuan opened this issue 2 months ago • 2 comments

Hi! I'm currently running MoE-Infinity with Mixtral-8×7B-Instruct-v0.1-offloading-demo (the quantized version) on MMLU. I encountered a failure when loading the model weights, and I'd like to know whether MoE-Infinity is compatible with the quantized version of the Mixtral model. Thanks!

AugustXuan avatar Oct 23 '25 02:10 AugustXuan

Do you have a detailed log or console output?

drunkcoding avatar Oct 31 '25 11:10 drunkcoding

The output showed that, for layers 0–26, only q_proj was loaded correctly; all other parameters failed to load. For layers 27–31, no parameters were loaded at all.
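To make this kind of failure easier to report, it can help to diff the parameter names the loader expects against the keys actually present in the checkpoint, grouped per layer. The sketch below is purely illustrative: the key names, the 32-layer count, and the "only q_proj present in layers 0–26" pattern are assumptions chosen to mirror the symptom above, not taken from the real checkpoints or from MoE-Infinity's loader.

```python
# Hypothetical diagnostic: compare the attention-projection parameter names a
# standard (unquantized) Mixtral loader might expect against the keys found in
# a checkpoint, then report what is missing per layer. Quantized checkpoints
# often store packed tensors under different names, which would surface here
# as large blocks of "missing" keys.

# Keys a standard loader might expect (illustrative names, 32 layers).
expected = {
    f"model.layers.{i}.self_attn.{p}.weight"
    for i in range(32)
    for p in ("q_proj", "k_proj", "v_proj", "o_proj")
}

# Simulated checkpoint contents reproducing the reported symptom:
# only q_proj present for layers 0-26, nothing for layers 27-31.
checkpoint_keys = {
    f"model.layers.{i}.self_attn.q_proj.weight" for i in range(27)
}

missing = sorted(expected - checkpoint_keys)

# Group missing keys by layer index for a compact report.
per_layer = {}
for key in missing:
    layer = int(key.split(".")[2])
    per_layer.setdefault(layer, []).append(key.split(".")[-2])

for layer in sorted(per_layer):
    print(f"layer {layer:2d}: missing {', '.join(per_layer[layer])}")
```

Attaching a report like this (produced from the real checkpoint's key list) to the issue would let the maintainers see at a glance whether the quantized checkpoint simply uses different tensor names than the loader expects.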

AugustXuan avatar Nov 02 '25 12:11 AugustXuan