[Misc] Add CustomOp Interface to UnquantizedFusedMoEMethod
Currently, UnquantizedFusedMoEMethod directly imports the Triton fused MoE kernel and related CUDA kernels, preventing other hardware backends from supporting MoE models. This PR adds the CustomOp interface to it so that the kernels are imported only for NVIDIA and AMD GPUs.
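For context, here is a minimal sketch of what a CustomOp-style dispatch could look like, assuming a `forward_native`/`forward_cuda` split; the `fused_experts` import and the method signatures are illustrative placeholders, not the exact vLLM API:

```python
import torch


class UnquantizedFusedMoEMethodSketch:
    """Illustrative sketch: defer GPU-kernel imports to the CUDA path."""

    def forward_native(self, hidden_states: torch.Tensor, w1: torch.Tensor,
                       w2: torch.Tensor, topk_weights: torch.Tensor,
                       topk_ids: torch.Tensor) -> torch.Tensor:
        # Pure-PyTorch fallback so non-GPU backends can still run MoE layers.
        raise NotImplementedError("Backend must provide a native MoE path.")

    def forward_cuda(self, hidden_states: torch.Tensor, w1: torch.Tensor,
                     w2: torch.Tensor, topk_weights: torch.Tensor,
                     topk_ids: torch.Tensor) -> torch.Tensor:
        # Import the Triton/CUDA kernels lazily, only when running on
        # NVIDIA or AMD GPUs, so other backends never import them.
        # Hypothetical import path used purely for illustration.
        from vllm.model_executor.layers.fused_moe import fused_experts
        return fused_experts(hidden_states, w1, w2, topk_weights, topk_ids)

    def forward(self, *args, **kwargs) -> torch.Tensor:
        # Dispatch based on the detected platform.
        if torch.cuda.is_available():
            return self.forward_cuda(*args, **kwargs)
        return self.forward_native(*args, **kwargs)
```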
Does this need to be added to the fp8 method as well? Or are we handling quantization separately?
https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/layers/quantization/fp8.py#L220
@robertgshaw2-neuralmagic We haven't used the CustomOp interface for the quantization-related ops, since they usually only support NVIDIA or AMD GPUs. Do you want to apply the interface to the quant ops?
I think it's okay to leave it for now and make the modifications once we have a need for it.
This PR seems to break Mixtral. Let me check the reason.
What TP is it running at? @WoosukKwon
@comaniac Could you please take a look? The PR removes a few lines of code in model loader that you marked as FIXME.
That FIXME can be removed safely. Please let me know if the test still fails and I'll take a look.
@comaniac Thanks for the confirmation! It works well.