
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu

Open kikilora opened this issue 2 months ago • 11 comments

New error when trying to run a Wan 2.2 LoRA training on default settings.

kikilora avatar Oct 07 '25 13:10 kikilora

Yes, unfortunately I'm having the same problem both on my local machine and on RunPod.

FreelancerDDD avatar Oct 08 '25 10:10 FreelancerDDD

Same here (with a local Blackwell GPU).

Gruetzi579 avatar Oct 09 '25 06:10 Gruetzi579

same problem

earetaurus avatar Oct 09 '25 13:10 earetaurus

The Auto Memory feature is also throwing the same error on a 4090 with 96 GB of RAM.

thinker88 avatar Oct 10 '25 02:10 thinker88

The bug seems fixed. I did a git pull today and could launch a Wan 2.2 LoRA training on a local A6000 PRO; VRAM consumption also seems a bit lower. However, I only ran a few steps. Anyway, thank you Ostris for your work :)

Gruetzi579 avatar Oct 10 '25 06:10 Gruetzi579

I'm currently getting this with Qwen on the most current build using Auto Memory

  File "/home/me/ai-toolkit/run.py", line 120, in <module>
    main()
  File "/home/me/ai-toolkit/run.py", line 108, in main
    raise e
  File "/home/me/ai-toolkit/run.py", line 96, in main
    job.run()
  File "/home/me/ai-toolkit/jobs/ExtensionJob.py", line 22, in run
    process.run()
  File "/home/me/ai-toolkit/jobs/process/BaseSDTrainProcess.py", line 1996, in run
    self.hook_before_train_loop()
  File "/home/me/ai-toolkit/extensions_built_in/sd_trainer/DiffusionTrainer.py", line 261, in hook_before_train_loop
    super().hook_before_train_loop()
  File "/home/me/ai-toolkit/extensions_built_in/sd_trainer/SDTrainer.py", line 256, in hook_before_train_loop
    self.unconditional_embeds = self.sd.encode_prompt(
  File "/home/me/ai-toolkit/toolkit/models/base_model.py", line 1071, in encode_prompt
    return self.get_prompt_embeds(prompt)
  File "/home/me/ai-toolkit/extensions_built_in/diffusion_models/qwen_image/qwen_image.py", line 350, in get_prompt_embeds
    prompt_embeds, prompt_embeds_mask = self.pipeline.encode_prompt(
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage.py", line 254, in encode_prompt
    prompt_embeds, prompt_embeds_mask = self._get_qwen_prompt_embeds(prompt, device)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage.py", line 205, in _get_qwen_prompt_embeds
    encoder_hidden_states = self.text_encoder(
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/transformers/utils/generic.py", line 969, in wrapper
    output = func(self, *args, **kwargs)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1908, in forward
    outputs = self.model(
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1728, in forward
    outputs = self.language_model(
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1191, in forward
    layer_outputs = decoder_layer(
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1053, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 938, in forward
    query_states = self.q_proj(hidden_states)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/optimum/quanto/nn/qlinear.py", line 45, in forward
    return torch.nn.functional.linear(input, self.qweight, bias=self.bias)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/optimum/quanto/tensor/qtensor.py", line 90, in __torch_function__
    return qfunc(*args, **kwargs)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/optimum/quanto/tensor/qtensor_func.py", line 152, in linear
    return QTensorLinear.apply(input, other, bias)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/torch/autograd/function.py", line 575, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/optimum/quanto/tensor/qtensor_func.py", line 126, in forward
    output = torch.ops.quanto.qbytes_mm(input, other._data, other._scale)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/torch/_ops.py", line 1158, in __call__
    return self._op(*args, **(kwargs or {}))
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/optimum/quanto/library/qbytes_mm.py", line 88, in qbytes_mm_impl_cuda
    return qbytes_mm(activations, weights, output_scales)
  File "/home/me/ai-toolkit/venv/lib/python3.12/site-packages/optimum/quanto/library/qbytes_mm.py", line 33, in qbytes_mm
    return torch.matmul(activations, weights.t())
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
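The failing frame is `torch.matmul(activations, weights.t())` inside optimum-quanto: the activations arrive on `cuda:0` while the quantized text-encoder weight is still on the CPU. A toy sketch (plain Python, not PyTorch; `FakeTensor` and `matmul` are hypothetical stand-ins) of the device check that produces this error:

```python
# Toy model of PyTorch's same-device check, NOT the toolkit's or PyTorch's code.
class FakeTensor:
    """Stand-in for a tensor that only tracks which device it lives on."""
    def __init__(self, device: str):
        self.device = device

    def to(self, device: str) -> "FakeTensor":
        # Mirrors tensor.to(device): returns a copy on the target device.
        return FakeTensor(device)


def matmul(a: FakeTensor, b: FakeTensor) -> str:
    # PyTorch refuses to multiply operands on different devices; this is the
    # situation in the traceback (cuda:0 activations, cpu quantized weight).
    if a.device != b.device:
        raise RuntimeError(
            "Expected all tensors to be on the same device, "
            f"but found at least two devices, {a.device} and {b.device}"
        )
    return f"ok on {a.device}"


activations = FakeTensor("cuda:0")  # prompt embeds, already moved to the GPU
qweight = FakeTensor("cpu")         # quantized text-encoder weight left behind

try:
    matmul(activations, qweight)
except RuntimeError as e:
    print(e)  # same message as the traceback above

# Placing the weight on the activations' device first is what a correct
# offloading path restores:
print(matmul(activations, qweight.to(activations.device)))  # ok on cuda:0
```

This is only a model of the failure mode: some part of the quantized text encoder stayed on the CPU while its inputs were moved to the GPU, which is consistent with the workarounds below that disable text-encoder quantization or offloading-related features.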

GavChap avatar Oct 10 '25 14:10 GavChap

I'm currently getting this with Qwen on the most current build using Auto Memory

Experiencing the same issue, I just updated my local version and still getting the same error

thinker88 avatar Oct 11 '25 00:10 thinker88

Same issue. Quantization is the default float8 everywhere, with Low VRAM and Layer Offloading both turned on (100%).

TaylorChaos avatar Oct 15 '25 00:10 TaylorChaos

I'm currently getting this with Qwen on the most current build using Auto Memory

Try setting the text encoder quantization to None. It works for me.
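For readers applying this workaround from a YAML job config rather than the UI, it corresponds to the text-encoder quantization switch in the model section. A sketch under the assumption that your config uses the common ai-toolkit key names (verify against your own config file):

```yaml
model:
  name_or_path: "Qwen/Qwen-Image"  # whichever base model you are training
  quantize: true       # transformer quantization can stay enabled
  quantize_te: false   # workaround: disable text encoder quantization
```

Disabling text-encoder quantization raises VRAM use for the encoding step, but avoids the quantized CPU weight that triggers the device-mismatch error.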

TaylorChaos avatar Oct 15 '25 01:10 TaylorChaos

Try setting the text encoder quantization to None. It works for me.

I can confirm this worked for me too.

Kvento avatar Oct 15 '25 10:10 Kvento

I had this issue with a Flux Kontext LoRA. Try turning off Cache Text Embeddings; that seemed to fix it for me.

mike-rowley avatar Nov 22 '25 04:11 mike-rowley