
[bug]: Expected all tensors to be on the same device exception

Open TheBarret opened this issue 2 years ago • 3 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

OS

Windows

GPU

cuda

VRAM

4GB

What happened?

I can make iterations just fine in TextToImage, but once I try to Outpaint in the Unified editor it throws an error.

I did a clean install and could use the Unified editor with no error, but once I started a second new prompt, this error came back and persists.

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm)

Full error:

Traceback (most recent call last):
  File "d:\ai\invokeai\ldm\generate.py", line 486, in prompt2image
    results = generator.generate(
  File "d:\ai\invokeai\ldm\invoke\generator\base.py", line 93, in generate
    image = make_image(x_T)
  File "C:\Users\username\anaconda3\envs\invokeai\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "d:\ai\invokeai\ldm\invoke\generator\inpaint.py", line 295, in make_image
    samples = sampler.decode(
  File "C:\Users\username\anaconda3\envs\invokeai\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "d:\ai\invokeai\ldm\models\diffusion\sampler.py", line 365, in decode
    outs = self.p_sample(
  File "C:\Users\username\anaconda3\envs\invokeai\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "d:\ai\invokeai\ldm\models\diffusion\ddim.py", line 58, in p_sample
    e_t = self.invokeai_diffuser.do_diffusion_step(
  File "d:\ai\invokeai\ldm\models\diffusion\shared_invokeai_diffusion.py", line 88, in do_diffusion_step
    unconditioned_next_x, conditioned_next_x = self.apply_standard_conditioning(x, sigma, unconditioning, conditioning)
  File "d:\ai\invokeai\ldm\models\diffusion\shared_invokeai_diffusion.py", line 104, in apply_standard_conditioning
    unconditioned_next_x, conditioned_next_x = self.model_forward_callback(x_twice, sigma_twice,
  File "d:\ai\invokeai\ldm\models\diffusion\ddim.py", line 13, in <lambda>
    model_forward_callback = lambda x, sigma, cond: self.model.apply_model(x, sigma, cond))
  File "d:\ai\invokeai\ldm\models\diffusion\ddpm.py", line 1441, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "C:\Users\username\anaconda3\envs\invokeai\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "d:\ai\invokeai\ldm\models\diffusion\ddpm.py", line 2167, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "C:\Users\username\anaconda3\envs\invokeai\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "d:\ai\invokeai\ldm\modules\diffusionmodules\openaimodel.py", line 798, in forward
    emb = self.time_embed(t_emb)
  File "C:\Users\username\anaconda3\envs\invokeai\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\username\anaconda3\envs\invokeai\lib\site-packages\torch\nn\modules\container.py", line 139, in forward
    input = module(input)
  File "C:\Users\username\anaconda3\envs\invokeai\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\username\anaconda3\envs\invokeai\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm)

Could not generate image.
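The last frame of the traceback makes the failure mode visible: `F.linear` received an input tensor on `cuda:0` while the layer's weights were still on the `cpu`. A minimal sketch (not InvokeAI's code) reproducing this class of error with a bare `nn.Linear`:

```python
import torch
import torch.nn as nn

# nn.Linear creates its parameters on the cpu by default.
linear = nn.Linear(4, 4)
x = torch.randn(1, 4)

if torch.cuda.is_available():
    x = x.cuda()  # input now on cuda:0, weights still on cpu
    try:
        linear(x)  # raises: "Expected all tensors to be on the same device..."
    except RuntimeError as err:
        print(err)
    linear.to(x.device)  # fix: move the module to the input's device

print(linear(x).shape)  # torch.Size([1, 4]) once devices agree
```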

Screenshots

No response

Additional context

No response

Contact Details

No response

TheBarret avatar Dec 07 '22 10:12 TheBarret

Do other models work?

psychedelicious avatar Dec 10 '22 07:12 psychedelicious

Do other models work?

Seems I was wrong to conclude that; it's very weird. When I reinstalled InvokeAI, using the standard 1.5 model I could use all of the features, but when I started on a new second prompt and then used the Unified editor, it gave me this error.

TheBarret avatar Dec 10 '22 11:12 TheBarret

I found something that might shed light on this error: it seems to go away when you do not use the --free_gpu_mem flag at startup. Perhaps the problem resides in that flag's code.
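That would fit the symptom: an offload flag like --free_gpu_mem implies moving the model to the CPU between generations, and if any submodule (such as the `time_embed` block in the traceback) is not moved back, its weights stay on the cpu. A hypothetical helper for diagnosing this (illustrative only, not InvokeAI's actual code):

```python
import torch
import torch.nn as nn

def stray_parameters(model: nn.Module, expected: str) -> list[str]:
    """Return names of parameters that are NOT on the expected device."""
    expected_dev = torch.device(expected)
    return [name for name, p in model.named_parameters()
            if p.device != expected_dev]

# A stand-in model; all parameters live on the cpu by default.
model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 4))

print(stray_parameters(model, "cpu"))      # [] — everything is where expected
print(stray_parameters(model, "cuda:0"))   # every parameter is flagged:
# ['0.weight', '0.bias', '1.weight', '1.bias']
```

Calling a helper like this after the "move back to GPU" step would immediately show which submodule the offload code missed.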

TheBarret avatar Dec 13 '22 13:12 TheBarret

This error was fixed in #1938.

rmagur1203 avatar Dec 13 '22 18:12 rmagur1203

This error was fixed in #1938.

My bad, I wasn't aware of this fix yet. Thank you for letting me know!

TheBarret avatar Dec 13 '22 18:12 TheBarret