Auto-Photoshop-StableDiffusion-Plugin
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "E:\Stable Diffusion\py310\lib\site-packages\anyio\streams\memory.py", line 94, in receive
    return self.receive_nowait()
  File "E:\Stable Diffusion\py310\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
    raise WouldBlock
anyio.WouldBlock
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "E:\Stable Diffusion\py310\lib\site-packages\starlette\middleware\base.py", line 78, in call_next
    message = await recv_stream.receive()
  File "E:\Stable Diffusion\py310\lib\site-packages\anyio\streams\memory.py", line 114, in receive
    raise EndOfStream
anyio.EndOfStream
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "E:\Stable Diffusion\py310\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "E:\Stable Diffusion\py310\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "E:\Stable Diffusion\py310\lib\site-packages\fastapi\applications.py", line 273, in __call__
    await super().__call__(scope, receive, send)
  File "E:\Stable Diffusion\py310\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "E:\Stable Diffusion\py310\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "E:\Stable Diffusion\py310\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "E:\Stable Diffusion\py310\lib\site-packages\starlette\middleware\base.py", line 108, in __call__
    response = await self.dispatch_func(request, call_next)
  File "E:\Stable Diffusion\modules\api\api.py", line 96, in log_and_time
    res: Response = await call_next(req)
  File "E:\Stable Diffusion\py310\lib\site-packages\starlette\middleware\base.py", line 84, in call_next
    raise app_exc
  File "E:\Stable Diffusion\py310\lib\site-packages\starlette\middleware\base.py", line 70, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "E:\Stable Diffusion\py310\lib\site-packages\starlette\middleware\gzip.py", line 26, in __call__
    await self.app(scope, receive, send)
  File "E:\Stable Diffusion\py310\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "E:\Stable Diffusion\py310\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "E:\Stable Diffusion\py310\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "E:\Stable Diffusion\py310\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "E:\Stable Diffusion\py310\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "E:\Stable Diffusion\py310\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "E:\Stable Diffusion\py310\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "E:\Stable Diffusion\py310\lib\site-packages\fastapi\routing.py", line 237, in app
    raw_response = await run_endpoint_function(
  File "E:\Stable Diffusion\py310\lib\site-packages\fastapi\routing.py", line 165, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "E:\Stable Diffusion\py310\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "E:\Stable Diffusion\py310\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\Stable Diffusion\py310\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "E:\Stable Diffusion\py310\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "E:\Stable Diffusion\modules\api\api.py", line 310, in img2imgapi
    processed = process_images(p)
  File "E:\Stable Diffusion\modules\processing.py", line 486, in process_images
    res = process_images_inner(p)
  File "E:\Stable Diffusion\modules\processing.py", line 625, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "E:\Stable Diffusion\modules\processing.py", line 570, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "E:\Stable Diffusion\modules\prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "E:\Stable Diffusion\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "E:\Stable Diffusion\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Stable Diffusion\modules\sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "E:\Stable Diffusion\modules\sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "E:\Stable Diffusion\modules\sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "E:\Stable Diffusion\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Stable Diffusion\py310\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
    return self.text_model(
  File "E:\Stable Diffusion\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Stable Diffusion\py310\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "E:\Stable Diffusion\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Stable Diffusion\py310\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "E:\Stable Diffusion\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Stable Diffusion\py310\lib\site-packages\transformers\models\clip\modeling_clip.py", line 379, in forward
    hidden_states, attn_weights = self.self_attn(
  File "E:\Stable Diffusion\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Stable Diffusion\py310\lib\site-packages\transformers\models\clip\modeling_clip.py", line 268, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "E:\Stable Diffusion\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Stable Diffusion\extensions-builtin\Lora\lora.py", line 197, in lora_Linear_forward
    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
  File "E:\Stable Diffusion\extensions\a1111-sd-webui-locon\scripts\main.py", line 494, in lora_forward
    res = res + module.inference(x) * scale
  File "E:\Stable Diffusion\extensions\a1111-sd-webui-locon\scripts\main.py", line 219, in inference
    return self.up_model(self.down_model(x))
  File "E:\Stable Diffusion\py310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\Stable Diffusion\extensions-builtin\Lora\lora.py", line 197, in lora_Linear_forward
    return lora_forward(self, input, torch.nn.Linear_forward_before_lora(self, input))
  File "E:\Stable Diffusion\py310\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat2 in method wrapper_CUDA_mm)

This is an issue I encountered today. Is this a setup error on my side?
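For context, this RuntimeError generally means one module's weights and its input tensor ended up on different devices; here the crash happens inside the locon extension's LoRA path, which suggests the LoRA weights are on the CPU while the CLIP model runs on cuda:0. A minimal sketch of how the mismatch arises and the usual fix (the `safe_linear` helper is hypothetical, for illustration only, not part of the webui code):

```python
import torch

def safe_linear(x: torch.Tensor, layer: torch.nn.Linear) -> torch.Tensor:
    # Hypothetical helper: move the input onto the layer's own device
    # before the matmul, so F.linear never sees mixed cuda:0/cpu tensors.
    device = next(layer.parameters()).device
    return layer(x.to(device))

layer = torch.nn.Linear(4, 2)
if torch.cuda.is_available():
    layer = layer.cuda()   # weights on cuda:0, as in the traceback above

x = torch.randn(3, 4)      # input tensor starts on the CPU
out = safe_linear(x, layer)
print(out.shape)           # torch.Size([3, 2]), no device-mismatch error
```

Calling `layer(x)` directly with `x` on the CPU and `layer` on cuda:0 raises exactly the "Expected all tensors to be on the same device" error above; in practice the fix is usually to reload the LoRA/checkpoint or update the extension so both land on the same device, rather than patching the forward pass.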
Could you provide more information about the error? What action were you performing when it occurred? Does it happen while using ControlNet?