stable-diffusion-webui-docker
Error when trying to create embedding (Expected all tensors to be on the same device)
Has this issue been opened before? No
Describe the bug
I tried to create an embedding in the Textual Inversion menu, but after clicking "Create" the UI shows "Error" in red, and the log reports: "Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0!"
Which UI
auto
Hardware / Software
- OS: Manjaro Linux
- OS version: Latest
- Docker Version: 1:20.10.18-1
- Docker compose version: 2.11.2-1
- Repo version: Latest release
- RAM: 16 GB
- GPU/VRAM: NVIDIA GeForce GTX 1060 / 6 GB
Steps to Reproduce
- sudo docker compose --profile auto up --build
- Go to web interface and go to Textual inversion tab
- Write a name for an embedding
- Click create
- See error
Additional context
All the other features work fine afaik, and the GPU is working fine and fast.
Log:
webui-docker-auto-1 | Traceback (most recent call last):
webui-docker-auto-1 | File "/opt/conda/lib/python3.8/site-packages/gradio/routes.py", line 273, in run_predict
webui-docker-auto-1 | output = await app.blocks.process_api(
webui-docker-auto-1 | File "/opt/conda/lib/python3.8/site-packages/gradio/blocks.py", line 742, in process_api
webui-docker-auto-1 | result = await self.call_function(fn_index, inputs, iterator)
webui-docker-auto-1 | File "/opt/conda/lib/python3.8/site-packages/gradio/blocks.py", line 653, in call_function
webui-docker-auto-1 | prediction = await anyio.to_thread.run_sync(
webui-docker-auto-1 | File "/opt/conda/lib/python3.8/site-packages/anyio/to_thread.py", line 31, in run_sync
webui-docker-auto-1 | return await get_asynclib().run_sync_in_worker_thread(
webui-docker-auto-1 | File "/opt/conda/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
webui-docker-auto-1 | return await future
webui-docker-auto-1 | File "/opt/conda/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 867, in run
webui-docker-auto-1 | result = context.run(func, *args)
webui-docker-auto-1 | File "/stable-diffusion-webui/modules/textual_inversion/ui.py", line 11, in create_embedding
webui-docker-auto-1 | filename = modules.textual_inversion.textual_inversion.create_embedding(name, nvpt, init_text=initialization_text)
webui-docker-auto-1 | File "/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 143, in create_embedding
webui-docker-auto-1 | embedded = embedding_layer.token_embedding.wrapped(ids.to(devices.device)).squeeze(0)
webui-docker-auto-1 | File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
webui-docker-auto-1 | return forward_call(*input, **kwargs)
webui-docker-auto-1 | File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward
webui-docker-auto-1 | return F.embedding(
webui-docker-auto-1 | File "/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py", line 2183, in embedding
webui-docker-auto-1 | return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
webui-docker-auto-1 | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select)
webui-docker-auto-1 | Loading weights [e1de58a9] from /stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-full.ckpt
webui-docker-auto-1 | Global Step: 683410
webui-docker-auto-1 | Weights loaded.
@RBNXI You are using an old version; the WebUI uses neither Python 3.8 nor conda.
I downloaded the latest release, 2.0.1. Is that wrong? Should I try the current source code instead?
Did you use git? Switch to the latest master.
I just downloaded the source code zip from the releases page. Should I try cloning the latest master then? It's only about 8 days out of date; has the bug been fixed in that time?
A lot has changed in the last 8 days.
Do you use git? It would be a lot easier for you to just git clone the project.
I redid everything from scratch with git clone, so I'm definitely on the latest commit now, and I still get the error:
webui-docker-auto-1 | Traceback (most recent call last):
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/gradio/routes.py", line 275, in run_predict
webui-docker-auto-1 | output = await app.blocks.process_api(
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 785, in process_api
webui-docker-auto-1 | result = await self.call_function(fn_index, inputs, iterator)
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 694, in call_function
webui-docker-auto-1 | prediction = await anyio.to_thread.run_sync(
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
webui-docker-auto-1 | return await get_asynclib().run_sync_in_worker_thread(
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
webui-docker-auto-1 | return await future
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
webui-docker-auto-1 | result = context.run(func, *args)
webui-docker-auto-1 | File "/stable-diffusion-webui/modules/textual_inversion/ui.py", line 11, in create_embedding
webui-docker-auto-1 | filename = modules.textual_inversion.textual_inversion.create_embedding(name, nvpt, init_text=initialization_text)
webui-docker-auto-1 | File "/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py", line 160, in create_embedding
webui-docker-auto-1 | embedded = embedding_layer.token_embedding.wrapped(ids.to(devices.device)).squeeze(0)
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
webui-docker-auto-1 | return forward_call(*input, **kwargs)
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 158, in forward
webui-docker-auto-1 | return F.embedding(
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/torch/nn/functional.py", line 2199, in embedding
webui-docker-auto-1 | return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
webui-docker-auto-1 | RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select)
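For context, the RuntimeError comes from PyTorch's rule that every tensor involved in an operation must live on the same device. In the traceback above, the embedding lookup mixes a CPU-resident weight with CUDA-resident ids (presumably because memory-saving options keep parts of the model on the CPU while `devices.device` is `cuda:0`). A minimal, self-contained sketch of the rule and its usual fix (not the webui's actual code):

```python
import torch

# An embedding layer's weights live on whatever device the module is on
# (CPU by default); the index tensor must be on that SAME device, or
# torch.embedding raises "Expected all tensors to be on the same device".
emb = torch.nn.Embedding(num_embeddings=10, embedding_dim=4)
ids = torch.tensor([1, 2, 3])

# The usual fix: move the ids to the layer's own device before the lookup.
out = emb(ids.to(emb.weight.device))
print(out.shape)  # torch.Size([3, 4])
```

If the weights and ids are deliberately placed on different devices (as happens here with `--medvram`), no `.to()` call on the ids alone can help, because the target device in the code disagrees with where the weights actually are.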
Also, it's probably unrelated, but I get this error when trying to access the new history menu too:
webui-docker-auto-1 | Traceback (most recent call last):
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/gradio/routes.py", line 275, in run_predict
webui-docker-auto-1 | output = await app.blocks.process_api(
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 785, in process_api
webui-docker-auto-1 | result = await self.call_function(fn_index, inputs, iterator)
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 694, in call_function
webui-docker-auto-1 | prediction = await anyio.to_thread.run_sync(
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
webui-docker-auto-1 | return await get_asynclib().run_sync_in_worker_thread(
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
webui-docker-auto-1 | return await future
webui-docker-auto-1 | File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
webui-docker-auto-1 | result = context.run(func, *args)
webui-docker-auto-1 | File "/stable-diffusion-webui/modules/images_history.py", line 54, in page_index_change
webui-docker-auto-1 | return get_recent_images(dir_name, page_index, 0, image_index, tabname)
webui-docker-auto-1 | File "/stable-diffusion-webui/modules/images_history.py", line 25, in get_recent_images
webui-docker-auto-1 | f_list = os.listdir(dir_name)
webui-docker-auto-1 | FileNotFoundError: [Errno 2] No such file or directory: 'output/txt2img-images'
The output/txt2img-images folder does exist in the root of the cloned repository (the same folder I run docker compose from), so I don't know why it says it's not there.
@RBNXI After a while of debugging, it turned out that you cannot train anything if you have the --medvram flag, which is set by default in the docker-compose.yml file. You can remove it and try again. It did not work on my machine (also a 1060 / 6 GB) because it ran out of memory.
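For reference, a sketch of where that change lives, assuming the repo's usual pattern of passing flags via a CLI_ARGS variable in docker-compose.yml (the exact flag list shown is illustrative; check your copy):

```yaml
# docker-compose.yml (sketch; surrounding keys and other flags are illustrative)
services:
  auto:
    environment:
      # before: - CLI_ARGS=--medvram ...other flags...
      # after: drop --medvram so the model is not split across CPU and GPU
      - CLI_ARGS=...other flags...
```

Note the trade-off: --medvram is what lets 6 GB cards run inference at all, so removing it to enable training may simply move the failure to an out-of-memory error, as happened here.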
Oooooh, thank you so much!! I can't even imagine how much time it must have taken to debug something so "somehow obvious but pretty particular". So I can't train with this GPU... that's sad. Could you please add this information to the readme? I think it could be very useful for everyone, and since you made the effort of finding it out, it would be a shame not to use it.
Oh, and do you have any hint on why the history doesn't work either? Also, one totally unrelated question: what's the directory referred to in the "Batch img2img" tab? Is it in the data folder, or somewhere inaccessible inside the container?
Could you please add this information to the readme? I think it could be very useful for everyone, and since you made the effort of finding it out, it would be a shame not to use it.
In that case, I would basically have to copy/paste the entire documentation of every UI. I think it's best if users look it up themselves: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Textual-Inversion
Oh, and do you have any hint on why the history doesn't work either?
That's a bug: #138. I have an MR that should close it: #146.
Also, one totally unrelated question: what's the directory referred to in the "Batch img2img" tab? Is it in the data folder, or somewhere inaccessible inside the container?
It was hidden because of the --hide-ui-dir-config flag, which I have also removed in #146.
Just for info: the data and output folders are mounted into the container as /data and /output, so if you want to get data into or out of the container, use those paths.
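Applied to the Batch img2img question, that means: stage inputs on the host under the mounted folder, then give the UI the container-side path. A hedged sketch (the batch-in folder name is just an example, not something the repo creates):

```shell
# Host folders map into the container: ./data -> /data, ./output -> /output.
# Stage Batch img2img inputs on the host, then point the UI at the
# container-side path (e.g. /data/batch-in); results show up under ./output.
mkdir -p data/batch-in
echo "UI input dir: /data/batch-in"
```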
Ok, ty again!
@RBNXI you can now try the stuff from master, open the terminal and:
git pull origin master
docker compose --profile auto up --build
Ooooh yeah, now the history works, I have the input/output options in Batch img2img, and I see some minor changes in the txt2img tab. Nice!