[Bug]: RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same (On 1660ti)
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
I'm using a 6GB GTX 1660 Ti (a Turing GPU with no tensor cores) with the arguments --precision full --no-half --medvram, and I'm getting this error when I try to use the SD 2.0 model.
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same
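For context, this is PyTorch's generic error when a layer's weights and its input have different dtypes. A minimal sketch (plain PyTorch on CPU, not webui code) of the mismatch and the cast that avoids it:

```python
import torch

conv = torch.nn.Conv2d(3, 8, 3)           # weights are float32 by default
x = torch.randn(1, 3, 16, 16).half()      # a float16 ("half") input tensor

# Feeding the half input straight into the float32 conv triggers the
# "Input type ... and weight type ... should be the same" RuntimeError;
# casting the input back to float32 avoids it.
y = conv(x.float())
print(y.dtype)  # torch.float32
```

In the webui, `--no-half` is supposed to keep everything in float32 for exactly this reason, which is why the error appearing despite that flag suggests some tensor is still being created as half.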
Steps to reproduce the problem
1. Use Automatic1111 with a 1660 Ti and the arguments --precision full --no-half --medvram
2. Add 768-v-ema.yaml to the models/stable-diffusion folder
3. Add 768-v-ema.ckpt to the models/stable-diffusion folder
4. Select the 768-v-ema.ckpt model
5. Enter a prompt
What should have happened?
It should have produced an image.
Commit where the problem happens
b5050ad2071644f7b4c99660dc66a8a95136102f
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
Google Chrome
Command Line Arguments
--precision full --no-half --medvram
Additional information, context and logs
No response
I think I found the issue on my end with this error: using the embedding from https://huggingface.co/datasets/Nerfgun3/bad_prompt as `(bad_prompt:0.8)` in Negative Prompts seems to be the culprit; removing it fixed the error completely.
It may be the same for all embeddings.
> I think I found the issue on my end with this error - using the embedding from here https://huggingface.co/datasets/Nerfgun3/bad_prompt as `(bad_prompt:0.8)` in Negative Prompts seems like the culprit, removing it fixed the error completely. May be the same for all embeddings.
I don't have any embeddings installed and I got this error without using any negative prompts. I don't know if our issues are exactly the same. You might want to keep your issue open.
> I think I found the issue on my end with this error - using the embedding from here https://huggingface.co/datasets/Nerfgun3/bad_prompt as `(bad_prompt:0.8)` in Negative Prompts seems like the culprit, removing it fixed the error completely. May be the same for all embeddings.
That was my problem as well. Solved by removing any reference to embeddings in my prompt.
Well, mine had something to do with the arguments I use when I run the program. I had to use --precision full --no-half because I am using a 1660ti.
However, I just found a Reddit thread with a workaround that lets you run Automatic1111 on a 1660 Ti without that argument. I tried it and the 768 model now works, but it's a band-aid, since the change gets reverted anytime I update.
You have to add `torch.backends.cudnn.benchmark = True` and `torch.backends.cudnn.enabled = True` to `enable_tf32()` in modules/devices.py:

```python
def enable_tf32():
    if torch.cuda.is_available():
        torch.backends.cuda.matmul.allow_tf32 = True
        torch.backends.cudnn.allow_tf32 = True
        torch.backends.cudnn.benchmark = True
        torch.backends.cudnn.enabled = True


errors.run(enable_tf32, "Enabling TF32")
```
> ...it will revert back anytime I try to update.
The reason why you have to change each time is that you have the git pull command in your startup sequence. This tells git to download the latest official master branch.
You can actually change the active branch to another one, and this way git will continue to update that branch (the one with the fix) instead of replacing it with the master branch - the official version that was released before the fix.
The command to change branches is `git checkout` followed by the branch name. The branch must already exist locally before you can check it out; you can see which branches you already have by typing `git branch`.
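To illustrate the branch commands (a sketch in a throwaway repository; the branch name `fix-branch` is hypothetical):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email you@example.com
git config user.name you
echo hello > file.txt && git add file.txt && git commit -qm init

git branch fix-branch            # create the (hypothetical) branch holding the fix
git checkout -q fix-branch       # make it the active branch
git branch --show-current        # prints: fix-branch
```

With `fix-branch` checked out, a `git pull` in the startup sequence would update that branch rather than overwrite your working tree with `master`.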
> The reason why you have to change each time is that you have the `git pull` command in your startup sequence. This tells git to download the latest official master branch. You can actually change the active branch to another one, and this way git will continue to update that branch (the one with the fix) instead of replacing it with the `master` branch - the official version that was released before the fix. The command to change the branch is `git checkout` followed by the name of the branch. You must have installed the proper branch prior to checkout. You can also see which branches you have installed already by typing `git branch`.
I do not have git pull in my startup sequence.
Yeah, they just updated devices.py again, so I had to redo it. This is something that needs to be implemented as a menu option for people using a 1660 Ti.
@slymeasy's fix didn't work for me, but I found another one-line solution: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5113#issuecomment-1347267434