
[Bug] expected scalar type Half but found Float

Open P4l1ndr0m opened this issue 2 years ago • 13 comments

Describe the bug

Hello, after clicking on "Outpaint" in the screenshot below I get the following error:

[screenshot]

```
[...]
  File "C:\Users\XXX\miniconda3\envs\sd-inf\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\XXX\miniconda3\envs\sd-inf\lib\site-packages\transformers\models\clip\modeling_clip.py", line 642, in forward
    encoder_outputs = self.encoder(
  File "C:\Users\XXX\miniconda3\envs\sd-inf\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\XXX\miniconda3\envs\sd-inf\lib\site-packages\transformers\models\clip\modeling_clip.py", line 573, in forward
    layer_outputs = encoder_layer(
  File "C:\Users\XXX\miniconda3\envs\sd-inf\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\XXX\miniconda3\envs\sd-inf\lib\site-packages\transformers\models\clip\modeling_clip.py", line 316, in forward
    hidden_states, attn_weights = self.self_attn(
  File "C:\Users\XXX\miniconda3\envs\sd-inf\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\XXX\miniconda3\envs\sd-inf\lib\site-packages\transformers\models\clip\modeling_clip.py", line 256, in forward
    attn_output = torch.bmm(attn_probs, value_states)
RuntimeError: expected scalar type Half but found Float
```
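The failing call at the bottom of the traceback is `torch.bmm` with operands of different dtypes. A minimal sketch reproducing the same mismatch outside the pipeline (illustrative only; assumes PyTorch is installed, and the tensor names are made up):

```python
import torch

# One operand in half precision, the other in full precision --
# the same mismatch torch.bmm hits inside CLIP's attention.
probs = torch.ones(1, 2, 2, dtype=torch.float16)
values = torch.ones(1, 2, 2, dtype=torch.float32)

try:
    torch.bmm(probs, values)  # mixed dtypes
except RuntimeError as err:
    print(err)  # e.g. "expected scalar type Half but found Float"

# Casting both operands to a common dtype avoids the error.
out = torch.bmm(probs.float(), values)
print(out.dtype)  # torch.float32
```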

P4l1ndr0m avatar Oct 23 '22 09:10 P4l1ndr0m

It seems that diffusers' StableDiffusionInpaintPipelineLegacy has to be used with autocast. Fixed in the latest version.
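A minimal sketch of the autocast pattern (illustrative; `model` stands in for the pipeline's half-precision model call, and CPU bfloat16 is used here only so the snippet runs without a GPU):

```python
import torch

model = torch.nn.Linear(4, 4)  # stand-in for the pipeline's model call
x = torch.randn(1, 4)          # float32 input

# Inside autocast, eligible ops cast their inputs to a common
# lower-precision dtype, which is what prevents the
# Half-vs-Float mismatch when the pipeline mixes precisions.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = model(x)

print(y.dtype)  # torch.bfloat16 inside autocast
```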

lkwq007 avatar Oct 23 '22 11:10 lkwq007

Hello, I am experiencing the exact same problem when I try to outpaint. I just cloned the latest version. Is it supposed to be fixed?

Thanks in advance.

Gael-Lejeune avatar Oct 23 '22 19:10 Gael-Lejeune

Still having the same issue after cloning the latest commit (180a93b)

P4l1ndr0m avatar Oct 24 '22 02:10 P4l1ndr0m

I have the same Half/Float issue with the latest git code.

I use it outside conda in a custom venv with these packages (if that helps):

```
scipy==1.9.3
scikit-image==0.19.3
diffusers==0.6.0
transformers==4.18.0
ftfy==6.1.1
opencv-python==4.6.0.66
gradio==3.4.0
pytorch-lightning==1.7.7
einops==0.4.1
omegaconf==2.2.3
timm==0.6.7
torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
```

SoftologyPro avatar Oct 25 '22 07:10 SoftologyPro

It seems like a value is not being cast to the expected scalar type, but I don't know the code well enough to find the origin of the problem.

Just found this, might help. https://discuss.pytorch.org/t/autocast-error-expected-scalar-type-half-but-found-float/90659

Gael-Lejeune avatar Oct 25 '22 17:10 Gael-Lejeune

OK, so apparently this commit broke something: https://github.com/lkwq007/stablediffusion-infinity/commit/6315273765c13d98d569d9b5b4d223a3a9ed261e I guess it has to do with the autocast change. I've reverted to the previous commit until it's fixed.

Gael-Lejeune avatar Oct 25 '22 18:10 Gael-Lejeune

Still fails here. I downloaded the latest zip and extracted it over all the existing files.

SoftologyPro avatar Oct 25 '22 18:10 SoftologyPro

I'm also having this problem trying to run infinity on runpod this morning.

SpaceWarpStudio avatar Oct 27 '22 11:10 SpaceWarpStudio

+1 on runpod

grebenshyo avatar Oct 27 '22 19:10 grebenshyo

I also encountered this problem on Win10. CPU rendering works but is too slow, so I finally switched to GPU rendering and ran into this error.

tanglangxia avatar Oct 30 '22 11:10 tanglangxia

Some GPUs might not work with fp16: `python app.py --fp32 --lowvram`

I used this command to solve the problem. My GPU is a 1080 Ti; it may indeed be that fp16 support differs between GPUs.

tanglangxia avatar Nov 07 '22 02:11 tanglangxia

Thanks for the solution. Adding the --fp32 parameter got it working here.

SoftologyPro avatar Nov 08 '22 04:11 SoftologyPro

confirming. no --lowvram necessary. thx

grebenshyo avatar Nov 16 '22 19:11 grebenshyo