
Long prompt bug

DSaurus opened this issue 1 year ago · 0 comments

Using the prompt [RAW photo, (high detailed skin:1.2), 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3] with the negative prompt [(overexposed, underexposed, out of focus, deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck] causes an error.

Currently, the prompt processors cannot handle long prompts: the CLIP text encoder's position-embedding table is fixed at 77 tokens, so any prompt that tokenizes to more than 77 tokens (134 here) fails with a shape mismatch.

  File "/data/shaoruizhi/threestudio/threestudio/models/prompt_processors/stable_diffusion_prompt_processor.py", line 91, in spawn_func
    text_embeddings = text_encoder(tokens.input_ids)[0]
  File "/data/shaoruizhi/anaconda3/envs/tensor4d/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/shaoruizhi/anaconda3/envs/tensor4d/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/data/shaoruizhi/anaconda3/envs/tensor4d/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 822, in forward
    return self.text_model(
  File "/data/shaoruizhi/anaconda3/envs/tensor4d/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/shaoruizhi/anaconda3/envs/tensor4d/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/data/shaoruizhi/anaconda3/envs/tensor4d/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 730, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "/data/shaoruizhi/anaconda3/envs/tensor4d/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/shaoruizhi/anaconda3/envs/tensor4d/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
  File "/data/shaoruizhi/anaconda3/envs/tensor4d/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 230, in forward
    embeddings = inputs_embeds + position_embeddings
RuntimeError: The size of tensor a (134) must match the size of tensor b (77) at non-singleton dimension 1
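A common workaround for this limitation (used, for example, by AUTOMATIC1111's Stable Diffusion web UI) is to split the token sequence into 75-token chunks, wrap each chunk with the BOS/EOS tokens so every chunk is exactly 77 tokens long, encode each chunk separately, and concatenate the per-chunk embeddings along the sequence dimension. The sketch below shows only the chunking step; the token ids are real CLIP special-token ids, but the function and its padding scheme are an illustrative assumption, not the threestudio implementation.

```python
MAX_LEN = 77            # size of CLIP's position-embedding table
CHUNK = MAX_LEN - 2     # leave room for BOS/EOS in every chunk
BOS, EOS = 49406, 49407 # CLIP's start-of-text / end-of-text token ids


def chunk_token_ids(ids):
    """Split raw token ids (without BOS/EOS) into 77-token windows.

    Each window is [BOS] + up to 75 body tokens + EOS padding, so it can
    be passed to the CLIP text encoder without exceeding its 77-token
    position-embedding table.
    """
    chunks = []
    for start in range(0, len(ids), CHUNK):
        body = ids[start:start + CHUNK]
        # pad the final (short) chunk with EOS so every chunk is MAX_LEN long
        padded = [BOS] + body + [EOS] * (MAX_LEN - 1 - len(body))
        chunks.append(padded)
    return chunks


# 132 body tokens, roughly the 134-token prompt from the traceback minus BOS/EOS
chunks = chunk_token_ids(list(range(132)))
print(len(chunks), [len(c) for c in chunks])  # → 2 [77, 77]
```

Each resulting chunk could then be fed to `text_encoder` independently and the outputs concatenated, giving an embedding of shape `(1, 77 * num_chunks, dim)` instead of raising the shape-mismatch error above.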

DSaurus, Jun 10 '23 16:06