
[Bug]: When I try to run ModelScope text2video it does nothing after pressing 'Generate'.

Open rookiemann opened this issue 10 months ago • 27 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Are you using the latest version of the extension?

  • [X] I have the modelscope text2video extension updated to the latest version and I still have the issue.

What happened?

When I try to generate a video from text, nothing at all happens. The button activates, yet the webui console shows nothing, as if the button had never been pressed.

This is a fresh install of all extensions, and I had to jump through a few hoops to even get it to start up properly. The first issue I had was this one: https://github.com/kabachuha/sd-webui-text2video/issues/239

I corrected it using the fix there.

I then encountered another issue, as I do have the animatediff extension installed: https://github.com/continue-revolution/sd-webui-animatediff/issues/118

I fixed it using the fix there.

Now everything starts fine; the webui console gives no errors at all. It's just when I press Generate (see screenshot):

The UI says it's running, but on the webui console (see screenshot) it doesn't run at all.

Steps to reproduce the problem

  1. Go to text2video
  2. Press generate
  3. Expect generation

What should have happened?

Text2video should have started generating.

WebUI and Deforum extension Commit IDs

webui commit id - bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
txt2vid commit id - 989f5cfec8ca437eb27c30fc7a10b2864f159bc9

Torch version

absl-py 2.1.0 accelerate 0.21.0 aenum 3.1.15 aiofiles 23.2.1 aiohttp 3.9.3 aiosignal 1.3.1 albumentations 1.4.1 altair 5.2.0 antlr4-python3-runtime 4.9.3 anyio 3.7.1 async-timeout 4.0.3 attrs 23.2.0 av 11.0.0 beautifulsoup4 4.12.3 blendmodes 2022 certifi 2024.2.2 cffi 1.16.0 chardet 5.2.0 charset-normalizer 3.3.2 clean-fid 0.1.35 click 8.1.7 clip 1.0 colorama 0.4.6 coloredlogs 15.0.1 colorlog 6.8.2 contourpy 1.2.0 cssselect2 0.7.0 cycler 0.12.1 Cython 3.0.9 decorator 4.0.11 deprecation 2.1.0 depth_anything 2024.1.22.0 easydict 1.13 einops 0.4.1 embreex 2.17.7.post4 exceptiongroup 1.2.0 facexlib 0.3.0 fake-useragent 1.5.1 fastapi 0.94.0 ffmpy 0.3.2 filelock 3.13.1 filterpy 1.4.5 flatbuffers 24.3.7 fonttools 4.50.0 frozenlist 1.4.1 fsspec 2024.3.0 ftfy 6.2.0 fvcore 0.1.5.post20221221 gdown 5.1.0 gitdb 4.0.11 GitPython 3.1.32 gradio 3.41.2 gradio_client 0.5.0 h11 0.12.0 handrefinerportable 2024.2.12.0 httpcore 0.15.0 httpx 0.24.1 huggingface-hub 0.21.4 humanfriendly 10.0 idna 3.6 imageio 2.34.0 imageio-ffmpeg 0.4.9 importlib-resources 5.12.0 inflection 0.5.1 insightface 0.7.3 iopath 0.1.9 jax 0.4.25 Jinja2 3.1.3 joblib 1.3.2 jsonmerge 1.8.0 jsonschema 4.21.1 jsonschema-specifications 2023.12.1 kiwisolver 1.4.5 kornia 0.6.7 lark 1.1.2 lazy_loader 0.3 lightning-utilities 0.10.1 llvmlite 0.42.0 lxml 5.1.0 mapbox-earcut 1.0.1 markdown-it-py 3.0.0 MarkupSafe 2.1.5 matplotlib 3.8.3 mdurl 0.1.2 mediapipe 0.10.11 ml-dtypes 0.3.2 moviepy 0.2.3.2 mpmath 1.3.0 multidict 6.0.5 mutagen 1.47.0 natsort 8.4.0 networkx 3.2.1 numba 0.59.0 numexpr 2.9.0 numpy 1.26.2 omegaconf 2.2.3 onnx 1.15.0 onnxruntime 1.17.1 open-clip-torch 2.20.0 opencv-contrib-python 4.9.0.80 opencv-python 4.9.0.80 opencv-python-headless 4.9.0.80 OpenPIV 0.25.2 opt-einsum 3.3.0 orjson 3.9.15 packaging 24.0 pandas 2.2.1 piexif 1.1.3 Pillow 9.5.0 PIMS 0.6.0 pip 24.0 portalocker 2.8.2 prettytable 3.10.0 protobuf 3.20.3 psutil 5.9.5 pycollada 0.8 pycparser 2.21 pydantic 1.10.14 pydub 0.25.1 Pygments 2.17.2 pyparsing 
3.1.2 pyreadline3 3.4.1 PySocks 1.7.1 python-dateutil 2.9.0.post0 python-multipart 0.0.9 pytorch-lightning 1.9.4 pytz 2024.1 PyWavelets 1.5.0 pywin32 306 PyYAML 6.0.1 referencing 0.34.0 regex 2023.12.25 reportlab 4.1.0 requests 2.31.0 resize-right 0.0.2 rich 13.7.1 rpds-py 0.18.0 Rtree 1.2.0 safetensors 0.4.2 scikit-image 0.21.0 scikit-learn 1.4.1.post1 scipy 1.12.0 semantic-version 2.10.0 Send2Trash 1.8.2 sentencepiece 0.2.0 setuptools 63.2.0 shapely 2.0.3 six 1.16.0 slicerator 1.1.0 smmap 5.0.1 sniffio 1.3.1 sounddevice 0.4.6 soupsieve 2.5 spandrel 0.1.6 starlette 0.26.1 svg.path 6.3 svglib 1.5.1 sympy 1.12 tabulate 0.9.0 termcolor 2.4.0 threadpoolctl 3.3.0 tifffile 2024.2.12 timm 0.9.16 tinycss2 1.2.1 tokenizers 0.13.3 tomesd 0.1.3 toolz 0.12.1 torch 2.1.2+cu121 torchdiffeq 0.2.3 torchmetrics 1.3.2 torchsde 0.2.6 torchvision 0.16.2+cu121 tqdm 4.66.2 trampoline 0.1.2 transformers 4.30.2 trimesh 4.2.0 typing_extensions 4.10.0 tzdata 2024.1 urllib3 2.2.1 uvicorn 0.28.0 vhacdx 0.0.6 wcwidth 0.2.13 webencodings 0.5.1 websockets 11.0.3 xatlas 0.0.9 xformers 0.0.23.post1 xxhash 3.4.1 yacs 0.1.8 yarl 1.9.4 ZipUnicode 1.1.1

What GPU were you using for launching?

RTX 2080 Super (8 GB VRAM)

On which platform are you launching the webui backend with the extension?

Local PC setup (Windows)

Settings

(settings screenshot attached)

Console logs

venv "D:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.8.0
Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
Launching Web UI with arguments: --xformers --medvram --api
CivitAI Browser+: Aria2 RPC started
ControlNet preprocessor location: D:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-03-28 07:47:11,206 - ControlNet - INFO - ControlNet v1.1.441
2024-03-28 07:47:11,445 - ControlNet - INFO - ControlNet v1.1.441
Loading weights [6ce0161689] from D:\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: D:\stable-diffusion-webui\configs\v1-inference.yaml
Applying attention optimization: xformers... done.
Model loaded in 3.2s (load weights from disk: 0.1s, create model: 0.5s, apply weights to model: 1.8s, apply half(): 0.4s, calculate empty prompt: 0.4s).
2024-03-28 07:47:15,763 - ControlNet - INFO - ControlNet UI callback registered.
*Deforum ControlNet support: enabled*
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 35.5s (prepare environment: 8.1s, import torch: 9.8s, import gradio: 2.1s, setup paths: 2.3s, initialize shared: 0.5s, other imports: 1.2s, load scripts: 4.9s, create ui: 5.0s, gradio launch: 0.6s, add APIs: 1.0s).

Additional information

This is my submission.

rookiemann avatar Mar 28 '24 15:03 rookiemann

Dooray! Mail delivery failure notice

The mail sent to @.***) could not be delivered. Please check the failure reason.

  • Recipient: @.***)
  • Sent at: 2024-03-29T00:09:40
  • Subject: [kabachuha/sd-webui-text2video] [Bug]: When I try to run ModelScope text2video it does nothing after pressing 'Generate'. (Issue #243)
  • Failure reason: Confidential/internal mail is restricted from delivery to external accounts, so some mail delivery failed (including cases where the recipient has set up forwarding).

This mail is send-only and cannot be replied to. For further inquiries, please contact ***@***.***.

© Dooray!.

nemodleo avatar Mar 28 '24 15:03 nemodleo

I'm having the same issue. I can see my ModelScope and Zeroscope v2 XL models in the Model dropdown within the txt2video tab.

But when I click on "Generate", no generation begins. No progress in the Stable Diffusion command prompt. It's like it's not making the connection. Just like you, @rookiemann, the orange "Generate" button switches to the two gray buttons "Interrupt" and "Skip".

One thought, I have Stable Diffusion installed on a different drive (d:/) - not my system drive (win11, c:/). @rookiemann - Any chance your SD installation is also on a different drive? I wonder if it's not reading something.

fmnt-whoa avatar Mar 30 '24 19:03 fmnt-whoa

Same problem here... I did no fewer than five fresh installs, but got no output at all :D

diegoparma avatar Apr 01 '24 01:04 diegoparma

I'm having the same issue. I can see my ModelScope and Zeroscope v2 XL models in the Model dropdown within the txt2video tab.

But when I click on "Generate", no generation begins. No progress in the Stable Diffusion command prompt. It's like it's not making the connection. Just like you, @rookiemann, the orange "Generate" button switches to the two gray buttons "Interrupt" and "Skip".

One thought, I have Stable Diffusion installed on a different drive (d:/) - not my system drive (win11, c:/). @rookiemann - Any chance your SD installation is also on a different drive? I wonder if it's not reading something.

Yes I have my whole installation on an external D drive, but that was never a problem before.

rookiemann avatar Apr 06 '24 04:04 rookiemann

Yes I have my whole installation on an external D drive, but that was never a problem before.

Copy. Was text2vid working in a previous iteration for you?

fmnt-whoa avatar Apr 06 '24 05:04 fmnt-whoa

Yes, it was working on an older installation of automatic1111; I can't remember which. I was away from the repo for a long time and decided to update it and all my extensions. After that it wasn't working. I then did entirely new installs of both automatic1111 and the latest text2video extension just to be sure, and the problem persisted. I never tried installing on C:, I just don't have the room.

rookiemann avatar Apr 06 '24 06:04 rookiemann

Legacy versions may be the answer then in the interim. I only recently installed SD and t2v so latest versions for me as of a couple weeks ago.

But the fact that it doesn't actually initiate anything in the command prompt makes me think it's gotta be in the launch process, otherwise we'd have some indication or failure shown.

fmnt-whoa avatar Apr 06 '24 06:04 fmnt-whoa

Legacy versions may be the answer then in the interim. I only recently installed SD and t2v so latest versions for me as of a couple weeks ago.

But the fact that it doesn't actually initiate anything in the command prompt makes me think it's gotta be in the launch process, otherwise we'd have some indication or failure shown.

I think I want to make a separate install of older repos, because I really like this extension and want to use an older version. Did you try that? Can you suggest which older versions to use, and where to get them for the separate install?

rookiemann avatar Apr 08 '24 03:04 rookiemann

I have exactly same problem... took me a day to figure out but still got no solution ...

ninii3 avatar Apr 09 '24 13:04 ninii3

I have exactly same problem... took me a day to figure out but still got no solution ...

Do you have AnimateDiff extension as well? I'm wondering if there is a connection there.

rookiemann avatar Apr 09 '24 14:04 rookiemann

I have exactly same problem... took me a day to figure out but still got no solution ...

@ninii3 - Can you share your Command Prompt output as well? Does Text2Video even attempt to initiate?

Do you have AnimateDiff extension as well? I'm wondering if there is a connection there.

@rookiemann - I know that question is not for me, but I did install a few plugins prior text2video. I don't remember which. But you did say you did a clean install of Stable Diffusion and text2video and still had the issue, right? Was AnimateDiff a part of the clean install?

I think I want to make a separate install of older repos because I really like this extension and want to use older version. Did you try that? Can you suggest which older versions and where to get them to make the separate install to use this?

@rookiemann - I've only ever installed the most recent version of it, and it didn't work. I'd need to do the same as you and just start installing them one-by-one, working backwards, to see if we get a working one. I haven't had time to attempt that yet though.

fmnt-whoa avatar Apr 09 '24 16:04 fmnt-whoa

I have exactly same problem... took me a day to figure out but still got no solution ...

@ninii3 - Can you share your Command Prompt output as well? Does Text2Video even attempt to initiate?

Do you have AnimateDiff extension as well? I'm wondering if there is a connection there.

@rookiemann - I know that question is not for me, but I did install a few plugins prior text2video. I don't remember which. But you did say you did a clean install of Stable Diffusion and text2video and still had the issue, right? Was AnimateDiff a part of the clean install?

I think I want to make a separate install of older repos because I really like this extension and want to use older version. Did you try that? Can you suggest which older versions and where to get them to make the separate install to use this?

@rookiemann - I've only ever installed the most recent version of it, and it didn't work. I'd need to do the same as you and just start installing them one-by-one, working backwards, to see if we get a working one. I haven't had time to attempt that yet though.

nothing happened at all and no error in my terminal as well.

ninii3 avatar Apr 09 '24 21:04 ninii3

I have exactly same problem... took me a day to figure out but still got no solution ...

@ninii3 - Can you share your Command Prompt output as well? Does Text2Video even attempt to initiate?

Do you have AnimateDiff extension as well? I'm wondering if there is a connection there.

@rookiemann - I know that question is not for me, but I did install a few plugins prior text2video. I don't remember which. But you did say you did a clean install of Stable Diffusion and text2video and still had the issue, right? Was AnimateDiff a part of the clean install?

I think I want to make a separate install of older repos because I really like this extension and want to use older version. Did you try that? Can you suggest which older versions and where to get them to make the separate install to use this?

@rookiemann - I've only ever installed the most recent version of it, and it didn't work. I'd need to do the same as you and just start installing them one-by-one, working backwards, to see if we get a working one. I haven't had time to attempt that yet though.

Do you know how to install a past version?

ninii3 avatar Apr 10 '24 14:04 ninii3

Same problem. I still haven't found a solution; has anyone?

BigShaka77 avatar Apr 15 '24 16:04 BigShaka77

I really want to use this. I did a whole new fresh install of automatic1111 and this extension directly in C:\stable-diffusion-webui, with no other extensions at all, just the two, and I still have this problem.

Where can we get copies of older repos?

rookiemann avatar Apr 15 '24 18:04 rookiemann

Could anyone solve it? I have tried all the possible solutions I have found and nothing solves it.

BigShaka77 avatar Apr 22 '24 16:04 BigShaka77

Could anyone solve it? I have tried all the possible solutions I have found and nothing solves it.

I decided to try out Comfy UI for the first time just to try the ComfyUI_ModelScopeT2V repo that's built from this repo and I got it working.

I never used ComfyUI before, and now I wish I had gotten into it much sooner; it's pretty good.

rookiemann avatar Apr 22 '24 21:04 rookiemann

I have exactly same problem... took me a day to figure out but still got no solution ...

Do you have AnimateDiff extension as well? I'm wondering if there is a connection there.

I do not, and I'm having the same issue.

sion42x avatar Apr 23 '24 20:04 sion42x

So, I ran into this issue today. Apologies, as I didn't copy the exact error and had to shut automatic1111 down right away to run something else on my GPU, but I did find out that this is a client-side JavaScript error. The server is not doing anything because it never receives a request. If you look in your browser's console, you will see a JS error relating to Gradio: it's trying to run a document.getElementById and failing, then JS stops executing and the request never gets sent to the server.
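To make that failure mode concrete, here is a minimal, hypothetical sketch (not the extension's actual handler; the element id is made up) of how a single null getElementById result silently kills the whole submit:

```javascript
// Minimal sketch of the failure: getElementById returns null for a missing
// element, the .style access throws a TypeError, and everything after it
// (including sending the request to the server) never runs.
function submitHandler(doc) {
  try {
    // Hypothetical element id; the real UI uses ids like "<tabname>_interrupt".
    doc.getElementById("txt2vid_missing_button").style.display = "block";
    return "request sent"; // never reached when the element is missing
  } catch (e) {
    return "handler died: " + e.constructor.name;
  }
}

// A stub document with no matching element reproduces the symptom.
const emptyDoc = { getElementById: () => null };
console.log(submitHandler(emptyDoc)); // "handler died: TypeError"
```

In the real page the exception is not caught, so the error only appears in the browser console while the server-side terminal stays silent.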

gumaerc avatar May 10 '24 16:05 gumaerc

A workaround for now is to open the browser console and paste:

// Redefine the helper so it only touches the elements that actually exist
// (interrupt and skip), letting the Generate click handler run to completion.
function setSubmitButtonsVisibility(tabname, showInterrupt, showSkip, showInterrupting) {
    gradioApp().getElementById(tabname + '_interrupt').style.display = showInterrupt ? "block" : "none";
    gradioApp().getElementById(tabname + '_skip').style.display = showSkip ? "block" : "none";
}

phyzical avatar May 23 '24 13:05 phyzical

I renamed the ModelScope folder in text2video and changed all the "ModelScope" references in the code, because it conflicted with a separately installed ModelScope. But processing still does not start when I press the Generate button.

Is it correct that "ModelScope" is displayed in the dropdown list? It seems like the model is not being loaded correctly. Can someone please show me a screenshot of the correct state? P.S. The model loading seems to have been resolved.

@phyzical Which file should I add it to, and where?

Enchante503 avatar May 24 '24 04:05 Enchante503

Open the dev tools in your browser before pressing Generate, then paste the above snippet; that will let the button work until you refresh.

ModelScope didn't work for me, though, but VideoCrafter did.

phyzical avatar May 24 '24 07:05 phyzical

@phyzical Thank you!!

function setSubmitButtonsVisibility(tabname, showInterrupt, showSkip, showInterrupting) {
    gradioApp().getElementById(tabname + '_interrupt').style.display = showInterrupt ? "block" : "none";
    gradioApp().getElementById(tabname + '_skip').style.display = showSkip ? "block" : "none";
}

After adding his code to /extensions/sd-webui-text2video/javascript/t2v_progressbar.js, it started working!

Other errors were also displayed; I solved the problems one by one, and the video is now generated!

Thanks again to @phyzical!

Enchante503 avatar May 24 '24 13:05 Enchante503

Open the dev tools in your browser before pressing Generate, then paste the above snippet; that will let the button work until you refresh.

ModelScope didn't work for me, though, but VideoCrafter did.

(screenshot: ModelScope model recognized in the UI)

Thanks for your code, I was finally able to make a lot of progress on this problem. I placed the ModelScope model as shown above and it is recognized, but when I hit Generate it shows me the error below, and I don't know how to solve it. Any ideas?

(screenshot: error message)

BigShaka77 avatar May 24 '24 15:05 BigShaka77

VideoCrafter stays generating forever and nothing happens. (screenshot)

BigShaka77 avatar May 24 '24 15:05 BigShaka77

There should be an error displayed on the console (terminal), so you need to deal with it. No one can help you unless you tell them what the error is.

Enchante503 avatar May 25 '24 08:05 Enchante503

When I press generate, the below message appears. The GPU has RAM allocated to it but no activity on the GPU or CPU.

Startup time: 25.5s (prepare environment: 6.8s, import torch: 5.4s, import gradio: 1.4s, setup paths: 4.8s, initialize shared: 0.2s, other imports: 0.8s, load scripts: 4.2s, create ui: 1.1s, gradio launch: 0.8s).
1001
1001
1001
1001
1001
Applying attention optimization: Doggettx... done.
Model loaded in 11.4s (load weights from disk: 1.3s, create model: 4.0s, apply weights to model: 4.2s, move model to device: 0.2s, calculate empty prompt: 1.6s).

niknah avatar Jun 17 '24 00:06 niknah

@phyzical 's fix (https://github.com/kabachuha/sd-webui-text2video/issues/243#issuecomment-2127081550) worked for me.

Videocrafter seems to work (generates video).

ModelScope returns the same result as @BigShaka77 .

Console info:

text2video — The model selected is: <modelscope> (ModelScope-like)
 text2video extension for auto1111 webui
Git commit: 989f5cfe
Starting text2video
Pipeline setup
config namespace(framework='pytorch', task='text-to-video-synthesis', model={'type': 'latent-text-to-video-synthesis', 'model_args': {'ckpt_clip': 'open_clip_pytorch_model.bin', 'ckpt_unet': 'text2video_pytorch_model.pth', 'ckpt_autoencoder': 'VQGAN_autoencoder.pth', 'max_frames': 16, 'tiny_gpu': 1}, 'model_cfg': {'unet_in_dim': 4, 'unet_dim': 320, 'unet_y_dim': 768, 'unet_context_dim': 1024, 'unet_out_dim': 4, 'unet_dim_mult': [1, 2, 4, 4], 'unet_num_heads': 8, 'unet_head_dim': 64, 'unet_res_blocks': 2, 'unet_attn_scales': [1, 0.5, 0.25], 'unet_dropout': 0.1, 'temporal_attention': 'True', 'num_timesteps': 1000, 'mean_type': 'eps', 'var_type': 'fixed_small', 'loss_type': 'mse'}}, pipeline={'type': 'latent-text-to-video-synthesis'})
device cuda
Working in txt2vid mode
  0%|                                                                                             | 0/1 [00:00<?, ?it/s]Making a video with the following parameters:
{'prompt': 'a girl with pink hair walking down the street, wearing a long trenchcoat, a rainy neon-lit futuristic city in the background', 'n_prompt': 'text, watermark, copyright, blurry, nsfw', 'steps': 30, 'frames': 24, 'seed': 1926582080, 'scale': 17, 'width': 256, 'height': 256, 'eta': 0.0, 'cpu_vae': 'GPU (half precision)', 'device': device(type='cuda'), 'skip_steps': 0, 'strength': 1, 'is_vid2vid': 0, 'sampler': 'DDIM_Gaussian'}
Traceback (most recent call last):
  File "/home/matt/storage1/ai/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/t2v_helpers/render.py", line 30, in run
    vids_pack = process_modelscope(args_dict, args)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matt/storage1/ai/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/process_modelscope.py", line 221, in process_modelscope
    samples, _, infotext = pipe.infer(args.prompt, args.n_prompt, args.steps, args.frames, args.seed + batch if args.seed != -1 else -1, args.cfg_scale,
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matt/storage1/ai/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/t2v_pipeline.py", line 252, in infer
    c, uc = self.preprocess(prompt, n_prompt, steps)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matt/storage1/ai/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/t2v_pipeline.py", line 406, in preprocess
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, self.clip_encoder, [n_prompt], steps, cached_uc)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matt/storage1/ai/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/t2v_pipeline.py", line 399, in get_conds_with_caching
    cache[1] = function(model, required_prompts, steps)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matt/storage1/ai/stable-diffusion-webui/modules/prompt_parser.py", line 188, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matt/storage1/ai/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/clip_hardcode.py", line 280, in get_learned_conditioning
    return self.encode(text)
           ^^^^^^^^^^^^^^^^^
  File "/home/matt/storage1/ai/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/clip_hardcode.py", line 277, in encode
    return self(text)
           ^^^^^^^^^^
  File "/home/matt/storage1/ai/stable-diffusion-webui/venv.bluebox/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matt/storage1/ai/stable-diffusion-webui/venv.bluebox/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matt/storage1/ai/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/clip_hardcode.py", line 371, in forward
    batch_chunks, token_count = self.process_texts(texts)
                                ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matt/storage1/ai/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/clip_hardcode.py", line 255, in process_texts
    chunks, current_token_count = self.tokenize_line(line)
                                  ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/matt/storage1/ai/stable-diffusion-webui/extensions/sd-webui-text2video/scripts/modelscope/clip_hardcode.py", line 153, in tokenize_line
    if opts.enable_emphasis:
       ^^^^^^^^^^^^^^^^^^^^
  File "/home/matt/storage1/ai/stable-diffusion-webui/modules/options.py", line 142, in __getattr__
    return super(Options, self).__getattribute__(item)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'Options' object has no attribute 'enable_emphasis'
Exception occurred: 'Options' object has no attribute 'enable_emphasis'

If I have a chance, I'll dig around in the code and see what I can find.

mattcaron avatar Oct 14 '24 22:10 mattcaron

Fix for above:

In file scripts/modelscope/clip_hardcode.py, line 153, change it to look like this:

        if hasattr(opts, 'enable_emphasis') and opts.enable_emphasis:

Then, if enable_emphasis isn't set, the guard short-circuits before the attribute access, so it won't crash.

Hopefully, Mojo will fix such silliness.
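The pattern behind the fix can be sketched in isolation (the Options class below is a stand-in, not the real modules.options.Options from the webui):

```python
# Stand-in for the webui Options object on a build where the
# enable_emphasis attribute was never registered.
class Options:
    pass

opts = Options()

# The unguarded access crashes, matching the traceback above:
try:
    opts.enable_emphasis
    crashed = False
except AttributeError:
    crashed = True

# The patched guard short-circuits instead of raising:
emphasis = hasattr(opts, 'enable_emphasis') and opts.enable_emphasis

# An equivalent one-liner that supplies a default value:
emphasis_default = getattr(opts, 'enable_emphasis', False)

print(crashed, emphasis, emphasis_default)  # True False False
```

The getattr form is a slightly tighter alternative when a sensible default exists, since it reads the attribute only once.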

mattcaron avatar Oct 15 '24 00:10 mattcaron

PR is up. https://github.com/kabachuha/sd-webui-text2video/pull/248

mattcaron avatar Oct 15 '24 01:10 mattcaron