The size of tensor a (102) must match the size of tensor b (60) at non-singleton dimension 4
I'm trying to use the CogImageEncoder node as the sample input for CogVideoSampler with the 5B model, but I get this error - is it possible to use CogImageEncoder with the 5B model?
```
Traceback (most recent call last):
  File "...\Portable\ComfyUI\ComfyUI\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "...\Portable\ComfyUI\ComfyUI\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "...\Portable\ComfyUI\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "...\Portable\ComfyUI\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "...\Portable\ComfyUI\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\nodes.py", line 278, in process
    latents = pipeline["pipe"](
  File "...\Portable\ComfyUI\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "...\Portable\ComfyUI\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\pipeline_cogvideox.py", line 423, in __call__
    latents, timesteps = self.prepare_latents(
  File "...\Portable\ComfyUI\ComfyUI\custom_nodes\ComfyUI-CogVideoXWrapper\pipeline_cogvideox.py", line 190, in prepare_latents
    latents = self.scheduler.add_noise(latents, noise, latent_timestep)
  File "...\Portable\ComfyUI\python_embeded\Lib\site-packages\diffusers\schedulers\scheduling_dpm_cogvideox.py", line 465, in add_noise
    noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
RuntimeError: The size of tensor a (102) must match the size of tensor b (60) at non-singleton dimension 4
```
Yes it is, but you'll have to resize the input before the encode node first; CogVideoX is pretty demanding about the resolution used.
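For reference, a minimal sketch of that resize step, assuming a ComfyUI-style `(B, H, W, C)` image tensor and CogVideoX's usual 720x480 training resolution (the function name here is just for illustration, not part of the wrapper):

```python
import torch
import torch.nn.functional as F

def resize_for_cogvideox(image: torch.Tensor, width: int = 720, height: int = 480) -> torch.Tensor:
    """Resize a ComfyUI-style image batch (B, H, W, C) to a
    CogVideoX-friendly resolution (720x480 is the 5B training size)."""
    x = image.permute(0, 3, 1, 2)                 # (B, H, W, C) -> (B, C, H, W)
    x = F.interpolate(x, size=(height, width),
                      mode="bilinear", align_corners=False)
    return x.permute(0, 2, 3, 1)                  # back to (B, H, W, C)

img = torch.rand(1, 860, 624, 3)                  # awkward size like in this thread
print(resize_for_cogvideox(img).shape)            # torch.Size([1, 480, 720, 3])
```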
Where can I do this? The thing is that the error comes from CogVideoSampler right at the start of its work, and only when CogImageEncoder is connected. I have tried changing the image size, but it does not help.
You need to repeat the image batch so that it matches the number of frames of the generated video.
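In plain torch the repetition could look like this (a sketch assuming a `(B, H, W, C)` image tensor; inside ComfyUI a repeat-image-batch node does the same thing):

```python
import torch

num_frames = 49                      # frame count set on CogVideoSampler
image = torch.rand(1, 480, 720, 3)   # single input image, (B, H, W, C)

# Repeat the single image along the batch axis so the batch size
# matches the number of frames the sampler expects.
batch = image.repeat(num_frames, 1, 1, 1)
print(batch.shape)                   # torch.Size([49, 480, 720, 3])
```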
It doesn't help.
Have to resize the image too.
Good with the 4th dimension - it's 60 now :)
but now I get a new error.
That's because the VAE scaling factor is 8, so at minimum the resolution needs to be evenly divisible by 8, which 860 is not. CogVideoX also really doesn't like many resolutions; I don't think a portrait aspect ratio will work at all.
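A quick way to sanity-check a resolution against the VAE scale factor (a sketch; `snap_to_multiple` is a hypothetical helper, not part of the wrapper):

```python
def snap_to_multiple(value: int, multiple: int = 8) -> int:
    # Round a dimension down to the nearest multiple of the
    # VAE scaling factor (8 for CogVideoX).
    return (value // multiple) * multiple

print(snap_to_multiple(860))  # 856 -- 860 itself is not divisible by 8
print(snap_to_multiple(480))  # 480 -- already valid
```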
It works! Thanks a lot :)
I just set the same size. I thought about doing this before, but if you change any size (with any resize-image node) after the first run, an error pops up,
and you can only get rid of it by clearing the cache. That error confused me...