[Bug] 'WanTransformerInferAdaCaching' object has no attribute 'infer_modulation'
Description
Encountered the following error while testing AdaCache:

`'WanTransformerInferAdaCaching' object has no attribute 'infer_modulation'`
Steps to Reproduce
Run a script modified from run_wan_i2v_tea.sh:
```bash
python -m lightx2v.infer \
    --model_cls wan2.1 \
    --task i2v \
    --model_path $model_path \
    --config_json ${lightx2v_path}/configs/caching/adacache/wan_i2v_ada.json \
    --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside." \
    --negative_prompt "镜头晃动,色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走" \
    --image_path ${lightx2v_path}/assets/inputs/imgs/img_0.jpg \
    --save_result_path ${lightx2v_path}/save_results/output_lightx2v_wan_i2v_ada.mp4
```
Expected Result
A video is generated with AdaCache speedup.
Actual Result
The run fails with an AttributeError (full traceback below).
Environment Information
Log Information
```
Exception has occurred: AttributeError
'WanTransformerInferAdaCaching' object has no attribute 'infer_modulation'
File "/data0/project/LightX2V/lightx2v/models/networks/wan/infer/feature_caching/transformer_infer.py", line 366, in infer_calculating
shift_msa, scale_msa, gate_msa, c_shift_msa, c_scale_msa, c_gate_msa = self.infer_modulation(weights.blocks[block_idx].compute_phases[0], embed0)
^^^^^^^^^^^^^^^^^^^^^
File "/data0/project/LightX2V/lightx2v/models/networks/wan/infer/feature_caching/transformer_infer.py", line 330, in infer
x = self.infer_calculating(weights, grid_sizes, embed, x, embed0, seq_lens, freqs, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data0/project/LightX2V/lightx2v/models/networks/wan/model.py", line 443, in _infer_cond_uncond
x = self.transformer_infer.infer(self.transformer_weights, grid_sizes, embed, x, embed0, seq_lens, freqs, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data0/project/LightX2V/lightx2v/utils/custom_compiler.py", line 47, in wrapper
return state["original_func"](self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data0/project/LightX2V/lightx2v/models/networks/wan/model.py", line 410, in infer
noise_pred_cond = self._infer_cond_uncond(inputs, infer_condition=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data0/project/LightX2V/lightx2v/models/runners/default_runner.py", line 155, in run_segment
self.model.infer(self.inputs)
File "/data0/project/LightX2V/lightx2v/utils/memory_profiler.py", line 18, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/data0/project/LightX2V/lightx2v/models/runners/default_runner.py", line 291, in run_main
latents = self.run_segment(total_steps=total_steps)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data0/project/LightX2V/lightx2v/utils/profiler.py", line 77, in sync_wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/data0/project/LightX2V/lightx2v/models/runners/default_runner.py", line 366, in run_pipeline
gen_video_final = self.run_main()
^^^^^^^^^^^^^^^
File "/data0/project/LightX2V/lightx2v/utils/profiler.py", line 77, in sync_wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/data0/project/LightX2V/lightx2v/infer.py", line 109, in main
runner.run_pipeline(input_info)
File "/data0/project/LightX2V/lightx2v/infer.py", line 118, in <module>
```
Additional Information
I have already debugged and fixed some bugs before hitting this one:
- In lightx2v/models/networks/wan/model.py, line 435, the inputs are packed into `pre_infer_out`, but the Ada caching path expects them as individual arguments. The right code could be (see also the alternative sketch after this list):

```python
if self.config["feature_caching"] == "Ada":
    embed = pre_infer_out.embed
    grid_sizes = pre_infer_out.grid_sizes
    x = pre_infer_out.x
    embed0 = pre_infer_out.embed0
    seq_lens = pre_infer_out.seq_lens
    freqs = pre_infer_out.freqs
    context = pre_infer_out.context
    x = self.transformer_infer.infer(self.transformer_weights, grid_sizes, embed, x, embed0, seq_lens, freqs, context)
else:
    x = self.transformer_infer.infer(self.transformer_weights, pre_infer_out)
```
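Alternatively (just a sketch on my side, assuming `pre_infer_out` keeps exposing the attributes used above), the unpacking could live in the AdaCaching class itself so `model.py` keeps a single call signature for every caching mode:

```python
# Hypothetical wrapper inside WanTransformerInferAdaCaching: accept the packed
# pre_infer_out like the base class does. The attribute names come from the
# unpacking above; _infer_from_args is a made-up name for a method holding the
# current positional-argument logic.
def infer(self, weights, pre_infer_out):
    return self._infer_from_args(
        weights,
        pre_infer_out.grid_sizes,
        pre_infer_out.embed,
        pre_infer_out.x,
        pre_infer_out.embed0,
        pre_infer_out.seq_lens,
        pre_infer_out.freqs,
        pre_infer_out.context,
    )
```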
- In lightx2v/models/networks/wan/infer/feature_caching/transformer_infer.py, line 324, an unexpected variable `self.infer_conditional` is used. I guess this code was copied from TaylorSeer and the leftover reference was never removed?
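As a stopgap for that second bug (my own assumption about the intended default, not a verified fix), the lookup can be guarded so the missing attribute no longer crashes:

```python
# Hypothetical guard: fall back to True when the AdaCaching class never set
# the attribute; the proper fix is probably to delete the leftover reference.
if getattr(self, "infer_conditional", True):
    ...  # original branch body unchanged
```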