Running the gradio app throws the following error:
Traceback (most recent call last):
  File "D:\conda\envs\zenctrl\lib\site-packages\gradio\queueing.py", line 625, in process_events
    response = await route_utils.call_process_api(
  File "D:\conda\envs\zenctrl\lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\conda\envs\zenctrl\lib\site-packages\gradio\blocks.py", line 2146, in process_api
    result = await self.call_function(
  File "D:\conda\envs\zenctrl\lib\site-packages\gradio\blocks.py", line 1664, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "D:\conda\envs\zenctrl\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "D:\conda\envs\zenctrl\lib\site-packages\anyio\_backends\_asyncio.py", line 2470, in run_sync_in_worker_thread
    return await future
  File "D:\conda\envs\zenctrl\lib\site-packages\anyio\_backends\_asyncio.py", line 967, in run
    result = context.run(func, *args)
  File "D:\conda\envs\zenctrl\lib\site-packages\gradio\utils.py", line 884, in wrapper
    response = f(*args, **kwargs)
  File "E:\ZenCtrl\app.py", line 256, in _run
    pipe = get_pipeline()
  File "E:\ZenCtrl\app.py", line 156, in get_pipeline
    init_pipeline()  # safe here – this fn is @spaces.GPU wrapped
  File "E:\ZenCtrl\app.py", line 29, in init_pipeline
    transformer_model = FluxTransformer2DModel.from_pretrained(
  File "D:\conda\envs\zenctrl\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "D:\conda\envs\zenctrl\lib\site-packages\diffusers\models\modeling_utils.py", line 886, in from_pretrained
    accelerate.load_checkpoint_and_dispatch(
  File "D:\conda\envs\zenctrl\lib\site-packages\accelerate\big_modeling.py", line 617, in load_checkpoint_and_dispatch
    load_checkpoint_in_model(
  File "D:\conda\envs\zenctrl\lib\site-packages\accelerate\utils\modeling.py", line 1915, in load_checkpoint_in_model
    loaded_checkpoint = load_state_dict(checkpoint_file, device_map=device_map)
  File "D:\conda\envs\zenctrl\lib\site-packages\accelerate\utils\modeling.py", line 1707, in load_state_dict
    return torch.load(checkpoint_file, map_location=torch.device("cpu"))
  File "D:\conda\envs\zenctrl\lib\site-packages\torch\serialization.py", line 1524, in load
    raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. (1) In PyTorch 2.6, we changed the default value of the weights_only argument in torch.load from False to True. Re-running torch.load with weights_only set to False will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. (2) Alternatively, to load with weights_only=True please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL torchao.dtypes.affine_quantized_tensor.AffineQuantizedTensor was not an allowed global by default. Please use torch.serialization.add_safe_globals([torchao.dtypes.affine_quantized_tensor.AffineQuantizedTensor]) or the torch.serialization.safe_globals([torchao.dtypes.affine_quantized_tensor.AffineQuantizedTensor]) context manager to allowlist this global if you trust this class/function.
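For reference, option (2) from the error message would look roughly like this before the pipeline is built — a minimal sketch, assuming the checkpoint is trusted and that the installed PyTorch provides `torch.serialization.add_safe_globals`:

```python
# Minimal sketch of option (2): allowlist the torchao tensor class so that
# torch.load(..., weights_only=True) will accept it.
# Only do this if you trust the source of the checkpoint.
import torch
from torchao.dtypes.affine_quantized_tensor import AffineQuantizedTensor

torch.serialization.add_safe_globals([AffineQuantizedTensor])

# After this, FluxTransformer2DModel.from_pretrained(...) can be called as before.
# Alternatively, the allowlist can be scoped with the context manager:
# with torch.serialization.safe_globals([AffineQuantizedTensor]):
#     transformer_model = FluxTransformer2DModel.from_pretrained(...)
```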
Hello, could you look at this patch in the meantime: https://github.com/FotographerAI/ZenCtrl/issues/11#issue-3057249109
After applying the patch it still fails: AttributeError: Can't get attribute 'PlainAQTLayout' on <module 'torchao.dtypes.affine_quantized_tensor' from 'D:\conda\envs\zenctrl\lib\site-packages\torchao\dtypes\affine_quantized_tensor.py'>
The torchao version is 0.11.0.
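The checkpoint appears to have been pickled with an older torchao in which the class was still called `PlainAQTLayout`, while newer releases no longer expose that name. A speculative workaround sketch follows; the replacement name `PlainAQTTensorImpl` is an assumption about the rename, not confirmed for 0.11.0, so check what your installed torchao actually provides:

```python
# Speculative shim: alias the missing PlainAQTLayout name so the pickled
# checkpoint can resolve it during loading.
import torchao.dtypes.affine_quantized_tensor as aqt

if not hasattr(aqt, "PlainAQTLayout"):
    replacement = getattr(aqt, "PlainAQTTensorImpl", None)  # assumed rename target
    if replacement is not None:
        aqt.PlainAQTLayout = replacement  # let unpickling find the old name
    else:
        # No obvious substitute in this torchao build; installing the torchao
        # version the checkpoint was quantized with is the fallback.
        raise RuntimeError("No PlainAQTLayout replacement found in torchao")
```

Otherwise, downgrading torchao to the version the weights were originally quantized with is probably the more reliable route.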
Sorry for the delay, we are coming back to this by tomorrow with the torchao fixes (it's the main culprit here).
Thank you!
Yeah I cannot get past the PlainAQTLayout error either.
Two days have passed; is it still not fixed?