CLEAR
[NeurIPS 2025] Official PyTorch implementation of paper "CLEAR: Conv-Like Linearization Revs Pre-Trained Diffusion Transformers Up".
Traceback (most recent call last):
[rank2]:   File "/data/hanrui/CLEAR/distill.py", line 1245, in <module>
[rank2]:     main(args)
[rank2]:   File "/data/hanrui/CLEAR/distill.py", line 1028, in main
[rank2]:     teacher_pred = transformer_teacher(
[rank2]:   File "/data/hanrui/conda/pytorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1751, in...
Regarding the step where each query Q only attends to the keys/values (KV) within its corresponding circular window: is training strictly required for this? If we run inference directly with this masking but without training, would the results be roughly comparable?
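For context on what this question is asking, here is a minimal, hedged sketch of the circular-window attention being described: each query token on the 2-D token grid attends only to key/value tokens within a Euclidean radius. This is a naive dense-mask illustration, not the repository's optimized implementation; the function name, the square-grid assumption, and the single-head layout are all my own simplifications.

```python
import torch

def circular_local_attention(q, k, v, radius):
    """Sketch: each query at grid position (i, j) attends only to
    key/value tokens within Euclidean distance `radius` on the token grid.
    q, k, v: (num_tokens, dim) tensors for a side x side square token grid.
    (Hypothetical helper for illustration, not the CLEAR implementation.)
    """
    n, dim = q.shape
    side = int(n ** 0.5)
    # 2-D grid coordinates of each flattened token
    ys, xs = torch.meshgrid(torch.arange(side), torch.arange(side), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (n, 2)
    # pairwise Euclidean distances between token positions
    dist = torch.cdist(coords, coords)  # (n, n)
    mask = dist <= radius  # True inside the circular window
    scores = (q @ k.T) / dim ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))  # block tokens outside the window
    return torch.softmax(scores, dim=-1) @ v
```

Note that with a radius large enough to cover the whole grid this reduces to ordinary full attention, which is why the question of whether the window can be applied training-free (versus after distillation) is meaningful.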
I followed the steps shown in the repo to set up the conda environment and then ran distill.sh on an A100. It raises the following error: ``` Traceback (most recent call...
Hi, the synthetic 10K dataset is well suited for FLUX distillation. Would you mind sharing how the text prompts in the dataset were selected/obtained? Thanks.
Dear authors, thank you for open-sourcing your work. I am wondering how to exactly reproduce the 6x speedup at 8K resolution reported in Fig. 2.
Thanks for open-sourcing this wonderful work. I have a question: does this method only support 1:1 aspect-ratio (e.g. 1024*1024) images for training? What do I need to...
Hi, thanks for your great work! I have one question: in Table 7, at 8K resolution the TFLOPs are reduced from 847.73 to 3.92, but why is the overall speedup...
Hi @Huage001, I converted `inference_t2i.ipynb` to a .py file (the code is exactly the same) and tried to test the acceleration on FLUX-dev, but I hit an error in the pipe inference...