Some errors and solutions (May 2025)
CUDA version: 12.8; system: Ubuntu 20.04.6 LTS
- Problem: building flash-attention with "pip install . --no-build-isolation" fails:
  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for flash-attn
  Running setup.py clean for flash-attn
  Failed to build flash-attn
  ERROR: Failed to build installable wheels for some pyproject.toml based projects (flash-attn)
  The latest flash-attention release right now is v2.7.4.post1, which drops PyTorch 2.1, so it conflicts with the PyTorch version this repo requires.
  Solution: run "pip install flash-attn==2.7.3 --no-build-isolation" (2.7.4 does not work here for some reason; note that the PyPI package is named flash-attn, not flash-attention).
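  If the pinned install succeeds, a quick sanity check (just a sketch; it only assumes flash_attn and torch import cleanly in your environment) is to confirm the build matches your PyTorch:

  python -c "import torch, flash_attn; print('torch', torch.__version__, '| flash-attn', flash_attn.__version__)"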
- Problem: when preparing the InstructDiffusion checkpoint, the conversion script fails with:
  ImportError: cannot import name 'cached_download' from 'huggingface_hub'
  The latest huggingface_hub package no longer provides this function.
  Solution: install an older version of huggingface_hub. Here I solved it with "pip install huggingface_hub==0.24.0".
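  To confirm the downgrade took effect (a minimal check, assuming nothing else in the environment re-pins huggingface_hub), the import should now succeed and print 0.24.0:

  python -c "from huggingface_hub import cached_download; import huggingface_hub; print(huggingface_hub.__version__)"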
If I run into more problems and solve them, I will post them here.
- Problem: no LLaVA v1.1. Solution: the delta weights are at https://huggingface.co/liuhaotian/LLaVA-13b-delta-v1-1 (see the merge sketch below).
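  Since that checkpoint is published as delta weights, it normally has to be merged with the base LLaMA weights before it can be loaded; a sketch of the merge command from the LLaVA repo (the paths here are placeholders you need to adapt to your setup):

  # merge the delta with the base LLaMA-13B weights (paths are placeholders)
  python3 -m llava.model.apply_delta \
      --base /path/to/llama-13b \
      --target ./checkpoints/LLaVA-13B-v1 \
      --delta liuhaotian/LLaVA-13b-delta-v1-1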
- I ran "pip install flash-attention==2.7.3 --no-build-isolation" but got "Could not find a version that satisfies the requirement flash-attention==2.7.3 (from versions: 1.0.0)".
@fwbc the PyPI package named flash-attention only has version 1.0.0, which is why pip cannot satisfy ==2.7.3; the package you want is flash-attn. I would suggest checking which flash-attn versions are available to you and choosing the highest one that builds in your environment.
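To list the versions pip can actually see on your index, pip's (still experimental) index subcommand can help, for example:

pip index versions flash-attn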
- Problem:
  Traceback (most recent call last):
    File "/data2/zg5/image_edit/smart_edit/SmartEdit/test/DS_SmartEdit_test.py", line 169, in <module>
      from model.DS_SmartEdit_model import SmartEdit_model
  ModuleNotFoundError: No module named 'model'
  Solution: add this at the top of the test file so Python can find the repo-root "model" package:

  import sys
  import os
  sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
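  Alternatively (a common workaround, not tested here), you can leave the file unchanged and run the test from the repository root with the root on PYTHONPATH:

  PYTHONPATH=. python test/DS_SmartEdit_test.py [the usual test arguments]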
- Problem: ModuleNotFoundError: No module named 'basicsr'. Solution: "pip install basicsr".
For these test commands:

python test/DS_SmartEdit_test.py --is_understanding_scenes True --model_name_or_path "./checkpoints/vicuna-7b-v1-1" --LLaVA_model_path "./checkpoints/LLaVA-7B-v1" --save_dir './checkpoints/SmartEdit-7B/Understand-15000' --steps 15000 --total_dir "./checkpoints/SmartEdit-7B" --sd_qformer_version "v1.1-7b" --resize_resolution 256

python test/DS_SmartEdit_test.py --is_reasoning_scenes True --model_name_or_path "./checkpoints/vicuna-7b-v1-1" --LLaVA_model_path "./checkpoints/LLaVA-7B-v1" --save_dir './checkpoints/SmartEdit-7B/Reason-15000' --steps 15000 --total_dir "./checkpoints/SmartEdit-7B" --sd_qformer_version "v1.1-7b" --resize_resolution 256

python test/DS_SmartEdit_test.py --is_understanding_scenes True --model_name_or_path "./checkpoints/vicuna-13b-v1-1" --LLaVA_model_path "./checkpoints/LLaVA-13B-v1" --save_dir './checkpoints/SmartEdit-13B/Understand-15000' --steps 15000 --total_dir "./checkpoints/SmartEdit-13B" --sd_qformer_version "v1.1-13b" --resize_resolution 256

python test/DS_SmartEdit_test.py --is_reasoning_scenes True --model_name_or_path "./checkpoints/vicuna-13b-v1-1" --LLaVA_model_path "./checkpoints/LLaVA-13B-v1" --save_dir './checkpoints/SmartEdit-13B/Reason-15000' --steps 15000 --total_dir "./checkpoints/SmartEdit-13B" --sd_qformer_version "v1.1-13b" --resize_resolution 256

Do you know where these models are? I could not find anything like "vicuna-7b-v1-1" on their Hugging Face, and because their Hugging Face page does not provide additional user guides or notebooks, I do not know how to use their model.
You can use the prebuilt release wheels instead of building from source: download the one that matches your Python, PyTorch, and CUDA versions and run "pip install xxx.whl". Hope it helps!