
SafetensorError: Error while deserializing header: HeaderTooLarge

Open ShaunXZ opened this issue 2 years ago • 12 comments

Hi,

I am trying to convert a LoRA from safetensors format to .bin using the script in format_convert.py. The .bin file is generated successfully, but loading it always throws a HeaderTooLarge error. Could you please help? Thanks in advance!

[screenshot: SafetensorError: Error while deserializing header: HeaderTooLarge]

Below is the script that produces the error. Environment: Google Colab.

import torch
from diffusers import StableDiffusionPipeline
from format_convert import safetensors_to_bin

# load the diffusers base model
model_id = "runwayml/stable-diffusion-v1-5"
pipeline = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float32)

# convert
# you have to download a suitable safetensors file; not all are supported!
# download example from https://huggingface.co/SenY/LoRA/tree/main
# wget https://huggingface.co/SenY/LoRA/resolve/main/CheapCotton.safetensors
safetensor_path = "CheapCotton.safetensors"

bin_path = "CheapCotton.bin"
safetensors_to_bin(safetensor_path, bin_path)

# load it into the UNet
# note that diffusers' load_attn_procs only supports adding LoRA to attention layers;
# LoRA inserted elsewhere is not supported yet
pipeline.unet.load_attn_procs(bin_path)

ShaunXZ avatar Mar 27 '23 13:03 ShaunXZ
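[Editor's note] A quick way to see what HeaderTooLarge is actually complaining about is to parse the safetensors header by hand. This is a minimal diagnostic sketch, not part of the repo's scripts; the 100 MB sanity bound is arbitrary, not the library's exact limit:

```python
import json
import struct

def inspect_safetensors_header(path):
    """Parse a .safetensors header manually to diagnose HeaderTooLarge.

    A valid safetensors file begins with an unsigned 64-bit little-endian
    header length, followed by that many bytes of JSON. If the first 8 bytes
    are really HTML, a git-lfs pointer, or a torch pickle, they decode to an
    absurd length, which is what the deserializer rejects.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        # 100 MB is an arbitrary sanity bound, not the library's exact limit
        if header_len > 100 * 1024 * 1024:
            raise ValueError(
                f"implausible header length {header_len}; "
                "this file is probably not a real safetensors file"
            )
        return json.loads(f.read(header_len))
```

If this raises ValueError on your downloaded file, the download itself is the problem, not the conversion script.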

Got the same issue

Pirog17000 avatar Mar 27 '23 14:03 Pirog17000

@ShaunXZ @Pirog17000 This issue may help. I also ran into this problem before and solved it by re-downloading the model. Please check whether the base model and the safetensors file were downloaded correctly; if the file size is too small, something went wrong. By the way, can you share your Colab link so that I can take a look for you?

haofanwang avatar Mar 27 '23 19:03 haofanwang
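[Editor's note] One common way a download ends up "too small" is getting a git-lfs pointer file (a ~130-byte text stub) instead of the actual weights; loading that stub fails with HeaderTooLarge. A heuristic check (the helper name and the size threshold are my own, not from the repo):

```python
import os

def looks_like_lfs_pointer(path, min_bytes=1_000_000):
    """Heuristic check for a bad Hugging Face download.

    Fetching a file without LFS support can leave a tiny text pointer on
    disk instead of the weights. min_bytes is an arbitrary threshold for
    "suspiciously small", not an official limit.
    """
    if os.path.getsize(path) >= min_bytes:
        return False
    with open(path, "rb") as f:
        return f.read(64).startswith(b"version https://git-lfs.github.com")
```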

@haofanwang Thank you for your quick response. I double-checked the downloaded safetensors file and it seems to have the right size (over 100 MB). Here is the Colab used to test this script: https://colab.research.google.com/drive/12wFobWFL_NZ64fOV0gEYXePzZMZlRpr_?usp=sharing

Thanks,

ShaunXZ avatar Mar 28 '23 02:03 ShaunXZ

My issue was resolved by updating diffusers. Since I run it locally, my steps were:

pip uninstall diffusers
pip install git+https://github.com/huggingface/diffusers.git

No reinstall or update flags helped; only a straightforward uninstall-then-install. No more issues, it works well.

Pirog17000 avatar Mar 28 '23 14:03 Pirog17000

@Pirog17000 Hi, I tried your method in colab and it still didn't work... Could you take a look at the colab link above? Thank you!

ShaunXZ avatar Mar 30 '23 01:03 ShaunXZ

+1 I'm also seeing this issue 😭 It's able to create the bin, but fails when running pipeline.unet.load_attn_procs(bin_path)

tchanxx avatar Apr 28 '23 17:04 tchanxx

+1 +1 +1 I'm also seeing this issue 😭 It's able to create the bin, but fails when running pipeline.unet.load_attn_procs(bin_path)

sanbuphy avatar May 11 '23 07:05 sanbuphy

According to issue3367, pipeline.unet.load_attn_procs() takes the path to the directory where the .bin file is stored, not the path to the .bin file itself. Changing the input from "CheapCotton.bin" to "/PathToWhereItsStored" solved this error for me.

ksai2324 avatar Jul 02 '23 11:07 ksai2324

I hit this issue too. Here is how I worked around it, though I am not sure my way is right. The error occurred with:

lora_model_path = "./text_to_image/sddata/finetune/lora/pokemon/checkpoint-11000/pytorch_model.bin"
pipe.unet.load_attn_procs(lora_model_path)

which raised HeaderTooLarge, just like yours. I then changed it to the directory:

lora_model_path = "./text_to_image/sddata/finetune/lora/pokemon/checkpoint-11000/"
pipe.unet.load_attn_procs(lora_model_path)

which raised a different error: no file pytorch_lora_weights.bin. So I ran:

cp pytorch_model.bin pytorch_lora_weights.bin

and reran the directory version, and it succeeded. Why does it work this way?

JaosonMa avatar Jul 12 '23 03:07 JaosonMa

same issue here, any solution yet?

FrancisDacian avatar Jul 12 '23 14:07 FrancisDacian

I think your solution is right:

  1. mkdir a new folder (call it whatever you want)
  2. rename the converted .bin file to pytorch_lora_weights.bin
  3. put pytorch_lora_weights.bin into the folder you just created
  4. call pipe.unet.load_attn_procs() with the path to that folder

and it will work.

FrancisDacian avatar Jul 12 '23 14:07 FrancisDacian
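[Editor's note] The steps above can be sketched in Python. The folder name here is arbitrary; the target filename pytorch_lora_weights.bin is the one diffusers looked for at the time (circa v0.14-v0.17) when load_attn_procs was given a directory:

```python
import os
import shutil

def stage_lora_dir(bin_path, folder="my_lora"):
    """Copy a converted LoRA .bin into a fresh folder under the filename
    diffusers expects, so load_attn_procs can be pointed at the folder.
    """
    os.makedirs(folder, exist_ok=True)
    shutil.copy(bin_path, os.path.join(folder, "pytorch_lora_weights.bin"))
    return folder

# usage (hypothetical paths):
# pipeline.unet.load_attn_procs(stage_lora_dir("CheapCotton.bin"))
```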

However, I then met this error:

File "/workspace/demo/Diffusion/models.py", line 301, in get_model
    return basic_unet.load_attn_procs(self.lora)
File "/usr/local/lib/python3.8/dist-packages/diffusers/loaders.py", line 234, in load_attn_procs
    rank = value_dict["to_k_lora.down.weight"].shape[0]
KeyError: 'to_k_lora.down.weight'

kkwhale7 avatar Sep 01 '23 07:09 kkwhale7
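[Editor's note] A KeyError like the one above usually means the checkpoint's keys are not in the layout load_attn_procs indexes. A way to check is to group the state-dict keys (obtainable via torch.load(bin_path).keys()) roughly the way diffusers did, splitting at ".processor.", and compare against the suffixes it looks up. This sketch operates on plain key strings and assumes diffusers-style naming; kohya/A1111-style LoRAs use "lora_unet_..." keys and need conversion first:

```python
# Per-layer suffixes that diffusers' load_attn_procs indexed at the time
# (see the KeyError on "to_k_lora.down.weight" above).
EXPECTED_SUFFIXES = {
    "to_q_lora.down.weight", "to_q_lora.up.weight",
    "to_k_lora.down.weight", "to_k_lora.up.weight",
    "to_v_lora.down.weight", "to_v_lora.up.weight",
    "to_out_lora.down.weight", "to_out_lora.up.weight",
}

def diagnose_lora_keys(keys):
    """Group state-dict keys per attention processor and report gaps.

    Returns (missing, foreign): `missing` maps each layer prefix to the
    expected suffixes it lacks; `foreign` lists keys without ".processor."
    at all, which indicates a non-diffusers naming scheme.
    """
    groups, foreign = {}, []
    for k in keys:
        if ".processor." in k:
            prefix, suffix = k.split(".processor.", 1)
            groups.setdefault(prefix, set()).add(suffix)
        else:
            foreign.append(k)
    missing = {p: EXPECTED_SUFFIXES - s
               for p, s in groups.items() if EXPECTED_SUFFIXES - s}
    return missing, foreign
```

If `foreign` is non-empty, the file likely needs conversion before load_attn_procs can read it at all.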