
Models not pushing to specified username (organisation)

Open RonanKMcGovern opened this issue 1 year ago • 1 comment

Running:

hf_username = "Trelis"
new_model_name = "Meta-Llama-3-8B-Instruct-Gaeilge"
if True: model.push_to_hub_merged(f"{hf_username}/{new_model_name}", tokenizer, save_method = "merged_16bit")

still pushes the model to the username associated with the HF token (RonanMcGovern in my case), not the hf_username I have specified (Trelis, the org).
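
In the meantime, a workaround that seems to sidestep the repo-id rewriting is to save the merged weights to a local folder with save_pretrained_merged and then upload that folder yourself via huggingface_hub. This is only a sketch, not Unsloth's intended flow; the local directory name and the assumption that the token has write access to the Trelis org are mine:

from huggingface_hub import HfApi

# Save the merged 16-bit weights locally instead of pushing directly.
model.save_pretrained_merged(new_model_name, tokenizer, save_method = "merged_16bit")

# Upload the folder to the org repo explicitly, bypassing push_to_hub_merged.
api = HfApi()
api.create_repo(repo_id = f"{hf_username}/{new_model_name}", exist_ok = True)
api.upload_folder(folder_path = new_model_name, repo_id = f"{hf_username}/{new_model_name}")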

Full logs:

Unsloth: Merging 4bit and LoRA weights to 16bit...
Unsloth: Will use up to 1390.17 out of 2003.87 RAM for saving.
100%|██████████| 32/32 [00:00<00:00, 96.41it/s]
Unsloth: Saving tokenizer...
tokenizer config file saved in Meta-Llama-3-8B-Instruct-Gaeilge/tokenizer_config.json
Special tokens file saved in Meta-Llama-3-8B-Instruct-Gaeilge/special_tokens_map.json
Uploading the following files to RonanMcGovern/Meta-Llama-3-8B-Instruct-Gaeilge: tokenizer.json,special_tokens_map.json,tokenizer_config.json
Model config LlamaConfig {
  "_name_or_path": "meta-llama/Meta-Llama-3-8B-Instruct",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128009,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 8192,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pad_token_id": 128255,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.41.0",
  "unsloth_version": "2024.5",
  "use_cache": false,
  "vocab_size": 128256
}

Configuration saved in Meta-Llama-3-8B-Instruct-Gaeilge/config.json
Configuration saved in Meta-Llama-3-8B-Instruct-Gaeilge/generation_config.json
Done.
Unsloth: Saving model... This might take 5 minutes for Llama-7b...
The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 4 checkpoint shards. You can find where each parameters has been saved in the index located at Meta-Llama-3-8B-Instruct-Gaeilge/model.safetensors.index.json.
Uploading the following files to RonanMcGovern/Meta-Llama-3-8B-Instruct-Gaeilge: README.md,model.safetensors.index.json,model-00004-of-00004.safetensors,model-00003-of-00004.safetensors,model-00002-of-00004.safetensors,model-00001-of-00004.safetensors,generation_config.json,config.json
100% 4/4 [01:03<00:00, 17.43s/it]
model-00004-of-00004.safetensors: 1.18G/? [00:04<00:00, 669MB/s]
model-00003-of-00004.safetensors: 4.93G/? [00:19<00:00, 614MB/s]
model-00002-of-00004.safetensors: 5.01G/? [00:18<00:00, 779MB/s]
model-00001-of-00004.safetensors: 4.99G/? [00:19<00:00, 630MB/s]
Done.
Saved merged model to https://huggingface.co/Trelis/Meta-Llama-3-8B-Instruct-Gaeilge

Oddly, a README does still get pushed to the Trelis repo... but the model and tokenizer go to the RonanMcGovern repo.
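
For anyone else debugging this, a quick sanity check (a sketch using huggingface_hub, with the repo names taken from the logs above) is to list the files in both repos and see where the weights actually landed:

from huggingface_hub import HfApi

api = HfApi()
# Based on the behaviour described above, the org repo should contain only the
# README, while the personal repo ends up with the weights and tokenizer files.
print(api.list_repo_files("Trelis/Meta-Llama-3-8B-Instruct-Gaeilge"))
print(api.list_repo_files("RonanMcGovern/Meta-Llama-3-8B-Instruct-Gaeilge"))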

RonanKMcGovern avatar May 21 '24 17:05 RonanKMcGovern

Oh my, I will check this.

danielhanchen avatar May 21 '24 20:05 danielhanchen

Hi @RonanKMcGovern, apologies for the error. Hopefully the issue is solved now? Thanks!

shimmyshimmer avatar Oct 28 '24 03:10 shimmyshimmer