
Using deepspeed and activation_offloading together results in wrong parameter keys in saved weights

Open zinccat opened this issue 3 months ago • 15 comments

Please check that this issue hasn't been reported before.

  • [x] I searched previous Bug Reports and didn't find any similar reports.

Expected Behavior

The keys in the saved safetensors file should be the same as in the original checkpoint.

Current behaviour

Parameter keys look like model.layers.8._checkpoint_wrapped_model.mlp.down_proj.weight, i.e. they contain an extra activation checkpointing wrapper segment.

Steps to reproduce

base_model: Qwen/Qwen3-0.6B
gradient_checkpointing: true
activation_offloading: true
deepspeed: deepspeed_configs/zero3_bf16.json

Config yaml


Possible solution

Correctly unwrap the model before saving
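
For illustration, a minimal sketch of what that could look like (not the actual axolotl fix; the wrapper segment name is taken from the report above, and the helper name is hypothetical):

# Hypothetical helper: strip the activation-checkpointing wrapper segment
# from state_dict keys before the weights are saved.
WRAPPER = "_checkpoint_wrapped_model."

def unwrap_state_dict_keys(state_dict):
    # e.g. "model.layers.8._checkpoint_wrapped_model.mlp.down_proj.weight"
    #   -> "model.layers.8.mlp.down_proj.weight"
    return {key.replace(WRAPPER, ""): tensor for key, tensor in state_dict.items()}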

Which Operating Systems are you using?

  • [x] Linux
  • [ ] macOS
  • [ ] Windows

Python Version

3.12

axolotl branch-commit

main/9640338d37d0398cd3c0c0ab6e629b6dd9dcd5d3

Acknowledgements

  • [x] My issue title is concise, descriptive, and in title casing.
  • [x] I have searched the existing issues to make sure this bug has not been reported yet.
  • [x] I am using the latest version of axolotl.
  • [x] I have provided enough information for the maintainers to reproduce and diagnose the issue.

zinccat avatar Sep 17 '25 23:09 zinccat

Thanks for the report. We're aware of this. For now, we're thinking of providing a post-training script that rewrites the keys as a workaround.

NanoCode012 avatar Sep 18 '25 04:09 NanoCode012

I'm currently using a similar approach

zinccat avatar Sep 18 '25 05:09 zinccat

@zinccat , do you have a working script for the above that you can share? I didn't want to duplicate the effort if you've done so already.

NanoCode012 avatar Sep 24 '25 06:09 NanoCode012

It's quite simple: just replace the _checkpoint_wrapped_model part in the model weight keys.
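
For example, a rough sketch of that rewrite over an already-saved sharded safetensors checkpoint (paths are illustrative, and the index update assumes the usual model.safetensors.index.json layout):

import glob
import json
import os

from safetensors.torch import load_file, save_file

checkpoint_dir = "outputs"  # hypothetical path to the saved checkpoint
wrapper = "_checkpoint_wrapped_model."

# Rewrite keys inside every shard in place
for shard in glob.glob(os.path.join(checkpoint_dir, "*.safetensors")):
    tensors = load_file(shard)
    fixed = {k.replace(wrapper, ""): v for k, v in tensors.items()}
    save_file(fixed, shard, metadata={"format": "pt"})

# Keep the index's weight_map consistent with the renamed keys
index_path = os.path.join(checkpoint_dir, "model.safetensors.index.json")
if os.path.exists(index_path):
    with open(index_path) as f:
        index = json.load(f)
    index["weight_map"] = {k.replace(wrapper, ""): v for k, v in index["weight_map"].items()}
    with open(index_path, "w") as f:
        json.dump(index, f, indent=2)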

zinccat avatar Sep 24 '25 07:09 zinccat

Yeah, leaving this gist here for others: https://gist.github.com/NanoCode012/0c971d00a32a7d691bd0c19fc3a6d6e1

@shang-zhu, please give this script a try while we debug the real reason

NanoCode012 avatar Sep 24 '25 07:09 NanoCode012

Any fix for this? I'm running into a similar issue after using axolotl to fine-tune gpt-oss 20b, but my error is:

(VllmWorker TP0 pid=404) ERROR 11-09 22:34:09 [multiproc_executor.py:559] KeyError: 'model.layers.2.mlp.experts.w2_bias'

so it's a different key error.

NicholasGuerrero avatar Nov 10 '25 06:11 NicholasGuerrero

@NicholasGuerrero, hey, could you provide more of the trace?

The issue above is about keys being nested under an additional _checkpoint_wrapped... prefix, which I don't see in yours.

NanoCode012 avatar Nov 11 '25 14:11 NanoCode012

@NanoCode012 sorry for the delay. I take the trained model and try to deploy it with vLLM:

docker compose -f docker-compose.gptoss-20b-tool-trained.yml up

which produces the error (VllmWorker TP0 pid=404) ERROR 11-09 22:34:09 [multiproc_executor.py:559] KeyError: 'model.layers.2.mlp.experts.w2_bias'.

I'll give you more logs shortly.

but the model was trained straight from the example here: https://github.com/axolotl-ai-cloud/axolotl/blob/main/examples/gpt-oss/gpt-oss-20b-fft-fsdp2-offload.yaml

NicholasGuerrero avatar Nov 18 '25 23:11 NicholasGuerrero

docker logs -f vllm-trained DEBUG 11-20 00:38:40 [init.py:30] No plugins for group vllm.platform_plugins found. DEBUG 11-20 00:38:40 [init.py:34] Checking if TPU platform is available. DEBUG 11-20 00:38:40 [init.py:52] TPU platform is not available because: No module named 'libtpu' DEBUG 11-20 00:38:40 [init.py:58] Checking if CUDA platform is available. DEBUG 11-20 00:38:40 [init.py:78] Confirmed CUDA platform is available. DEBUG 11-20 00:38:40 [init.py:106] Checking if ROCm platform is available. DEBUG 11-20 00:38:40 [init.py:120] ROCm platform is not available because: No module named 'amdsmi' DEBUG 11-20 00:38:40 [init.py:127] Checking if XPU platform is available. DEBUG 11-20 00:38:40 [init.py:146] XPU platform is not available because: No module named 'intel_extension_for_pytorch' DEBUG 11-20 00:38:40 [init.py:153] Checking if CPU platform is available. DEBUG 11-20 00:38:40 [init.py:175] Checking if Neuron platform is available. DEBUG 11-20 00:38:40 [init.py:58] Checking if CUDA platform is available. DEBUG 11-20 00:38:40 [init.py:78] Confirmed CUDA platform is available. INFO 11-20 00:38:40 [init.py:241] Automatically detected platform cuda. DEBUG 11-20 00:38:42 [utils.py:168] Setting VLLM_WORKER_MULTIPROC_METHOD to 'spawn' DEBUG 11-20 00:38:42 [init.py:38] Available plugins for group vllm.general_plugins: DEBUG 11-20 00:38:42 [init.py:40] - lora_filesystem_resolver -> vllm.plugins.lora_resolvers.filesystem_resolver:register_filesystem_resolver DEBUG 11-20 00:38:42 [init.py:43] All plugins in this group will be loaded. Set VLLM_PLUGINS to control which plugins to load. (APIServer pid=1) INFO 11-20 00:38:42 [api_server.py:1787] vLLM API server version 0.10.2.dev2+gf5635d62e.d20250807 (APIServer pid=1) INFO 11-20 00:38:42 [utils.py:326] non-default args: {'model_tag': 'True', 'return_tokens_as_token_ids': True, 'model': '/workspace/model', 'max_model_len': 4096, 'max_logprobs': 1, 'distributed_executor_backend': 'mp', 'tensor_parallel_size': 2, 'gpu_memory_utilization': 0.82} (APIServer pid=1) INFO 11-20 00:38:48 [config.py:726] Resolved architecture: GptOssForCausalLM (APIServer pid=1) ERROR 11-20 00:38:48 [config.py:123] Error retrieving safetensors: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/workspace/model'. Use repo_type argument if needed., retrying 1 of 2 (APIServer pid=1) ERROR 11-20 00:38:50 [config.py:121] Error retrieving safetensors: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/workspace/model'. Use repo_type argument if needed. (APIServer pid=1) INFO 11-20 00:38:50 [config.py:3628] Downcasting torch.float32 to torch.bfloat16. (APIServer pid=1) INFO 11-20 00:38:50 [config.py:1759] Using max model len 4096 (APIServer pid=1) DEBUG 11-20 00:38:50 [arg_utils.py:1706] Setting max_num_batched_tokens to 2048 for OPENAI_API_SERVER usage context. (APIServer pid=1) DEBUG 11-20 00:38:50 [arg_utils.py:1715] Setting max_num_seqs to 256 for OPENAI_API_SERVER usage context. (APIServer pid=1) INFO 11-20 00:38:51 [config.py:2588] Chunked prefill is enabled with max_num_batched_tokens=2048. 
(APIServer pid=1) INFO 11-20 00:38:51 [config.py:244] Overriding cuda graph sizes to [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512, 528, 544, 560, 576, 592, 608, 624, 640, 656, 672, 688, 704, 720, 736, 752, 768, 784, 800, 816, 832, 848, 864, 880, 896, 912, 928, 944, 960, 976, 992, 1008, 1024] DEBUG 11-20 00:38:55 [init.py:30] No plugins for group vllm.platform_plugins found. DEBUG 11-20 00:38:55 [init.py:34] Checking if TPU platform is available. DEBUG 11-20 00:38:55 [init.py:52] TPU platform is not available because: No module named 'libtpu' DEBUG 11-20 00:38:55 [init.py:58] Checking if CUDA platform is available. DEBUG 11-20 00:38:55 [init.py:78] Confirmed CUDA platform is available. DEBUG 11-20 00:38:55 [init.py:106] Checking if ROCm platform is available. DEBUG 11-20 00:38:55 [init.py:120] ROCm platform is not available because: No module named 'amdsmi' DEBUG 11-20 00:38:55 [init.py:127] Checking if XPU platform is available. DEBUG 11-20 00:38:55 [init.py:146] XPU platform is not available because: No module named 'intel_extension_for_pytorch' DEBUG 11-20 00:38:55 [init.py:153] Checking if CPU platform is available. DEBUG 11-20 00:38:55 [init.py:175] Checking if Neuron platform is available. DEBUG 11-20 00:38:55 [init.py:58] Checking if CUDA platform is available. DEBUG 11-20 00:38:55 [init.py:78] Confirmed CUDA platform is available. INFO 11-20 00:38:55 [init.py:241] Automatically detected platform cuda. (EngineCore_0 pid=270) INFO 11-20 00:38:56 [core.py:654] Waiting for init message from front-end. (APIServer pid=1) DEBUG 11-20 00:38:56 [utils.py:822] HELLO from local core engine process 0. (EngineCore_0 pid=270) DEBUG 11-20 00:38:56 [core.py:662] Received init message: EngineHandshakeMetadata(addresses=EngineZmqAddresses(inputs=['ipc:///tmp/327745b9-4d71-496e-8422-d1ade4d8aa80'], outputs=['ipc:///tmp/655e3284-5913-4400-bcf7-1e13aab142a7'], coordinator_input=None, coordinator_output=None, frontend_stats_publish_address=None), parallel_config={'data_parallel_master_ip': '127.0.0.1', 'data_parallel_master_port': 0, 'data_parallel_size': 1}) (EngineCore_0 pid=270) DEBUG 11-20 00:38:56 [core.py:499] Has DP Coordinator: False, stats publish address: None (EngineCore_0 pid=270) DEBUG 11-20 00:38:56 [init.py:38] Available plugins for group vllm.general_plugins: (EngineCore_0 pid=270) DEBUG 11-20 00:38:56 [init.py:40] - lora_filesystem_resolver -> vllm.plugins.lora_resolvers.filesystem_resolver:register_filesystem_resolver (EngineCore_0 pid=270) DEBUG 11-20 00:38:56 [init.py:43] All plugins in this group will be loaded. Set VLLM_PLUGINS to control which plugins to load. 
(EngineCore_0 pid=270) INFO 11-20 00:38:56 [core.py:73] Initializing a V1 LLM engine (v0.10.2.dev2+gf5635d62e.d20250807) with config: model='/workspace/model', speculative_config=None, tokenizer='/workspace/model', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=auto, tensor_parallel_size=2, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend='openai'), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=/workspace/model, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output","vllm.mamba_mixer2"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[1024,1008,992,976,960,944,928,912,896,880,864,848,832,816,800,784,768,752,736,720,704,688,672,656,640,624,608,592,576,560,544,528,512,496,480,464,448,432,416,400,384,368,352,336,320,304,288,272,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":1024,"local_cache_dir":null} (EngineCore_0 pid=270) (EngineCore_0 pid=270) LL LL MMM MMM (EngineCore_0 pid=270) LL LL MMMM MMMM (EngineCore_0 pid=270) V LL LL MM MM MM MM (EngineCore_0 pid=270) vvvv VVVV LL LL MM MM MM MM (EngineCore_0 pid=270) vvvv VVVV LL LL MM MMM MM (EngineCore_0 pid=270) vvv VVVV LL LL MM M MM (EngineCore_0 pid=270) vvVVVV LL LL MM MM (EngineCore_0 pid=270) VVVV LLLLLLLLLL LLLLLLLLL M M (EngineCore_0 pid=270) (EngineCore_0 pid=270) WARNING 11-20 00:38:56 [multiproc_worker_utils.py:273] Reducing Torch parallelism from 32 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed. (EngineCore_0 pid=270) DEBUG 11-20 00:38:56 [shm_broadcast.py:243] Binding to ipc:///tmp/ef6af3e2-8503-4266-969a-f5b248f38545 (EngineCore_0 pid=270) INFO 11-20 00:38:56 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1], buffer_handle=(2, 16777216, 10, 'psm_082712c7'), local_subscribe_addr='ipc:///tmp/ef6af3e2-8503-4266-969a-f5b248f38545', remote_subscribe_addr=None, remote_addr_ipv6=False) DEBUG 11-20 00:38:59 [init.py:30] No plugins for group vllm.platform_plugins found. DEBUG 11-20 00:38:59 [init.py:34] Checking if TPU platform is available. DEBUG 11-20 00:38:59 [init.py:52] TPU platform is not available because: No module named 'libtpu' DEBUG 11-20 00:38:59 [init.py:58] Checking if CUDA platform is available. DEBUG 11-20 00:38:59 [init.py:30] No plugins for group vllm.platform_plugins found. DEBUG 11-20 00:38:59 [init.py:34] Checking if TPU platform is available. 
DEBUG 11-20 00:38:59 [init.py:52] TPU platform is not available because: No module named 'libtpu' DEBUG 11-20 00:38:59 [init.py:58] Checking if CUDA platform is available. DEBUG 11-20 00:38:59 [init.py:78] Confirmed CUDA platform is available. DEBUG 11-20 00:38:59 [init.py:106] Checking if ROCm platform is available. DEBUG 11-20 00:38:59 [init.py:120] ROCm platform is not available because: No module named 'amdsmi' DEBUG 11-20 00:38:59 [init.py:127] Checking if XPU platform is available. DEBUG 11-20 00:38:59 [init.py:146] XPU platform is not available because: No module named 'intel_extension_for_pytorch' DEBUG 11-20 00:38:59 [init.py:153] Checking if CPU platform is available. DEBUG 11-20 00:38:59 [init.py:175] Checking if Neuron platform is available. DEBUG 11-20 00:38:59 [init.py:58] Checking if CUDA platform is available. DEBUG 11-20 00:38:59 [init.py:78] Confirmed CUDA platform is available. INFO 11-20 00:38:59 [init.py:241] Automatically detected platform cuda. DEBUG 11-20 00:38:59 [init.py:78] Confirmed CUDA platform is available. DEBUG 11-20 00:38:59 [init.py:106] Checking if ROCm platform is available. DEBUG 11-20 00:38:59 [init.py:120] ROCm platform is not available because: No module named 'amdsmi' DEBUG 11-20 00:38:59 [init.py:127] Checking if XPU platform is available. DEBUG 11-20 00:38:59 [init.py:146] XPU platform is not available because: No module named 'intel_extension_for_pytorch' DEBUG 11-20 00:38:59 [init.py:153] Checking if CPU platform is available. DEBUG 11-20 00:38:59 [init.py:175] Checking if Neuron platform is available. DEBUG 11-20 00:38:59 [init.py:58] Checking if CUDA platform is available. DEBUG 11-20 00:38:59 [init.py:78] Confirmed CUDA platform is available. INFO 11-20 00:38:59 [init.py:241] Automatically detected platform cuda. DEBUG 11-20 00:39:00 [init.py:38] Available plugins for group vllm.general_plugins: DEBUG 11-20 00:39:00 [init.py:40] - lora_filesystem_resolver -> vllm.plugins.lora_resolvers.filesystem_resolver:register_filesystem_resolver DEBUG 11-20 00:39:00 [init.py:43] All plugins in this group will be loaded. Set VLLM_PLUGINS to control which plugins to load. DEBUG 11-20 00:39:00 [init.py:38] Available plugins for group vllm.general_plugins: DEBUG 11-20 00:39:00 [init.py:40] - lora_filesystem_resolver -> vllm.plugins.lora_resolvers.filesystem_resolver:register_filesystem_resolver DEBUG 11-20 00:39:00 [init.py:43] All plugins in this group will be loaded. Set VLLM_PLUGINS to control which plugins to load. W1120 00:39:01.989000 404 torch/utils/cpp_extension.py:2425] TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. W1120 00:39:01.989000 404 torch/utils/cpp_extension.py:2425] If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'] to specific architectures. W1120 00:39:01.989000 405 torch/utils/cpp_extension.py:2425] TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation. W1120 00:39:01.989000 405 torch/utils/cpp_extension.py:2425] If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'] to specific architectures. 
DEBUG 11-20 00:39:02 [decorators.py:139] Inferred dynamic dimensions for forward method of <class 'vllm.model_executor.models.llama.LlamaModel'>: ['input_ids', 'positions', 'intermediate_tensors', 'inputs_embeds'] DEBUG 11-20 00:39:02 [decorators.py:139] Inferred dynamic dimensions for forward method of <class 'vllm.model_executor.models.llama_eagle3.LlamaModel'>: ['input_ids', 'positions', 'hidden_states'] DEBUG 11-20 00:39:02 [decorators.py:139] Inferred dynamic dimensions for forward method of <class 'vllm.model_executor.models.llama.LlamaModel'>: ['input_ids', 'positions', 'intermediate_tensors', 'inputs_embeds'] DEBUG 11-20 00:39:02 [decorators.py:139] Inferred dynamic dimensions for forward method of <class 'vllm.model_executor.models.llama_eagle3.LlamaModel'>: ['input_ids', 'positions', 'hidden_states'] DEBUG 11-20 00:39:02 [init.py:3014] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f6295730f80> DEBUG 11-20 00:39:02 [config.py:5083] enabled custom ops: Counter() DEBUG 11-20 00:39:02 [config.py:5085] disabled custom ops: Counter() DEBUG 11-20 00:39:02 [init.py:3014] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f4fd8776ae0> DEBUG 11-20 00:39:02 [config.py:5083] enabled custom ops: Counter() DEBUG 11-20 00:39:02 [config.py:5085] disabled custom ops: Counter() (VllmWorker TP1 pid=405) DEBUG 11-20 00:39:02 [shm_broadcast.py:313] Connecting to ipc:///tmp/ef6af3e2-8503-4266-969a-f5b248f38545 (VllmWorker TP0 pid=404) DEBUG 11-20 00:39:02 [shm_broadcast.py:313] Connecting to ipc:///tmp/ef6af3e2-8503-4266-969a-f5b248f38545 (VllmWorker TP1 pid=405) DEBUG 11-20 00:39:02 [shm_broadcast.py:243] Binding to ipc:///tmp/2077fd2d-cddb-4fe1-b2b9-de37d91aedbc (VllmWorker TP0 pid=404) DEBUG 11-20 00:39:02 [shm_broadcast.py:243] Binding to ipc:///tmp/bca37161-7466-4baf-b1b6-c9ecf674a259 (VllmWorker TP1 pid=405) INFO 11-20 00:39:02 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_00beafcd'), local_subscribe_addr='ipc:///tmp/2077fd2d-cddb-4fe1-b2b9-de37d91aedbc', remote_subscribe_addr=None, remote_addr_ipv6=False) (VllmWorker TP0 pid=404) INFO 11-20 00:39:02 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_fe565dcd'), local_subscribe_addr='ipc:///tmp/bca37161-7466-4baf-b1b6-c9ecf674a259', remote_subscribe_addr=None, remote_addr_ipv6=False) (VllmWorker TP1 pid=405) DEBUG 11-20 00:39:03 [parallel_state.py:945] world_size=2 rank=1 local_rank=1 distributed_init_method=tcp://127.0.0.1:41085 backend=nccl (VllmWorker TP0 pid=404) DEBUG 11-20 00:39:03 [parallel_state.py:945] world_size=2 rank=0 local_rank=0 distributed_init_method=tcp://127.0.0.1:41085 backend=nccl [W1120 00:39:03.478718300 ProcessGroupNCCL.cpp:915] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS is the default now, this environment variable is thus deprecated. (function operator()) [W1120 00:39:03.479649382 ProcessGroupNCCL.cpp:915] Warning: TORCH_NCCL_AVOID_RECORD_STREAMS is the default now, this environment variable is thus deprecated. (function operator()) [Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1 [Gloo] Rank 0 is connected to 1 peer ranks. 
Expected number of connected peer ranks is : 1 (VllmWorker TP0 pid=404) DEBUG 11-20 00:39:03 [parallel_state.py:996] Detected 1 nodes in the distributed environment (VllmWorker TP1 pid=405) DEBUG 11-20 00:39:03 [parallel_state.py:996] Detected 1 nodes in the distributed environment [Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1 [Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1 (VllmWorker TP0 pid=404) INFO 11-20 00:39:03 [init.py:1381] Found nccl from library libnccl.so.2 (VllmWorker TP1 pid=405) INFO 11-20 00:39:03 [init.py:1381] Found nccl from library libnccl.so.2 (VllmWorker TP0 pid=404) INFO 11-20 00:39:03 [pynccl.py:70] vLLM is using nccl==2.27.5 (VllmWorker TP1 pid=405) INFO 11-20 00:39:03 [pynccl.py:70] vLLM is using nccl==2.27.5 (VllmWorker TP0 pid=404) INFO 11-20 00:39:03 [custom_all_reduce.py:35] Skipping P2P check and trusting the driver's P2P report. (VllmWorker TP1 pid=405) INFO 11-20 00:39:03 [custom_all_reduce.py:35] Skipping P2P check and trusting the driver's P2P report. (VllmWorker TP0 pid=404) DEBUG 11-20 00:39:03 [shm_broadcast.py:243] Binding to ipc:///tmp/0ed15d35-61ca-46df-9dac-0f7ad0c24738 (VllmWorker TP0 pid=404) INFO 11-20 00:39:03 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[1], buffer_handle=(1, 4194304, 6, 'psm_86c82605'), local_subscribe_addr='ipc:///tmp/0ed15d35-61ca-46df-9dac-0f7ad0c24738', remote_subscribe_addr=None, remote_addr_ipv6=False) (VllmWorker TP1 pid=405) DEBUG 11-20 00:39:03 [shm_broadcast.py:313] Connecting to ipc:///tmp/0ed15d35-61ca-46df-9dac-0f7ad0c24738 [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0 [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0 [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0 [Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0 [Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1 [Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1 (VllmWorker TP0 pid=404) INFO 11-20 00:39:03 [parallel_state.py:1102] rank 0 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0 (VllmWorker TP1 pid=405) INFO 11-20 00:39:03 [parallel_state.py:1102] rank 1 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 1, EP rank 1 (VllmWorker TP0 pid=404) INFO 11-20 00:39:03 [topk_topp_sampler.py:49] Using FlashInfer for top-p & top-k sampling. (VllmWorker TP1 pid=405) INFO 11-20 00:39:03 [topk_topp_sampler.py:49] Using FlashInfer for top-p & top-k sampling. (VllmWorker TP1 pid=405) DEBUG 11-20 00:39:03 [config.py:5083] enabled custom ops: Counter() (VllmWorker TP1 pid=405) DEBUG 11-20 00:39:03 [config.py:5085] disabled custom ops: Counter() (VllmWorker TP1 pid=405) INFO 11-20 00:39:03 [gpu_model_runner.py:1913] Starting to load model /workspace/model... (VllmWorker TP0 pid=404) DEBUG 11-20 00:39:03 [decorators.py:139] Inferred dynamic dimensions for forward method of <class 'vllm.model_executor.models.gpt_oss.GptOssModel'>: ['input_ids', 'positions'] (VllmWorker TP0 pid=404) DEBUG 11-20 00:39:03 [config.py:5083] enabled custom ops: Counter() (VllmWorker TP0 pid=404) DEBUG 11-20 00:39:03 [config.py:5085] disabled custom ops: Counter() (VllmWorker TP0 pid=404) INFO 11-20 00:39:03 [gpu_model_runner.py:1913] Starting to load model /workspace/model... 
(VllmWorker TP1 pid=405) INFO 11-20 00:39:04 [gpu_model_runner.py:1945] Loading model from scratch... (VllmWorker TP1 pid=405) DEBUG 11-20 00:39:04 [decorators.py:139] Inferred dynamic dimensions for forward method of <class 'vllm.model_executor.models.gpt_oss.GptOssModel'>: ['input_ids', 'positions'] (VllmWorker TP0 pid=404) INFO 11-20 00:39:04 [gpu_model_runner.py:1945] Loading model from scratch... (VllmWorker TP0 pid=404) INFO 11-20 00:39:04 [cuda.py:286] Using Triton backend on V1 engine. (VllmWorker TP1 pid=405) INFO 11-20 00:39:04 [cuda.py:286] Using Triton backend on V1 engine. (VllmWorker TP0 pid=404) WARNING 11-20 00:39:04 [rocm.py:29] Failed to import from amdsmi with ModuleNotFoundError("No module named 'amdsmi'") (VllmWorker TP0 pid=404) WARNING 11-20 00:39:04 [rocm.py:40] Failed to import from vllm._rocm_C with ModuleNotFoundError("No module named 'vllm._rocm_C'") (VllmWorker TP1 pid=405) WARNING 11-20 00:39:04 [rocm.py:29] Failed to import from amdsmi with ModuleNotFoundError("No module named 'amdsmi'") (VllmWorker TP1 pid=405) WARNING 11-20 00:39:04 [rocm.py:40] Failed to import from vllm._rocm_C with ModuleNotFoundError("No module named 'vllm._rocm_C'") (VllmWorker TP0 pid=404) INFO 11-20 00:39:04 [triton_attn.py:263] Using vllm unified attention for TritonAttentionImpl (VllmWorker TP1 pid=405) INFO 11-20 00:39:04 [triton_attn.py:263] Using vllm unified attention for TritonAttentionImpl (VllmWorker TP0 pid=404) DEBUG 11-20 00:39:04 [backends.py:36] Using InductorStandaloneAdaptor (VllmWorker TP1 pid=405) DEBUG 11-20 00:39:04 [backends.py:36] Using InductorStandaloneAdaptor (VllmWorker TP0 pid=404) DEBUG 11-20 00:39:04 [config.py:5083] enabled custom ops: Counter() (VllmWorker TP0 pid=404) DEBUG 11-20 00:39:04 [config.py:5085] disabled custom ops: Counter({'rms_norm': 49, 'unquantized_fused_moe': 24, 'rotary_embedding': 1}) (VllmWorker TP0 pid=404) DEBUG 11-20 00:39:04 [base_loader.py:47] Loading weights on cuda ... (VllmWorker TP1 pid=405) DEBUG 11-20 00:39:04 [config.py:5083] enabled custom ops: Counter() (VllmWorker TP1 pid=405) DEBUG 11-20 00:39:04 [config.py:5085] disabled custom ops: Counter({'rms_norm': 49, 'unquantized_fused_moe': 24, 'rotary_embedding': 1}) (VllmWorker TP1 pid=405) DEBUG 11-20 00:39:04 [base_loader.py:47] Loading weights on cuda ... Loading safetensors checkpoint shards: 0% Completed | 0/9 [00:00<?, ?it/s] (VllmWorker TP1 pid=405) Warning: model.layers.2.mlp.experts.down_proj not found in params_dict (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] WorkerProc failed to start. 
(VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] Traceback (most recent call last): (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 533, in worker_main (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] worker = WorkerProc(*args, **kwargs) (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] ^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 402, in init (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] self.worker.load_model() (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_worker.py", line 211, in load_model (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] self.model_runner.load_model(eep_scale_up=eep_scale_up) (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 1946, in load_model (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] self.model = model_loader.load_model( (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] ^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/base_loader.py", line 49, in load_model (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] self.load_weights(model, model_config) (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/default_loader.py", line 259, in load_weights (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] loaded_weights = model.load_weights( (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] ^^^^^^^^^^^^^^^^^^^ (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gpt_oss.py", line 429, in load_weights (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] param = params_dict[new_name] (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] ~~~~~~~~~~~^^^^^^^^^^ (VllmWorker TP1 pid=405) ERROR 11-20 00:39:04 [multiproc_executor.py:559] KeyError: 'model.layers.2.mlp.experts.w2_bias' (VllmWorker TP0 pid=404) Warning: model.layers.2.mlp.experts.down_proj not found in params_dict (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] WorkerProc failed to start. 
(VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] Traceback (most recent call last): (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 533, in worker_main (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] worker = WorkerProc(*args, **kwargs) (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] ^^^^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 402, in init (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] self.worker.load_model() (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_worker.py", line 211, in load_model (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] self.model_runner.load_model(eep_scale_up=eep_scale_up) (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 1946, in load_model (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] self.model = model_loader.load_model( (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] ^^^^^^^^^^^^^^^^^^^^^^^^ (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/base_loader.py", line 49, in load_model (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] self.load_weights(model, model_config) (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/default_loader.py", line 259, in load_weights Loading safetensors checkpoint shards: 0% Completed | 0/9 [00:00<?, ?it/s] (VllmWorker TP0 pid=404) (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] loaded_weights = model.load_weights( (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] ^^^^^^^^^^^^^^^^^^^ (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/gpt_oss.py", line 429, in load_weights (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] param = params_dict[new_name] (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] ~~~~~~~~~~~^^^^^^^^^^ (VllmWorker TP0 pid=404) ERROR 11-20 00:39:04 [multiproc_executor.py:559] KeyError: 'model.layers.2.mlp.experts.w2_bias' (VllmWorker TP0 pid=404) INFO 11-20 00:39:04 [multiproc_executor.py:520] Parent process exited, terminating worker (VllmWorker TP1 pid=405) INFO 11-20 00:39:04 [multiproc_executor.py:520] Parent process exited, terminating worker [rank0]:[W1120 00:39:05.150572342 ProcessGroupNCCL.cpp:1522] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator()) (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] EngineCore failed to start. 
(EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] Traceback (most recent call last): (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 709, in run_engine_core (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] engine_core = EngineCoreProc(*args, **kwargs) (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 510, in init (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] super().init(vllm_config, executor_class, log_stats, (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 82, in init (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] self.model_executor = executor_class(vllm_config) (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] ^^^^^^^^^^^^^^^^^^^^^^^^^^^ (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 54, in init (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] self._init_executor() (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 96, in _init_executor (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] self.workers = WorkerProc.wait_for_ready(unready_workers) (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 472, in wait_for_ready (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] raise e from None (EngineCore_0 pid=270) Process EngineCore_0: (EngineCore_0 pid=270) ERROR 11-20 00:39:05 [core.py:718] Exception: WorkerProc initialization failed due to an exception in a background process. See stack trace for root cause. 
(EngineCore_0 pid=270) Traceback (most recent call last): (EngineCore_0 pid=270) File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap (EngineCore_0 pid=270) self.run() (EngineCore_0 pid=270) File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run (EngineCore_0 pid=270) self._target(*self._args, **self._kwargs) (EngineCore_0 pid=270) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 722, in run_engine_core (EngineCore_0 pid=270) raise e (EngineCore_0 pid=270) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 709, in run_engine_core (EngineCore_0 pid=270) engine_core = EngineCoreProc(*args, **kwargs) (EngineCore_0 pid=270) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (EngineCore_0 pid=270) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 510, in init (EngineCore_0 pid=270) super().init(vllm_config, executor_class, log_stats, (EngineCore_0 pid=270) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 82, in init (EngineCore_0 pid=270) self.model_executor = executor_class(vllm_config) (EngineCore_0 pid=270) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ (EngineCore_0 pid=270) File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 54, in init (EngineCore_0 pid=270) self._init_executor() (EngineCore_0 pid=270) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 96, in _init_executor (EngineCore_0 pid=270) self.workers = WorkerProc.wait_for_ready(unready_workers) (EngineCore_0 pid=270) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (EngineCore_0 pid=270) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/multiproc_executor.py", line 472, in wait_for_ready (EngineCore_0 pid=270) raise e from None (EngineCore_0 pid=270) Exception: WorkerProc initialization failed due to an exception in a background process. See stack trace for root cause. (APIServer pid=1) DEBUG 11-20 00:39:06 [utils.py:741] Waiting for 1 local, 0 remote core engine proc(s) to start. 
(APIServer pid=1) Traceback (most recent call last): (APIServer pid=1) File "", line 198, in _run_module_as_main (APIServer pid=1) File "", line 88, in _run_code (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1895, in (APIServer pid=1) uvloop.run(run_server(args)) (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/uvloop/init.py", line 109, in run (APIServer pid=1) return __asyncio.run( (APIServer pid=1) ^^^^^^^^^^^^^^ (APIServer pid=1) File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run (APIServer pid=1) return runner.run(main) (APIServer pid=1) ^^^^^^^^^^^^^^^^ (APIServer pid=1) File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run (APIServer pid=1) return self._loop.run_until_complete(task) (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/uvloop/init.py", line 61, in wrapper (APIServer pid=1) return await main (APIServer pid=1) ^^^^^^^^^^ (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1827, in run_server (APIServer pid=1) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs) (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1847, in run_server_worker (APIServer pid=1) async with build_async_engine_client( (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1) File "/usr/lib/python3.12/contextlib.py", line 210, in aenter (APIServer pid=1) return await anext(self.gen) (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 167, in build_async_engine_client (APIServer pid=1) async with build_async_engine_client_from_engine_args( (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1) File "/usr/lib/python3.12/contextlib.py", line 210, in aenter (APIServer pid=1) return await anext(self.gen) (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 209, in build_async_engine_client_from_engine_args (APIServer pid=1) async_llm = AsyncLLM.from_vllm_config( (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/utils/init.py", line 1520, in inner (APIServer pid=1) return fn(*args, **kwargs) (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^ (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 173, in from_vllm_config (APIServer pid=1) return cls( (APIServer pid=1) ^^^^ (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 119, in init (APIServer pid=1) self.engine_core = EngineCoreClient.make_async_mp_client( (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 101, in make_async_mp_client (APIServer pid=1) return AsyncMPClient(*client_args) (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 733, in init (APIServer pid=1) super().init( (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 421, in init (APIServer 
pid=1) with launch_core_engines(vllm_config, executor_class, (APIServer pid=1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ (APIServer pid=1) File "/usr/lib/python3.12/contextlib.py", line 144, in exit (APIServer pid=1) next(self.gen) (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/utils.py", line 697, in launch_core_engines (APIServer pid=1) wait_for_engine_startup( (APIServer pid=1) File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/utils.py", line 750, in wait_for_engine_startup (APIServer pid=1) raise RuntimeError("Engine core initialization failed. " (APIServer pid=1) RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {} /usr/lib/python3.12/multiprocessing/resource_tracker.py:279: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d '

NicholasGuerrero avatar Nov 20 '25 08:11 NicholasGuerrero

cat docker-compose.gptoss-20b-tool-trained.yml

services:
  vllm-openai:
    image: vllm/vllm-openai:gptoss
    container_name: vllm-trained
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all # Specify exact GPU count
              capabilities: [gpu]
    ports:
      - "8000:8000"
    volumes:
      - /data-01/axolotl/fixed_outputs:/workspace/model
      #- /data-01/nicholas.guerrero/:/root/.cache/huggingface
    ipc: host
    shm_size: 16gb
    environment:
      - CUDA_VISIBLE_DEVICES=0,1
      - VLLM_ATTENTION_BACKEND=TRITON_ATTN_VLLM_V1
      - VLLM_LOGGING_LEVEL=DEBUG
    command: >
      --model /workspace/model
      --tensor-parallel-size 2
      --gpu-memory-utilization 0.82
      --distributed-executor-backend mp
      --max-model-len 4096
      --return-tokens-as-token-ids True
      --max-logprobs 1

NicholasGuerrero avatar Nov 20 '25 08:11 NicholasGuerrero

ls /data-01/axolotl/fixed_outputs chat_template.jinja model-00002-of-00009.safetensors model-00006-of-00009.safetensors model.safetensors.index.json tokenizer.json config.json model-00003-of-00009.safetensors model-00007-of-00009.safetensors README.md debug.log model-00004-of-00009.safetensors model-00008-of-00009.safetensors special_tokens_map.json model-00001-of-00009.safetensors model-00005-of-00009.safetensors model-00009-of-00009.safetensors tokenizer_config.json

NicholasGuerrero avatar Nov 20 '25 08:11 NicholasGuerrero

@NanoCode012 Please find the critical information above. Note that the model safetensors files were generated using the gpt-oss example linked above and then run through the fix script (producing the fixed_outputs directory). I will be doing my own debugging this weekend. Let me know if you would like a different format for the logs.

NicholasGuerrero avatar Nov 20 '25 08:11 NicholasGuerrero

@NicholasGuerrero, thanks for the detailed logs. Are you able to print out the model layers in your checkpoints? Can you check whether the keys still contain "_checkpoint_wrapped"?

To double-check: this was after running the checkpoint-fixing gist above?
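
One quick way to check (a sketch; it assumes both the fine-tuned output and a local copy of the base gpt-oss-20b checkpoint contain a model.safetensors.index.json, and the paths are illustrative):

import json

def keys_from_index(path):
    # The index's weight_map maps every parameter name to its shard file
    with open(f"{path}/model.safetensors.index.json") as f:
        return set(json.load(f)["weight_map"])

trained = keys_from_index("/data-01/axolotl/fixed_outputs")  # your fine-tuned output
base = keys_from_index("gpt-oss-20b")                         # local copy of the base model

print("only in trained:", sorted(trained - base)[:20])
print("only in base:", sorted(base - trained)[:20])
print("wrapped keys:", [k for k in trained if "_checkpoint_wrapped" in k][:5])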

NanoCode012 avatar Nov 21 '25 05:11 NanoCode012

import os
from safetensors.torch import load_file

# Directory containing the files
directory = "/home/workspace/fixed_outputs"
log_file = os.path.join(directory, "safetensors_log.txt")

# Open log file
with open(log_file, "w") as log:
    # Iterate over each .safetensors file
    for filename in sorted(os.listdir(directory)):
        if filename.endswith(".safetensors"):
            filepath = os.path.join(directory, filename)
            try:
                # Load the safetensors file as a dictionary
                tensor_dict = load_file(filepath)
                # Write to log
                log.write(f"Contents of {filename}:\n")
                for key, value in tensor_dict.items():
                    # print shapes to avoid huge output
                    log.write(f"{key}: {value.shape}\n")
                log.write("\n")
            except Exception as e:
                log.write(f"Failed to load {filename}: {e}\n\n")

print(f"Log written to {log_file}")

Yields:

cat safetensors_log.txt Contents of model-00001-of-00009.safetensors: model.embed_tokens.weight: torch.Size([201088, 2880]) model.layers.0.input_layernorm.weight: torch.Size([2880]) model.layers.0.mlp.experts.down_proj: torch.Size([32, 2880, 2880]) model.layers.0.mlp.experts.down_proj_bias: torch.Size([32, 2880]) model.layers.0.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760]) model.layers.0.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760]) model.layers.0.mlp.router.bias: torch.Size([32]) model.layers.0.mlp.router.weight: torch.Size([32, 2880]) model.layers.0.post_attention_layernorm.weight: torch.Size([2880]) model.layers.0.self_attn.k_proj.bias: torch.Size([512]) model.layers.0.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.0.self_attn.o_proj.bias: torch.Size([2880]) model.layers.0.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.0.self_attn.q_proj.bias: torch.Size([4096]) model.layers.0.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.0.self_attn.sinks: torch.Size([64]) model.layers.0.self_attn.v_proj.bias: torch.Size([512]) model.layers.0.self_attn.v_proj.weight: torch.Size([512, 2880]) model.layers.1.input_layernorm.weight: torch.Size([2880]) model.layers.1.mlp.experts.down_proj: torch.Size([32, 2880, 2880]) model.layers.1.mlp.experts.down_proj_bias: torch.Size([32, 2880]) model.layers.1.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760]) model.layers.1.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760]) model.layers.1.mlp.router.bias: torch.Size([32]) model.layers.1.mlp.router.weight: torch.Size([32, 2880]) model.layers.1.post_attention_layernorm.weight: torch.Size([2880]) model.layers.1.self_attn.k_proj.bias: torch.Size([512]) model.layers.1.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.1.self_attn.o_proj.bias: torch.Size([2880]) model.layers.1.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.1.self_attn.q_proj.bias: torch.Size([4096]) model.layers.1.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.1.self_attn.sinks: torch.Size([64]) model.layers.1.self_attn.v_proj.bias: torch.Size([512]) model.layers.1.self_attn.v_proj.weight: torch.Size([512, 2880]) model.layers.2.mlp.router.bias: torch.Size([32]) model.layers.2.mlp.router.weight: torch.Size([32, 2880]) model.layers.2.self_attn.k_proj.bias: torch.Size([512]) model.layers.2.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.2.self_attn.o_proj.bias: torch.Size([2880]) model.layers.2.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.2.self_attn.q_proj.bias: torch.Size([4096]) model.layers.2.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.2.self_attn.sinks: torch.Size([64]) model.layers.2.self_attn.v_proj.bias: torch.Size([512]) model.layers.2.self_attn.v_proj.weight: torch.Size([512, 2880])

Contents of model-00002-of-00009.safetensors: model.layers.2.input_layernorm.weight: torch.Size([2880]) model.layers.2.mlp.experts.down_proj: torch.Size([32, 2880, 2880]) model.layers.2.mlp.experts.down_proj_bias: torch.Size([32, 2880]) model.layers.2.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760]) model.layers.2.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760]) model.layers.2.post_attention_layernorm.weight: torch.Size([2880]) model.layers.3.input_layernorm.weight: torch.Size([2880]) model.layers.3.mlp.experts.down_proj: torch.Size([32, 2880, 2880]) model.layers.3.mlp.experts.down_proj_bias: torch.Size([32, 2880]) model.layers.3.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760]) model.layers.3.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760]) model.layers.3.mlp.router.bias: torch.Size([32]) model.layers.3.mlp.router.weight: torch.Size([32, 2880]) model.layers.3.post_attention_layernorm.weight: torch.Size([2880]) model.layers.3.self_attn.k_proj.bias: torch.Size([512]) model.layers.3.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.3.self_attn.o_proj.bias: torch.Size([2880]) model.layers.3.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.3.self_attn.q_proj.bias: torch.Size([4096]) model.layers.3.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.3.self_attn.sinks: torch.Size([64]) model.layers.3.self_attn.v_proj.bias: torch.Size([512]) model.layers.3.self_attn.v_proj.weight: torch.Size([512, 2880]) model.layers.4.input_layernorm.weight: torch.Size([2880]) model.layers.4.mlp.experts.down_proj: torch.Size([32, 2880, 2880]) model.layers.4.mlp.experts.down_proj_bias: torch.Size([32, 2880]) model.layers.4.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760]) model.layers.4.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760]) model.layers.4.mlp.router.bias: torch.Size([32]) model.layers.4.mlp.router.weight: torch.Size([32, 2880]) model.layers.4.post_attention_layernorm.weight: torch.Size([2880]) model.layers.4.self_attn.k_proj.bias: torch.Size([512]) model.layers.4.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.4.self_attn.o_proj.bias: torch.Size([2880]) model.layers.4.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.4.self_attn.q_proj.bias: torch.Size([4096]) model.layers.4.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.4.self_attn.sinks: torch.Size([64]) model.layers.4.self_attn.v_proj.bias: torch.Size([512]) model.layers.4.self_attn.v_proj.weight: torch.Size([512, 2880]) model.layers.5.mlp.router.bias: torch.Size([32]) model.layers.5.mlp.router.weight: torch.Size([32, 2880]) model.layers.5.self_attn.k_proj.bias: torch.Size([512]) model.layers.5.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.5.self_attn.o_proj.bias: torch.Size([2880]) model.layers.5.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.5.self_attn.q_proj.bias: torch.Size([4096]) model.layers.5.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.5.self_attn.sinks: torch.Size([64]) model.layers.5.self_attn.v_proj.bias: torch.Size([512]) model.layers.5.self_attn.v_proj.weight: torch.Size([512, 2880])

Contents of model-00003-of-00009.safetensors: model.layers.5.input_layernorm.weight: torch.Size([2880]) model.layers.5.mlp.experts.down_proj: torch.Size([32, 2880, 2880]) model.layers.5.mlp.experts.down_proj_bias: torch.Size([32, 2880]) model.layers.5.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760]) model.layers.5.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760]) model.layers.5.post_attention_layernorm.weight: torch.Size([2880]) model.layers.6.input_layernorm.weight: torch.Size([2880]) model.layers.6.mlp.experts.down_proj: torch.Size([32, 2880, 2880]) model.layers.6.mlp.experts.down_proj_bias: torch.Size([32, 2880]) model.layers.6.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760]) model.layers.6.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760]) model.layers.6.mlp.router.bias: torch.Size([32]) model.layers.6.mlp.router.weight: torch.Size([32, 2880]) model.layers.6.post_attention_layernorm.weight: torch.Size([2880]) model.layers.6.self_attn.k_proj.bias: torch.Size([512]) model.layers.6.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.6.self_attn.o_proj.bias: torch.Size([2880]) model.layers.6.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.6.self_attn.q_proj.bias: torch.Size([4096]) model.layers.6.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.6.self_attn.sinks: torch.Size([64]) model.layers.6.self_attn.v_proj.bias: torch.Size([512]) model.layers.6.self_attn.v_proj.weight: torch.Size([512, 2880]) model.layers.7.input_layernorm.weight: torch.Size([2880]) model.layers.7.mlp.experts.down_proj: torch.Size([32, 2880, 2880]) model.layers.7.mlp.experts.down_proj_bias: torch.Size([32, 2880]) model.layers.7.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760]) model.layers.7.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760]) model.layers.7.mlp.router.bias: torch.Size([32]) model.layers.7.mlp.router.weight: torch.Size([32, 2880]) model.layers.7.post_attention_layernorm.weight: torch.Size([2880]) model.layers.7.self_attn.k_proj.bias: torch.Size([512]) model.layers.7.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.7.self_attn.o_proj.bias: torch.Size([2880]) model.layers.7.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.7.self_attn.q_proj.bias: torch.Size([4096]) model.layers.7.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.7.self_attn.sinks: torch.Size([64]) model.layers.7.self_attn.v_proj.bias: torch.Size([512]) model.layers.7.self_attn.v_proj.weight: torch.Size([512, 2880]) model.layers.8.mlp.router.bias: torch.Size([32]) model.layers.8.mlp.router.weight: torch.Size([32, 2880]) model.layers.8.self_attn.k_proj.bias: torch.Size([512]) model.layers.8.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.8.self_attn.o_proj.bias: torch.Size([2880]) model.layers.8.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.8.self_attn.q_proj.bias: torch.Size([4096]) model.layers.8.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.8.self_attn.sinks: torch.Size([64]) model.layers.8.self_attn.v_proj.bias: torch.Size([512]) model.layers.8.self_attn.v_proj.weight: torch.Size([512, 2880])

Contents of model-00004-of-00009.safetensors: model.layers.10.input_layernorm.weight: torch.Size([2880]) model.layers.10.mlp.experts.down_proj: torch.Size([32, 2880, 2880]) model.layers.10.mlp.experts.down_proj_bias: torch.Size([32, 2880]) model.layers.10.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760]) model.layers.10.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760]) model.layers.10.mlp.router.bias: torch.Size([32]) model.layers.10.mlp.router.weight: torch.Size([32, 2880]) model.layers.10.post_attention_layernorm.weight: torch.Size([2880]) model.layers.10.self_attn.k_proj.bias: torch.Size([512]) model.layers.10.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.10.self_attn.o_proj.bias: torch.Size([2880]) model.layers.10.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.10.self_attn.q_proj.bias: torch.Size([4096]) model.layers.10.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.10.self_attn.sinks: torch.Size([64]) model.layers.10.self_attn.v_proj.bias: torch.Size([512]) model.layers.10.self_attn.v_proj.weight: torch.Size([512, 2880]) model.layers.11.mlp.router.bias: torch.Size([32]) model.layers.11.mlp.router.weight: torch.Size([32, 2880]) model.layers.11.self_attn.k_proj.bias: torch.Size([512]) model.layers.11.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.11.self_attn.o_proj.bias: torch.Size([2880]) model.layers.11.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.11.self_attn.q_proj.bias: torch.Size([4096]) model.layers.11.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.11.self_attn.sinks: torch.Size([64]) model.layers.11.self_attn.v_proj.bias: torch.Size([512]) model.layers.11.self_attn.v_proj.weight: torch.Size([512, 2880]) model.layers.8.input_layernorm.weight: torch.Size([2880]) model.layers.8.mlp.experts.down_proj: torch.Size([32, 2880, 2880]) model.layers.8.mlp.experts.down_proj_bias: torch.Size([32, 2880]) model.layers.8.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760]) model.layers.8.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760]) model.layers.8.post_attention_layernorm.weight: torch.Size([2880]) model.layers.9.input_layernorm.weight: torch.Size([2880]) model.layers.9.mlp.experts.down_proj: torch.Size([32, 2880, 2880]) model.layers.9.mlp.experts.down_proj_bias: torch.Size([32, 2880]) model.layers.9.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760]) model.layers.9.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760]) model.layers.9.mlp.router.bias: torch.Size([32]) model.layers.9.mlp.router.weight: torch.Size([32, 2880]) model.layers.9.post_attention_layernorm.weight: torch.Size([2880]) model.layers.9.self_attn.k_proj.bias: torch.Size([512]) model.layers.9.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.9.self_attn.o_proj.bias: torch.Size([2880]) model.layers.9.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.9.self_attn.q_proj.bias: torch.Size([4096]) model.layers.9.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.9.self_attn.sinks: torch.Size([64]) model.layers.9.self_attn.v_proj.bias: torch.Size([512]) model.layers.9.self_attn.v_proj.weight: torch.Size([512, 2880])

Contents of model-00005-of-00009.safetensors: model.layers.11.input_layernorm.weight: torch.Size([2880]) model.layers.11.mlp.experts.down_proj: torch.Size([32, 2880, 2880]) model.layers.11.mlp.experts.down_proj_bias: torch.Size([32, 2880]) model.layers.11.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760]) model.layers.11.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760]) model.layers.11.post_attention_layernorm.weight: torch.Size([2880]) model.layers.12.input_layernorm.weight: torch.Size([2880]) model.layers.12.mlp.experts.down_proj: torch.Size([32, 2880, 2880]) model.layers.12.mlp.experts.down_proj_bias: torch.Size([32, 2880]) model.layers.12.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760]) model.layers.12.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760]) model.layers.12.mlp.router.bias: torch.Size([32]) model.layers.12.mlp.router.weight: torch.Size([32, 2880]) model.layers.12.post_attention_layernorm.weight: torch.Size([2880]) model.layers.12.self_attn.k_proj.bias: torch.Size([512]) model.layers.12.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.12.self_attn.o_proj.bias: torch.Size([2880]) model.layers.12.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.12.self_attn.q_proj.bias: torch.Size([4096]) model.layers.12.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.12.self_attn.sinks: torch.Size([64]) model.layers.12.self_attn.v_proj.bias: torch.Size([512]) model.layers.12.self_attn.v_proj.weight: torch.Size([512, 2880]) model.layers.13.input_layernorm.weight: torch.Size([2880]) model.layers.13.mlp.experts.down_proj: torch.Size([32, 2880, 2880]) model.layers.13.mlp.experts.down_proj_bias: torch.Size([32, 2880]) model.layers.13.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760]) model.layers.13.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760]) model.layers.13.mlp.router.bias: torch.Size([32]) model.layers.13.mlp.router.weight: torch.Size([32, 2880]) model.layers.13.post_attention_layernorm.weight: torch.Size([2880]) model.layers.13.self_attn.k_proj.bias: torch.Size([512]) model.layers.13.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.13.self_attn.o_proj.bias: torch.Size([2880]) model.layers.13.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.13.self_attn.q_proj.bias: torch.Size([4096]) model.layers.13.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.13.self_attn.sinks: torch.Size([64]) model.layers.13.self_attn.v_proj.bias: torch.Size([512]) model.layers.13.self_attn.v_proj.weight: torch.Size([512, 2880]) model.layers.14.mlp.router.bias: torch.Size([32]) model.layers.14.mlp.router.weight: torch.Size([32, 2880]) model.layers.14.self_attn.k_proj.bias: torch.Size([512]) model.layers.14.self_attn.k_proj.weight: torch.Size([512, 2880]) model.layers.14.self_attn.o_proj.bias: torch.Size([2880]) model.layers.14.self_attn.o_proj.weight: torch.Size([2880, 4096]) model.layers.14.self_attn.q_proj.bias: torch.Size([4096]) model.layers.14.self_attn.q_proj.weight: torch.Size([4096, 2880]) model.layers.14.self_attn.sinks: torch.Size([64]) model.layers.14.self_attn.v_proj.bias: torch.Size([512]) model.layers.14.self_attn.v_proj.weight: torch.Size([512, 2880])

Contents of model-00006-of-00009.safetensors:
model.layers.14.input_layernorm.weight: torch.Size([2880])
model.layers.14.mlp.experts.down_proj: torch.Size([32, 2880, 2880])
model.layers.14.mlp.experts.down_proj_bias: torch.Size([32, 2880])
model.layers.14.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760])
model.layers.14.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760])
model.layers.14.post_attention_layernorm.weight: torch.Size([2880])
model.layers.15.input_layernorm.weight: torch.Size([2880])
model.layers.15.mlp.experts.down_proj: torch.Size([32, 2880, 2880])
model.layers.15.mlp.experts.down_proj_bias: torch.Size([32, 2880])
model.layers.15.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760])
model.layers.15.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760])
model.layers.15.mlp.router.bias: torch.Size([32])
model.layers.15.mlp.router.weight: torch.Size([32, 2880])
model.layers.15.post_attention_layernorm.weight: torch.Size([2880])
model.layers.15.self_attn.k_proj.bias: torch.Size([512])
model.layers.15.self_attn.k_proj.weight: torch.Size([512, 2880])
model.layers.15.self_attn.o_proj.bias: torch.Size([2880])
model.layers.15.self_attn.o_proj.weight: torch.Size([2880, 4096])
model.layers.15.self_attn.q_proj.bias: torch.Size([4096])
model.layers.15.self_attn.q_proj.weight: torch.Size([4096, 2880])
model.layers.15.self_attn.sinks: torch.Size([64])
model.layers.15.self_attn.v_proj.bias: torch.Size([512])
model.layers.15.self_attn.v_proj.weight: torch.Size([512, 2880])
model.layers.16.input_layernorm.weight: torch.Size([2880])
model.layers.16.mlp.experts.down_proj: torch.Size([32, 2880, 2880])
model.layers.16.mlp.experts.down_proj_bias: torch.Size([32, 2880])
model.layers.16.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760])
model.layers.16.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760])
model.layers.16.mlp.router.bias: torch.Size([32])
model.layers.16.mlp.router.weight: torch.Size([32, 2880])
model.layers.16.post_attention_layernorm.weight: torch.Size([2880])
model.layers.16.self_attn.k_proj.bias: torch.Size([512])
model.layers.16.self_attn.k_proj.weight: torch.Size([512, 2880])
model.layers.16.self_attn.o_proj.bias: torch.Size([2880])
model.layers.16.self_attn.o_proj.weight: torch.Size([2880, 4096])
model.layers.16.self_attn.q_proj.bias: torch.Size([4096])
model.layers.16.self_attn.q_proj.weight: torch.Size([4096, 2880])
model.layers.16.self_attn.sinks: torch.Size([64])
model.layers.16.self_attn.v_proj.bias: torch.Size([512])
model.layers.16.self_attn.v_proj.weight: torch.Size([512, 2880])
model.layers.17.mlp.router.bias: torch.Size([32])
model.layers.17.mlp.router.weight: torch.Size([32, 2880])
model.layers.17.self_attn.k_proj.bias: torch.Size([512])
model.layers.17.self_attn.k_proj.weight: torch.Size([512, 2880])
model.layers.17.self_attn.o_proj.bias: torch.Size([2880])
model.layers.17.self_attn.o_proj.weight: torch.Size([2880, 4096])
model.layers.17.self_attn.q_proj.bias: torch.Size([4096])
model.layers.17.self_attn.q_proj.weight: torch.Size([4096, 2880])
model.layers.17.self_attn.sinks: torch.Size([64])
model.layers.17.self_attn.v_proj.bias: torch.Size([512])
model.layers.17.self_attn.v_proj.weight: torch.Size([512, 2880])

Contents of model-00007-of-00009.safetensors:
model.layers.17.input_layernorm.weight: torch.Size([2880])
model.layers.17.mlp.experts.down_proj: torch.Size([32, 2880, 2880])
model.layers.17.mlp.experts.down_proj_bias: torch.Size([32, 2880])
model.layers.17.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760])
model.layers.17.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760])
model.layers.17.post_attention_layernorm.weight: torch.Size([2880])
model.layers.18.input_layernorm.weight: torch.Size([2880])
model.layers.18.mlp.experts.down_proj: torch.Size([32, 2880, 2880])
model.layers.18.mlp.experts.down_proj_bias: torch.Size([32, 2880])
model.layers.18.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760])
model.layers.18.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760])
model.layers.18.mlp.router.bias: torch.Size([32])
model.layers.18.mlp.router.weight: torch.Size([32, 2880])
model.layers.18.post_attention_layernorm.weight: torch.Size([2880])
model.layers.18.self_attn.k_proj.bias: torch.Size([512])
model.layers.18.self_attn.k_proj.weight: torch.Size([512, 2880])
model.layers.18.self_attn.o_proj.bias: torch.Size([2880])
model.layers.18.self_attn.o_proj.weight: torch.Size([2880, 4096])
model.layers.18.self_attn.q_proj.bias: torch.Size([4096])
model.layers.18.self_attn.q_proj.weight: torch.Size([4096, 2880])
model.layers.18.self_attn.sinks: torch.Size([64])
model.layers.18.self_attn.v_proj.bias: torch.Size([512])
model.layers.18.self_attn.v_proj.weight: torch.Size([512, 2880])
model.layers.19.input_layernorm.weight: torch.Size([2880])
model.layers.19.mlp.experts.down_proj: torch.Size([32, 2880, 2880])
model.layers.19.mlp.experts.down_proj_bias: torch.Size([32, 2880])
model.layers.19.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760])
model.layers.19.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760])
model.layers.19.mlp.router.bias: torch.Size([32])
model.layers.19.mlp.router.weight: torch.Size([32, 2880])
model.layers.19.post_attention_layernorm.weight: torch.Size([2880])
model.layers.19.self_attn.k_proj.bias: torch.Size([512])
model.layers.19.self_attn.k_proj.weight: torch.Size([512, 2880])
model.layers.19.self_attn.o_proj.bias: torch.Size([2880])
model.layers.19.self_attn.o_proj.weight: torch.Size([2880, 4096])
model.layers.19.self_attn.q_proj.bias: torch.Size([4096])
model.layers.19.self_attn.q_proj.weight: torch.Size([4096, 2880])
model.layers.19.self_attn.sinks: torch.Size([64])
model.layers.19.self_attn.v_proj.bias: torch.Size([512])
model.layers.19.self_attn.v_proj.weight: torch.Size([512, 2880])
model.layers.20.mlp.router.bias: torch.Size([32])
model.layers.20.mlp.router.weight: torch.Size([32, 2880])
model.layers.20.self_attn.k_proj.bias: torch.Size([512])
model.layers.20.self_attn.k_proj.weight: torch.Size([512, 2880])
model.layers.20.self_attn.o_proj.bias: torch.Size([2880])
model.layers.20.self_attn.o_proj.weight: torch.Size([2880, 4096])
model.layers.20.self_attn.q_proj.bias: torch.Size([4096])
model.layers.20.self_attn.q_proj.weight: torch.Size([4096, 2880])
model.layers.20.self_attn.sinks: torch.Size([64])
model.layers.20.self_attn.v_proj.bias: torch.Size([512])
model.layers.20.self_attn.v_proj.weight: torch.Size([512, 2880])

Contents of model-00008-of-00009.safetensors:
model.layers.20.input_layernorm.weight: torch.Size([2880])
model.layers.20.mlp.experts.down_proj: torch.Size([32, 2880, 2880])
model.layers.20.mlp.experts.down_proj_bias: torch.Size([32, 2880])
model.layers.20.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760])
model.layers.20.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760])
model.layers.20.post_attention_layernorm.weight: torch.Size([2880])
model.layers.21.input_layernorm.weight: torch.Size([2880])
model.layers.21.mlp.experts.down_proj: torch.Size([32, 2880, 2880])
model.layers.21.mlp.experts.down_proj_bias: torch.Size([32, 2880])
model.layers.21.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760])
model.layers.21.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760])
model.layers.21.mlp.router.bias: torch.Size([32])
model.layers.21.mlp.router.weight: torch.Size([32, 2880])
model.layers.21.post_attention_layernorm.weight: torch.Size([2880])
model.layers.21.self_attn.k_proj.bias: torch.Size([512])
model.layers.21.self_attn.k_proj.weight: torch.Size([512, 2880])
model.layers.21.self_attn.o_proj.bias: torch.Size([2880])
model.layers.21.self_attn.o_proj.weight: torch.Size([2880, 4096])
model.layers.21.self_attn.q_proj.bias: torch.Size([4096])
model.layers.21.self_attn.q_proj.weight: torch.Size([4096, 2880])
model.layers.21.self_attn.sinks: torch.Size([64])
model.layers.21.self_attn.v_proj.bias: torch.Size([512])
model.layers.21.self_attn.v_proj.weight: torch.Size([512, 2880])
model.layers.22.input_layernorm.weight: torch.Size([2880])
model.layers.22.mlp.experts.down_proj: torch.Size([32, 2880, 2880])
model.layers.22.mlp.experts.down_proj_bias: torch.Size([32, 2880])
model.layers.22.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760])
model.layers.22.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760])
model.layers.22.mlp.router.bias: torch.Size([32])
model.layers.22.mlp.router.weight: torch.Size([32, 2880])
model.layers.22.post_attention_layernorm.weight: torch.Size([2880])
model.layers.22.self_attn.k_proj.bias: torch.Size([512])
model.layers.22.self_attn.k_proj.weight: torch.Size([512, 2880])
model.layers.22.self_attn.o_proj.bias: torch.Size([2880])
model.layers.22.self_attn.o_proj.weight: torch.Size([2880, 4096])
model.layers.22.self_attn.q_proj.bias: torch.Size([4096])
model.layers.22.self_attn.q_proj.weight: torch.Size([4096, 2880])
model.layers.22.self_attn.sinks: torch.Size([64])
model.layers.22.self_attn.v_proj.bias: torch.Size([512])
model.layers.22.self_attn.v_proj.weight: torch.Size([512, 2880])
model.layers.23.mlp.router.bias: torch.Size([32])
model.layers.23.mlp.router.weight: torch.Size([32, 2880])
model.layers.23.self_attn.k_proj.bias: torch.Size([512])
model.layers.23.self_attn.k_proj.weight: torch.Size([512, 2880])
model.layers.23.self_attn.o_proj.bias: torch.Size([2880])
model.layers.23.self_attn.o_proj.weight: torch.Size([2880, 4096])
model.layers.23.self_attn.q_proj.bias: torch.Size([4096])
model.layers.23.self_attn.q_proj.weight: torch.Size([4096, 2880])
model.layers.23.self_attn.sinks: torch.Size([64])
model.layers.23.self_attn.v_proj.bias: torch.Size([512])
model.layers.23.self_attn.v_proj.weight: torch.Size([512, 2880])

Contents of model-00009-of-00009.safetensors:
lm_head.weight: torch.Size([201088, 2880])
model.layers.23.input_layernorm.weight: torch.Size([2880])
model.layers.23.mlp.experts.down_proj: torch.Size([32, 2880, 2880])
model.layers.23.mlp.experts.down_proj_bias: torch.Size([32, 2880])
model.layers.23.mlp.experts.gate_up_proj: torch.Size([32, 2880, 5760])
model.layers.23.mlp.experts.gate_up_proj_bias: torch.Size([32, 5760])
model.layers.23.post_attention_layernorm.weight: torch.Size([2880])
model.norm.weight: torch.Size([2880])
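For reference, here is a minimal sketch of how listings like the above can be reproduced. It only assumes the safetensors package is installed; the checkpoint path is illustrative and should be pointed at wherever the trained model was saved.

# Minimal sketch: print every tensor key and shape in a sharded safetensors checkpoint.
# Assumes the `safetensors` package is installed; the directory path is an example only.
from pathlib import Path

from safetensors import safe_open

checkpoint_dir = Path("/workspace/model")  # example path; adjust to your output_dir

for shard in sorted(checkpoint_dir.glob("model-*.safetensors")):
    print(f"Contents of {shard.name}:")
    with safe_open(str(shard), framework="pt") as f:
        for key in sorted(f.keys()):
            # get_slice reads only the header, so the large expert tensors are never loaded
            shape = f.get_slice(key).get_shape()
            print(f"  {key}: {tuple(shape)}")

Reading shapes via get_slice keeps the check cheap even for the multi-gigabyte MoE shards, since no tensor data is pulled into memory.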

NicholasGuerrero avatar Nov 21 '25 20:11 NicholasGuerrero

The logs are a bit verbose, so please let me know how I can be of assistance. I'm running into a number of bugs with gpt-oss training in axolotl.

NicholasGuerrero avatar Nov 25 '25 00:11 NicholasGuerrero