Megatron-LM
Huggingface <-> Megatron-LM Compatibility
I'm looking for a way to convert model weights between Hugging Face and Megatron-LM, for two use cases: (1) continual pretraining in Megatron-LM from pretrained Hugging Face weights, and (2) converting Megatron-LM model weights to the Hugging Face format.
Adjusting the layer names/weights shouldn't be too difficult, but I'm hoping someone has already done this.
Related: #3 (already closed, but I couldn't find the solution there)
Hmm, it seems converting to the Hugging Face format is not so straightforward. At the very least, the LayerNorm locations don't seem to match.
Megatron-LM model structure:
BertModel(
(language_model): TransformerLanguageModel(
(embedding): Embedding(
(word_embeddings): VocabParallelEmbedding()
(position_embeddings): Embedding(512, 768)
(tokentype_embeddings): Embedding(2, 768)
(embedding_dropout): Dropout(p=0.1, inplace=False)
)
(transformer): ParallelTransformer(
(layers): ModuleList(
(0): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
)
(1) through (11): eleven more ParallelTransformerLayer blocks, identical to (0)
)
(final_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
)
(pooler): Pooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
)
)
(lm_head): BertLMHead(
(dense): Linear(in_features=768, out_features=768, bias=True)
(layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
)
(binary_head): Linear(in_features=768, out_features=2, bias=True)
)
for reference, huggingface BertModel
BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(1) through (11): eleven more BertLayer blocks, identical to (0)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
Any thoughts/advice? @jaredcasper @PyxAI @harkous @raulpuric
Any updates?
I was interested in the same questions, @usuyama. See the excerpt from the Megatron paper below. It does look like Megatron <-> HF conversion will require some updates on the HF side.

Thanks, @vdabravolski
I need to check the forward function for details, but as you pointed out, the ordering of the operations looks different.
Megatron-LM
(11): ParallelTransformerLayer(
(input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(attention): ParallelSelfAttention(
(query_key_value): ColumnParallelLinear()
(attention_dropout): Dropout(p=0.1, inplace=False)
(dense): RowParallelLinear()
(output_dropout): Dropout(p=0.1, inplace=False)
)
(post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
(mlp): ParallelMLP(
(dense_h_to_4h): ColumnParallelLinear()
(dense_4h_to_h): RowParallelLinear()
(dropout): Dropout(p=0.1, inplace=False)
)
HuggingFace
(11): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
I have the same question. Any new update?
Curious about this too – I have a GPT2 model trained with Megatron and would love to get it imported into HF.
To convert a Megatron GPT-2 model to HF (Hugging Face Transformers) GPT-2, I performed a layer-level parameter conversion and verified it, but the converted model did not behave correctly.
The core concept of the transformation is as follows.
Megatron GPT2 transformer layer and shape
layers.0.input_layernorm.weight, shape: torch.Size([1920])
layers.0.input_layernorm.bias, shape: torch.Size([1920])
layers.0.attention.query_key_value.weight, shape: torch.Size([5760, 1920]) # need transpose
layers.0.attention.query_key_value.bias, shape: torch.Size([5760])
layers.0.attention.dense.weight, shape: torch.Size([1920, 1920])
layers.0.attention.dense.bias, shape: torch.Size([1920])
layers.0.post_attention_layernorm.weight, shape: torch.Size([1920])
layers.0.post_attention_layernorm.bias, shape: torch.Size([1920])
layers.0.mlp.dense_h_to_4h.weight, shape: torch.Size([7680, 1920]) # need transpose
layers.0.mlp.dense_h_to_4h.bias, shape: torch.Size([7680])
layers.0.mlp.dense_4h_to_h.weight, shape: torch.Size([1920, 7680]) # need transpose
layers.0.mlp.dense_4h_to_h.bias, shape: torch.Size([1920])
HF GPT2 transformer layer and shape
transformer.h.0.ln_1.weight, shape: torch.Size([1920])
transformer.h.0.ln_1.bias, shape: torch.Size([1920])
transformer.h.0.attn.bias, shape: torch.Size([1, 1, 1920, 1920])
transformer.h.0.attn.masked_bias, shape: torch.Size([])
transformer.h.0.attn.c_attn.weight, shape: torch.Size([1920, 5760])
transformer.h.0.attn.c_attn.bias, shape: torch.Size([5760])
transformer.h.0.attn.c_proj.weight, shape: torch.Size([1920, 1920])
transformer.h.0.attn.c_proj.bias, shape: torch.Size([1920])
transformer.h.0.ln_2.weight, shape: torch.Size([1920])
transformer.h.0.ln_2.bias, shape: torch.Size([1920])
transformer.h.0.mlp.c_fc.weight, shape: torch.Size([1920, 7680])
transformer.h.0.mlp.c_fc.bias, shape: torch.Size([7680])
transformer.h.0.mlp.c_proj.weight, shape: torch.Size([7680, 1920])
transformer.h.0.mlp.c_proj.bias, shape: torch.Size([1920])
attn.bias and attn.masked_bias matched the values implemented in Megatron GPT-2, so they were ignored during conversion; all other parameters were converted, but the generated outputs of HF GPT-2 still differed from those of Megatron GPT-2.
I guess HF GPT-2 and Megatron GPT-2 differ in some layer-level implementation detail. If you have any ideas on this part, please let me know.
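To make the mapping concrete, here is a minimal NumPy sketch of the layer-0 renames and transposes listed above. Note this is a sketch under assumptions, not the conversion the commenter ran: the per-head QKV reordering is the key assumption. Some Megatron-LM versions store query_key_value interleaved per attention head as (num_heads, 3, head_dim), while HF's c_attn expects contiguous Q | K | V blocks, and getting that ordering wrong produces exactly the "all shapes match but outputs differ" symptom described here. The head count (15) is hypothetical, chosen only so that 1920 divides evenly.

```python
import numpy as np

hidden, n_heads = 1920, 15            # hidden size from the dump above; head count is an assumption
head_dim = hidden // n_heads

# Hypothetical Megatron-style QKV weight, shape (3*hidden, hidden), with rows
# interleaved per head as (q, k, v) blocks (this layout is version-dependent).
qkv = np.random.randn(3 * hidden, hidden).astype(np.float32)

# Reorder (n_heads, 3, head_dim, hidden) -> (3, n_heads, head_dim, hidden) so the
# rows become all-Q, then all-K, then all-V, as HF's c_attn expects.
q, k, v = (qkv.reshape(n_heads, 3, head_dim, hidden)
              .transpose(1, 0, 2, 3)
              .reshape(3, hidden, hidden))

# HF GPT-2 uses Conv1D, which stores weights as (in_features, out_features),
# hence the "# need transpose" notes above. The square matrices (attention.dense,
# 1920x1920) also need the transpose even though their shapes already match.
c_attn_weight = np.concatenate([q.T, k.T, v.T], axis=1)   # (hidden, 3*hidden)

# The remaining weights only need a rename (plus a transpose where noted):
#   input_layernorm          -> ln_1
#   attention.dense          -> attn.c_proj   (transpose)
#   post_attention_layernorm -> ln_2
#   mlp.dense_h_to_4h        -> mlp.c_fc      (transpose)
#   mlp.dense_4h_to_h        -> mlp.c_proj    (transpose)
```

The same per-head reordering has to be applied to query_key_value.bias before concatenating it into c_attn.bias.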
As @vdabravolski pointed out, Megatron rearranges the LayerNorm and residual connection in the transformer block. Maybe that's one of the differences you observed, @haven-jeon?
@usuyama, thanks for the reminder. I thought that part of the paper applied only to BERT, but looking at the Megatron-LM code, it seems to be shared with GPT-2.
https://github.com/NVIDIA/Megatron-LM/blob/1b3dfa2ff9fe1643e15ddd1cf775abcdb2146f13/megatron/model/transformer.py#L445 This part looks different from the HF transformers. 🤔
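The difference is easiest to see side by side. A minimal sketch (plain NumPy, with an unparameterized LayerNorm and a scaled-identity stand-in for the attention/MLP sublayer) contrasting the two residual orderings:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Unparameterized LayerNorm over the last dimension.
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def sublayer(x):
    # Stand-in for self-attention or the MLP.
    return 0.5 * x

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))

# Post-LN (original BERT, HF BertLayer): residual add first, then normalize.
post_ln_out = layer_norm(x + sublayer(x))

# Pre-LN (Megatron's rearrangement): normalize the input, residual add afterwards.
pre_ln_out = x + sublayer(layer_norm(x))
```

Because the residual path in the pre-LN form bypasses the normalization, weights trained under one ordering cannot simply be copied into an architecture that uses the other; a converter has to target a model class implementing the same ordering, which is part of why Transformers later added a dedicated MegatronBERT model class.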
Any news on this issue?
Any news on this?
Have not tried it but this exists: https://github.com/huggingface/transformers/tree/main/src/transformers/models/megatron_gpt2
Marking as stale. No activity in 60 days. Remove stale label or comment or this will be closed in 7 days.
1. Convert llama-2 from HuggingFace to Megatron-LM:
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>
2. Convert llama-2 from Megatron-LM to HuggingFace:
Step 1. Download this python script and save into Megatron-LM/tools/checkpoint/saver_llama2_hf.py
Step 2. Do the conversion
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>
But before converting LLaMA-2 from MGT to HF, you need to ensure that the following MGT parameters were set to the same default values as in HF during your training process:
- Set --norm-epsilon=1e-6,
- Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
- Note that neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
- Enable --disable-bias-linear.
Hi, are there any updates? I'm mostly interested in converting GPT-2/Bloom checkpoints.
Convert llama-2 from HuggingFace to Megatron-LM:
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL>
Save a llama-2 checkpoint from Megatron-LM as HuggingFace:
Step 1. Download this file to Megatron-LM/tools/checkpoint/saver_llama2_hf.py
Step 2. Do the conversion
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>
Step 3. Test
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(<SAVE_DIR>)
Works perfectly for me. I just changed --loader=llama2-hf to --loader=megatron, since we want to convert a Megatron checkpoint to HF.
Hi, are there any updates? I'm mostly interested in converting GPT-2/Bloom checkpoints.
They have a script for converting GPT-2 somewhere in HF's repo, under transformers/models/megatron-gpt2: https://huggingface.co/docs/transformers/model_doc/megatron_gpt2
Otherwise it should be somewhere in Megatron's repo.
Could you provide guidance on how to consolidate the weights of a module (specifically, ParallelMLP and ParallelAttention) into a plain PyTorch-compatible format? I am using a tensor-parallel size greater than 1, which results in the module's parameters being distributed across different ranks. How can I aggregate these to obtain the complete set of model weights?
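Not an official answer, but the general rule: ColumnParallelLinear splits the output dimension across ranks and RowParallelLinear splits the input dimension, so merging tensor-parallel shards is a concatenation along the split axis. A hedged NumPy sketch (shapes and sizes are illustrative; in a real checkpoint the shards live in per-rank files such as mp_rank_00/model_optim_rng.pt):

```python
import numpy as np

tp = 2                        # tensor-parallel world size
hidden, ffn = 768, 3072       # illustrative sizes from the BERT dump above

# Per-rank shards as loaded from each rank's checkpoint file. Each rank-r shard
# is filled with the value r here, just to make the merge visible.
h_to_4h_shards = [np.full((ffn // tp, hidden), r, dtype=np.float32) for r in range(tp)]
fourh_to_h_shards = [np.full((hidden, ffn // tp), r, dtype=np.float32) for r in range(tp)]

# ColumnParallelLinear (dense_h_to_4h, query_key_value): the output dimension
# was split, so concatenate along axis 0.
h_to_4h = np.concatenate(h_to_4h_shards, axis=0)        # -> (ffn, hidden)

# RowParallelLinear (dense_4h_to_h, attention.dense): the input dimension was
# split, so concatenate along axis 1.
fourh_to_h = np.concatenate(fourh_to_h_shards, axis=1)  # -> (hidden, ffn)
```

Caveats: query_key_value shards hold distinct subsets of attention heads, so simple concatenation along axis 0 only works if you preserve head order; ColumnParallelLinear biases are split like their weights, while RowParallelLinear biases are replicated on every rank (take one copy, don't concatenate). The tools/checkpoint/util.py script mentioned in this thread can also resave a checkpoint at a different tensor-parallel size.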
1. Convert llama-2 from HuggingFace to Megatron-LM:
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>
2. Convert llama-2 from Megatron-LM to HuggingFace:
Step 1. Download this Python script and save it to Megatron-LM/tools/checkpoint/saver_llama2_hf.py
Step 2. Do the conversion
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>
However, before converting LLaMA-2 from MGT to HF, you need to ensure that the following MGT parameters were set to the same default values as in HF during training:
- Set --norm-epsilon=1e-6,
- Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
- Neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
- Enable --disable-bias-linear.
Does this conversion script support GQA?
I found a script in transformers: https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py Has anyone tried this before? It seems to convert a GPT-2 model from the Megatron format to the Hugging Face format.
1. Convert llama-2 from HuggingFace to Megatron-LM:
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>
2. Convert llama-2 from Megatron-LM to HuggingFace:
Step 1. Download this python script and save it into Megatron-LM/tools/checkpoint/saver_llama2_hf.py
Step 2. Do the conversion
PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>
But before converting LLaMA-2 from MGT to HF, you need to ensure that the following MGT parameters were set to the same default values as in HF during your training process:
- Set --norm-epsilon=1e-6,
- Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
- Neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
- Enable --disable-bias-linear.
Does this support GQA?