
Huggingface <-> Megatron-LM Compatibility

Open usuyama opened this issue 5 years ago • 25 comments

Looking for a way to convert model weights between HuggingFace and Megatron-LM: (1) continual pretraining from pretrained HuggingFace weights, and (2) converting Megatron-LM model weights to the HuggingFace format.

It shouldn't be too difficult to adjust layer names/weights, but I'm hoping someone has already done this.

Related #3 (already closed but couldn't find the solution)
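The renaming idea can be sketched as a prefix-map over the checkpoint's state_dict. This is purely illustrative: the prefix pairs below are hypothetical examples, not a verified full mapping (and, as discussed below, renaming alone is not sufficient).

```python
# Sketch: rename Megatron-LM state_dict keys toward HuggingFace-style keys.
# The prefix pairs are illustrative only, not a verified mapping.
PREFIX_MAP = {
    "language_model.embedding.word_embeddings.": "embeddings.word_embeddings.",
    "language_model.embedding.position_embeddings.": "embeddings.position_embeddings.",
    "language_model.transformer.layers.": "encoder.layer.",
}

def rename_key(key: str) -> str:
    """Return the HF-style name for a Megatron-style parameter name."""
    for old, new in PREFIX_MAP.items():
        if key.startswith(old):
            return new + key[len(old):]
    return key  # leave unknown keys untouched

def convert_state_dict(megatron_sd: dict) -> dict:
    """Rename every key; tensor values are carried over unchanged."""
    return {rename_key(k): v for k, v in megatron_sd.items()}
```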

usuyama avatar Jul 06 '20 07:07 usuyama

Hmm, it seems not so straightforward to convert to the HuggingFace format. At least the LayerNorm locations don't seem to match.

Megatron-LM model structure:

BertModel(
  (language_model): TransformerLanguageModel(
    (embedding): Embedding(
      (word_embeddings): VocabParallelEmbedding()
      (position_embeddings): Embedding(512, 768)
      (tokentype_embeddings): Embedding(2, 768)
      (embedding_dropout): Dropout(p=0.1, inplace=False)
    )
    (transformer): ParallelTransformer(
      (layers): ModuleList(
        (0): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (1): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (2): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (3): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (4): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (5): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (6): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (7): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (8): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (9): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (10): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (11): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
      )
      (final_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
    )
    (pooler): Pooler(
      (dense): Linear(in_features=768, out_features=768, bias=True)
    )
  )
  (lm_head): BertLMHead(
    (dense): Linear(in_features=768, out_features=768, bias=True)
    (layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
  )
  (binary_head): Linear(in_features=768, out_features=2, bias=True)
)
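One mismatch is already visible in the dump above: Megatron fuses Q/K/V into a single query_key_value projection, while HF keeps separate query/key/value linears. A naive split sketch with a hypothetical helper; it assumes the fused weight is three stacked blocks [Q; K; V], whereas some Megatron versions interleave the blocks per attention head and would need a head-aware reordering first.

```python
import numpy as np

def split_qkv(qkv_weight, hidden):
    # Naive split: assumes the fused (3*hidden, hidden) weight is three
    # stacked blocks [Q; K; V]. Some Megatron versions interleave the
    # blocks per attention head, which would require a head-aware
    # reordering before this split.
    q = qkv_weight[0 * hidden:1 * hidden]
    k = qkv_weight[1 * hidden:2 * hidden]
    v = qkv_weight[2 * hidden:3 * hidden]
    return q, k, v

hidden = 768
fused = np.arange(3 * hidden * hidden, dtype=np.float32).reshape(3 * hidden, hidden)
q, k, v = split_qkv(fused, hidden)
```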

usuyama avatar Aug 20 '20 17:08 usuyama

For reference, the HuggingFace BertModel:

BertModel(
  (embeddings): BertEmbeddings(
    (word_embeddings): Embedding(30522, 768, padding_idx=0)
    (position_embeddings): Embedding(512, 768)
    (token_type_embeddings): Embedding(2, 768)
    (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
    (dropout): Dropout(p=0.1, inplace=False)
  )
  (encoder): BertEncoder(
    (layer): ModuleList(
      (0): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (1): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (2): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (3): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (4): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (5): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (6): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (7): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (8): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (9): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (10): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
      (11): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )
    )
  )
  (pooler): BertPooler(
    (dense): Linear(in_features=768, out_features=768, bias=True)
    (activation): Tanh()
  )
)

usuyama avatar Aug 20 '20 17:08 usuyama

Any thoughts/advice? @jaredcasper @PyxAI @harkous @raulpuric

usuyama avatar Aug 20 '20 17:08 usuyama

Any updates?

Beomi avatar Oct 29 '20 01:10 Beomi

Was interested in the same questions, @usuyama. See the excerpt from the Megatron paper. It does look like Megatron <-> HF conversion will require some updates on the HF side. [image: excerpt from the Megatron paper]

vdabravolski avatar Nov 19 '20 00:11 vdabravolski

Thanks, @vdabravolski

I still need to check the forward function for details, but as you pointed out, the ordering of the weights looks different.

Megatron-LM

        (11): ParallelTransformerLayer(
          (input_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (attention): ParallelSelfAttention(
            (query_key_value): ColumnParallelLinear()
            (attention_dropout): Dropout(p=0.1, inplace=False)
            (dense): RowParallelLinear()
            (output_dropout): Dropout(p=0.1, inplace=False)
          )
          (post_attention_layernorm): FusedLayerNorm(torch.Size([768]), eps=1e-05, elementwise_affine=True)
          (mlp): ParallelMLP(
            (dense_h_to_4h): ColumnParallelLinear()
            (dense_4h_to_h): RowParallelLinear()
            (dropout): Dropout(p=0.1, inplace=False)
          )

HuggingFace

      (11): BertLayer(
        (attention): BertAttention(
          (self): BertSelfAttention(
            (query): Linear(in_features=768, out_features=768, bias=True)
            (key): Linear(in_features=768, out_features=768, bias=True)
            (value): Linear(in_features=768, out_features=768, bias=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
          (output): BertSelfOutput(
            (dense): Linear(in_features=768, out_features=768, bias=True)
            (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
            (dropout): Dropout(p=0.1, inplace=False)
          )
        )
        (intermediate): BertIntermediate(
          (dense): Linear(in_features=768, out_features=3072, bias=True)
        )
        (output): BertOutput(
          (dense): Linear(in_features=3072, out_features=768, bias=True)
          (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
          (dropout): Dropout(p=0.1, inplace=False)
        )
      )

usuyama avatar Nov 25 '20 02:11 usuyama

I have the same question. Any new update?

amirj avatar Dec 16 '20 15:12 amirj

Curious about this too – I have a GPT2 model trained with Megatron and would love to get it imported into HF.

moyix avatar Feb 07 '21 16:02 moyix

In order to convert a Megatron GPT2 model to HF (HuggingFace Transformers) GPT2, I performed a layer-level parameter conversion and verified it, but the converted model did not behave correctly.

The following is the core of the mapping.

Megatron GPT2 transformer layer and shape

layers.0.input_layernorm.weight, shape: torch.Size([1920])
layers.0.input_layernorm.bias, shape: torch.Size([1920])
layers.0.attention.query_key_value.weight, shape: torch.Size([5760, 1920])  # need transpose
layers.0.attention.query_key_value.bias, shape: torch.Size([5760])
layers.0.attention.dense.weight, shape: torch.Size([1920, 1920])
layers.0.attention.dense.bias, shape: torch.Size([1920])
layers.0.post_attention_layernorm.weight, shape: torch.Size([1920])
layers.0.post_attention_layernorm.bias, shape: torch.Size([1920])
layers.0.mlp.dense_h_to_4h.weight, shape: torch.Size([7680, 1920])   # need transpose
layers.0.mlp.dense_h_to_4h.bias, shape: torch.Size([7680])
layers.0.mlp.dense_4h_to_h.weight, shape: torch.Size([1920, 7680])  # need transpose
layers.0.mlp.dense_4h_to_h.bias, shape: torch.Size([1920])

HF GPT2 transformer layer and shape

transformer.h.0.ln_1.weight, shape: torch.Size([1920])
transformer.h.0.ln_1.bias, shape: torch.Size([1920])
transformer.h.0.attn.bias, shape: torch.Size([1, 1, 1920, 1920])
transformer.h.0.attn.masked_bias, shape: torch.Size([])
transformer.h.0.attn.c_attn.weight, shape: torch.Size([1920, 5760])
transformer.h.0.attn.c_attn.bias, shape: torch.Size([5760])
transformer.h.0.attn.c_proj.weight, shape: torch.Size([1920, 1920])
transformer.h.0.attn.c_proj.bias, shape: torch.Size([1920])
transformer.h.0.ln_2.weight, shape: torch.Size([1920])
transformer.h.0.ln_2.bias, shape: torch.Size([1920])
transformer.h.0.mlp.c_fc.weight, shape: torch.Size([1920, 7680])
transformer.h.0.mlp.c_fc.bias, shape: torch.Size([7680])
transformer.h.0.mlp.c_proj.weight, shape: torch.Size([7680, 1920])
transformer.h.0.mlp.c_proj.bias, shape: torch.Size([1920])
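Part of the shape mismatch above is a storage convention: HF GPT2 uses Conv1D modules that store weights as (in_features, out_features), while Megatron's linears store (out_features, in_features), hence the "need transpose" notes. A minimal numpy sketch with tiny stand-in shapes (the fused QKV per-head ordering, which a real conversion may also need to fix, is ignored here):

```python
import numpy as np

hidden, ffn = 8, 32  # tiny stand-ins for 1920 and 7680

# Megatron-style shapes: Linear stores (out_features, in_features).
meg = {
    "attention.query_key_value.weight": np.zeros((3 * hidden, hidden)),
    "mlp.dense_h_to_4h.weight": np.zeros((ffn, hidden)),
    "mlp.dense_4h_to_h.weight": np.zeros((hidden, ffn)),
}

# HF GPT2 Conv1D stores (in_features, out_features), so transpose these.
hf = {
    "attn.c_attn.weight": meg["attention.query_key_value.weight"].T,
    "mlp.c_fc.weight": meg["mlp.dense_h_to_4h.weight"].T,
    "mlp.c_proj.weight": meg["mlp.dense_4h_to_h.weight"].T,
}
```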

In the case of attn.bias and masked_bias, the values were the same as those implemented in Megatron GPT2, so they were ignored during conversion. All other parameters were converted, but the generated outputs of HF GPT2 were different from those of Megatron GPT2.

I guess HF GPT2 and Megatron GPT2 differ in some layer-level implementation details. If you have any ideas on this part, please let me know.

haven-jeon avatar Feb 10 '21 04:02 haven-jeon

As @vdabravolski pointed out, Megatron rearranged LayerNorm and residual connection in the transformer block. Maybe that's one difference you observed, @haven-jeon ?
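The rearrangement amounts to two different block orderings. A minimal numpy sketch, where layer_norm is unscaled (no learned affine) and sublayer stands in for the attention or MLP sublayer:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Minimal LayerNorm over the last axis (no learned scale/shift).
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def post_ln_block(x, sublayer):
    # Original BERT order: sublayer -> residual add -> LayerNorm.
    return layer_norm(x + sublayer(x))

def pre_ln_block(x, sublayer):
    # Megatron order: LayerNorm -> sublayer -> residual add.
    return x + sublayer(layer_norm(x))
```

Because the two orderings compute different activations, renaming weights alone cannot make a checkpoint trained with one ordering behave like a model using the other.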

usuyama avatar Feb 10 '21 08:02 usuyama

@usuyama, thanks for reminding me. I thought that part of the paper was specific to BERT, but looking at the Megatron-LM code, it seems to be shared with GPT2.

https://github.com/NVIDIA/Megatron-LM/blob/1b3dfa2ff9fe1643e15ddd1cf775abcdb2146f13/megatron/model/transformer.py#L445 This part looks different from the HF transformers. 🤔

haven-jeon avatar Feb 10 '21 13:02 haven-jeon

Any news on this issue?

malteos avatar Mar 22 '22 18:03 malteos

Any news on this?

Symbolk avatar Aug 05 '22 03:08 Symbolk

Have not tried it but this exists: https://github.com/huggingface/transformers/tree/main/src/transformers/models/megatron_gpt2

chrisby avatar Jan 18 '23 14:01 chrisby

Marking as stale. No activity in 60 days. Remove stale label or comment or this will be closed in 7 days.

github-actions[bot] avatar Jul 10 '23 18:07 github-actions[bot]

Marking as stale. No activity in 60 days.

github-actions[bot] avatar Sep 19 '23 18:09 github-actions[bot]

1. Convert llama-2 from HuggingFace to Megatron-LM:

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>

2. Convert llama-2 from Megatron-LM to HuggingFace:

Step 1. Download this python script and save into Megatron-LM/tools/checkpoint/saver_llama2_hf.py

Step 2. Do the conversion

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>

But before converting LLaMA-2 from MGT to HF, you need to ensure that the following parameters in MGT were set to the same default values as in HF during your training process:

  1. Set --norm-epsilon=1e-6,
  2. Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
  3. Note that neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
  4. Enable --disable-bias-linear.

devymex avatar Nov 12 '23 10:11 devymex

Hi, are there any updates? I'm mostly interested in converting GPT-2/Bloom checkpoints.

TheRootOf3 avatar Dec 11 '23 22:12 TheRootOf3

Convert llama-2 from HuggingFace to Megatron-LM:

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL>

Convert a llama-2 checkpoint from Megatron-LM back to HuggingFace:

Step 1. Download this file to Megatron-LM/tools/checkpoint/saver_llama2_hf.py

Step 2. Do the conversion

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=megatron --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>

Step 3. Test

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(<SAVE_DIR>)

Works perfectly for me. I just changed --loader=llama2_hf to --loader=megatron, since we want to convert a Megatron checkpoint to HF.

CaesarWWK avatar Dec 14 '23 07:12 CaesarWWK

Hi, are there any updates? I'm mostly interested in converting GPT-2/Bloom checkpoints.

They have a script for converting GPT-2 somewhere in HF's repo, under transformers/models/megatron-gpt2: https://huggingface.co/docs/transformers/model_doc/megatron_gpt2

Otherwise it should be somewhere in Megatron's repo.

CaesarWWK avatar Dec 14 '23 07:12 CaesarWWK

Marking as stale. No activity in 60 days.

github-actions[bot] avatar Feb 23 '24 18:02 github-actions[bot]

Could you provide guidance on how to consolidate the weights of a module (specifically ParallelMLP and ParallelSelfAttention) into a PyTorch-compatible format? I am using a tensor-parallel size greater than 1, which results in the module's parameters being distributed across ranks. How can I aggregate these to obtain the complete set of model weights?
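For reference, the usual merge rule, as a minimal numpy sketch under the standard Megatron partitioning (ColumnParallelLinear splits the output dimension, so its per-rank weights concatenate along dim 0; RowParallelLinear splits the input dimension, so they concatenate along dim 1). For real checkpoints, Megatron's own checkpoint tools are the safer route:

```python
import numpy as np

def merge_column_parallel(shards):
    # ColumnParallelLinear: output dim is split across ranks -> concat rows.
    return np.concatenate(shards, axis=0)

def merge_row_parallel(shards):
    # RowParallelLinear: input dim is split across ranks -> concat columns.
    return np.concatenate(shards, axis=1)

# Example: tensor-parallel size 2, toy sizes for mlp.dense_h_to_4h / dense_4h_to_h.
tp, h, ffn = 2, 8, 32
h_to_4h_shards = [np.zeros((ffn // tp, h)) for _ in range(tp)]
four_h_to_h_shards = [np.zeros((h, ffn // tp)) for _ in range(tp)]

full_h_to_4h = merge_column_parallel(h_to_4h_shards)
full_4h_to_h = merge_row_parallel(four_h_to_h_shards)
```

Biases follow the same concatenation for column-parallel layers; a row-parallel layer's bias is not sharded, so any rank's copy can be used. The fused query_key_value weight may additionally need per-head reordering after merging.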

chenfengshijie avatar Feb 28 '24 04:02 chenfengshijie

1. Convert llama-2 from HuggingFace to Megatron-LM:

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>

2. Convert llama-2 from Megatron-LM to HuggingFace:

Step 1. Download this python script and save it as Megatron-LM/tools/checkpoint/saver_llama2_hf.py

Step 2. Do the conversion

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>

But before converting LLaMA-2 from MGT to HF, you need to ensure that the following parameters in MGT were set to the same default values as in HF during your training process:

  1. Set --norm-epsilon=1e-6,
  2. Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
  3. Neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
  4. Enable --disable-bias-linear.

Does this conversion script support GQA?

sudy-super avatar Mar 04 '24 10:03 sudy-super

Marking as stale. No activity in 60 days.

github-actions[bot] avatar May 03 '24 18:05 github-actions[bot]

I found a script in transformers: https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py. Has anyone tried this before? It seems to convert a GPT-2 checkpoint from Megatron format to HuggingFace format.

babu111 avatar May 15 '24 09:05 babu111

1. Convert llama-2 from HuggingFace to Megatron-LM:

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --loader=llama2_hf --load-dir=<HF_MODEL_DIR> --save-dir=<SAVE_DIR> --tokenizer-model=<TOKENIZER_MODEL_FILE>

2. Convert llama-2 from Megatron-LM to HuggingFace:

Step 1. Download this python script and save into Megatron-LM/tools/checkpoint/saver_llama2_hf.py

Step 2. Do the conversion

PYTHONPATH=$(pwd) tools/checkpoint/util.py --model-type=GPT --saver=llama2_hf --load-dir=<MEGATRON_CHECKPOINT_DIR> --save-dir=<SAVE_DIR>

But before converting LLaMA-2 from MGT to HF, you need to ensure that the following parameters in MGT were set to the same default values as in HF during your training process:

  1. Set --norm-epsilon=1e-6,
  2. Do not enable --apply-query-key-layer-scaling (or enable --no-query-key-layer-scaling in older versions),
  3. Neither a custom attention_mask nor position_ids takes effect in MGT's GPT models during training,
  4. Enable --disable-bias-linear.

Does this support GQA?

JiwenJ avatar May 31 '24 03:05 JiwenJ

Marking as stale. No activity in 60 days.

github-actions[bot] avatar Aug 07 '24 18:08 github-actions[bot]