
Add MLP Linears Argument


Addresses https://github.com/EleutherAI/cookbook/issues/36
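
This adds an -nl flag to set num_mlp_linears, which previously defaulted to 2 (see the Before config below), so architectures whose MLP has a third linear layer, e.g. gated activations like SwiGLU, are counted correctly. A minimal sketch of how the knob scales the MLP parameter count, under the assumption that each linear holds roughly ffn_expansion_factor * hidden_size**2 weights (the script's exact accounting may differ):

# Hedged sketch, not the script's code: assumes each MLP linear contributes
# ffn_expansion_factor * hidden_size**2 parameters per layer.
def mlp_params(num_layers, hidden_size, ffn_expansion_factor, num_mlp_linears):
    per_linear = ffn_expansion_factor * hidden_size ** 2
    return num_layers * num_mlp_linears * per_linear

delta = mlp_params(80, 8192, 3, 3) - mlp_params(80, 8192, 3, 2)
print(f"{delta / 1e9:.2f} B")  # ~16.11 B, the gap between 72.3 B and 56.19 B below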

Before:

$ python calc/calc_transformer_mem.py --infer --high-prec-bytes-per-val 4 --low-prec-bytes-per-val 1 --num-gpus 2 --zero-stage 3 -ca -b 1 -s 1024 -v 152064 -hs 8192 -a 64 -l 80 -kv 1 -ff 3

Calculating memory with training configuration: {'num_gpus': 2, 'tensor_parallel_size': 1, 'pipeline_parallel_size': 1, 'partition_activations': False, 'zero_stage': 3, 'zero_allgather_bucket_size': 500000000.0, 'zero3_max_live_params': 1000000000.0, 'checkpoint_activations': True, 'batch_size_per_gpu': 1, 'sequence_length': 1024, 'vocab_size': 152064, 'hidden_size': 8192, 'num_attention_heads': 64, 'num_layers': 80, 'ffn_expansion_factor': 3, 'num_mlp_linears': 2, 'infer': True, 'kv_size_ratio': 1.0, 'is_mixed_precision': True, 'high_prec_bytes_per_val': 4, 'low_prec_bytes_per_val': 1, 'bytes_per_grad_ele': 4, 'num_experts': 0, 'expert_parallelism': 1, 'misc_mem_gib': 0}

Number of Parameters: 56.19 B

*** Per-GPU Memory
Per-GPU Activation Memory: 0.14 GiB
Per-GPU Model Memory: 26.17 GiB
Per-GPU KV Cache Memory: 1.25 GiB

Per-GPU Memory Required for Inference: 27.56 GiB

*** Total GPU Memory for a Single Model Replica
Total Activation Memory: 0.14 GiB
Total Model Memory: 52.33 GiB
Total KV Cache Memory: 2.50 GiB

Total GPU Memory Required to Store a Complete Model Replica for Inference: 54.97 GiB

After:

$ python calc/calc_transformer_mem.py --infer --high-prec-bytes-per-val 4 --low-prec-bytes-per-val 1 -nl 3 --num-gpus 2 --zero-stage 3 -ca -b 1 -s 1024 -v 152064 -hs 8192 -a 64 -l 80 -kv 1 -ff 3

Calculating memory with training configuration: {'num_gpus': 2, 'tensor_parallel_size': 1, 'pipeline_parallel_size': 1, 'partition_activations': False, 'zero_stage': 3, 'zero_allgather_bucket_size': 500000000.0, 'zero3_max_live_params': 1000000000.0, 'checkpoint_activations': True, 'batch_size_per_gpu': 1, 'sequence_length': 1024, 'vocab_size': 152064, 'hidden_size': 8192, 'num_attention_heads': 64, 'num_layers': 80, 'ffn_expansion_factor': 3, 'num_mlp_linears': 3, 'infer': True, 'kv_size_ratio': 1.0, 'is_mixed_precision': True, 'high_prec_bytes_per_val': 4, 'low_prec_bytes_per_val': 1, 'bytes_per_grad_ele': 4, 'num_experts': 0, 'expert_parallelism': 1, 'misc_mem_gib': 0}

Number of Parameters: 72.3 B

*** Per-GPU Memory
Per-GPU Activation Memory: 0.14 GiB
Per-GPU Model Memory: 33.67 GiB
Per-GPU KV Cache Memory: 1.25 GiB

Per-GPU Memory Required for Inference: 35.06 GiB

*** Total GPU Memory for a Single Model Replica
Total Activation Memory: 0.14 GiB
Total Model Memory: 67.33 GiB
Total KV Cache Memory: 2.50 GiB

Total GPU Memory Required to Store a Complete Model Replica for Inference: 69.97 GiB
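
A quick sanity check (back-of-envelope arithmetic, not script output): the third MLP linear adds roughly 16.11 B parameters, and at --low-prec-bytes-per-val 1 those weights cost about 15 GiB, matching the total model memory jump from 52.33 GiB to 67.33 GiB:

extra_params = 80 * 3 * 8192 ** 2       # one extra linear per layer, ~16.11 B params
extra_gib = extra_params * 1 / 2 ** 30  # stored at 1 byte per value for inference
print(f"{extra_gib:.2f} GiB")           # ~15.00 GiB = 67.33 - 52.33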

Also cleaned up args indentation.

Quentin-Anthony, Apr 06 '24 15:04