
How to get the throughput of the training process?

artetaout opened this issue 2 years ago · 0 comments

The training log is:

ng_rate': 5.263157894736842e-06, 'epoch': 0.02}                                                                                                                                 
{'loss': 1.2607, 'learning_rate': 6.31578947368421e-06, 'epoch': 0.03}                                                                                                                                  
{'loss': 1.2007, 'learning_rate': 7.368421052631579e-06, 'epoch': 0.03}                                                                                                                                 
{'loss': 1.1451, 'learning_rate': 8.421052631578948e-06, 'epoch': 0.04}                                                                                                                                 
{'loss': 1.1491, 'learning_rate': 9.473684210526315e-06, 'epoch': 0.04}                                                                                                                                 
{'loss': 1.1431, 'learning_rate': 1.0526315789473684e-05, 'epoch': 0.05}                                                                                                                                
{'loss': 1.1513, 'learning_rate': 1.1578947368421053e-05, 'epoch': 0.05}                                                                                                                                
{'loss': 1.1783, 'learning_rate': 1.263157894736842e-05, 'epoch': 0.06}                                                                                                                                 
{'loss': 1.1814, 'learning_rate': 1.3684210526315791e-05, 'epoch': 0.06}                                                                                                                                
{'loss': 1.1815, 'learning_rate': 1.4736842105263159e-05, 'epoch': 0.07}                                                                                                                                
{'loss': 1.1349, 'learning_rate': 1.578947368421053e-05, 'epoch': 0.07}                                                                                                                                 
{'loss': 1.1993, 'learning_rate': 1.6842105263157896e-05, 'epoch': 0.08}                                                                                                                                
{'loss': 1.0783, 'learning_rate': 1.7894736842105264e-05, 'epoch': 0.08}                                                                                                                                
{'loss': 1.0883, 'learning_rate': 1.894736842105263e-05, 'epoch': 0.09}                                                                                                                                 
{'loss': 1.127, 'learning_rate': 2e-05, 'epoch': 0.09}                                                                                                                                                  
  3%|█████                              | 19/609 [01:26<37:47, 3.84s/it]

How can I get throughput data? My running command is:

WANDB_MODE=disabled torchrun --nproc_per_node=8 --master_port=9999 train.py \
    --model_name_or_path /workspace/llama-7b-hf/ \
    --data_path ./alpaca_data.json \
    --bf16 True \
    --output_dir output \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --fsdp "full_shard auto_wrap"
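
A rough estimate can already be read off the progress bar: at about 3.84 s per optimizer step with the batch settings above, the arithmetic works out as in the sketch below. The inputs (the 3.84 s/it figure, the 4 * 8 * 8 effective batch, and the 512 sequence length from Alpaca's default model_max_length) are assumptions taken from this particular run, not something the Trainer reports itself.

seconds_per_step = 3.84                # "3.84s/it" from the tqdm progress bar above
effective_batch  = 4 * 8 * 8           # per_device_train_batch_size * nproc_per_node * gradient_accumulation_steps = 256
samples_per_sec  = effective_batch / seconds_per_step   # ~66.7 samples/s
tokens_per_sec   = samples_per_sec * 512                # upper bound: assumes every sample is padded to 512 tokens
print(f"{samples_per_sec:.1f} samples/s, ~{tokens_per_sec:.0f} tokens/s")

For a live number while training runs, one option is a small custom TrainerCallback registered in train.py before trainer.train(). The sketch below is not part of the repo; ThroughputCallback and seq_len are hypothetical names, and the tokens/s figure is approximate because it ignores per-sample padding. Recent transformers versions also include train_samples_per_second and train_steps_per_second in the metrics returned by trainer.train() once the run finishes.

import time
from transformers import TrainerCallback

class ThroughputCallback(TrainerCallback):
    """Print samples/s and an approximate tokens/s each time the Trainer logs."""

    def __init__(self, seq_len=512):
        self.seq_len = seq_len      # assumed max sequence length (Alpaca's model_max_length default)
        self.last_time = None
        self.last_step = 0

    def on_log(self, args, state, control, logs=None, **kwargs):
        now = time.time()
        if self.last_time is not None and state.global_step > self.last_step:
            steps = state.global_step - self.last_step
            # samples consumed per optimizer step across all ranks
            effective_batch = (args.per_device_train_batch_size
                               * args.gradient_accumulation_steps
                               * args.world_size)
            samples_per_sec = steps * effective_batch / (now - self.last_time)
            if state.is_world_process_zero:
                print(f"step {state.global_step}: {samples_per_sec:.1f} samples/s, "
                      f"~{samples_per_sec * self.seq_len:.0f} tokens/s")
        self.last_time = now
        self.last_step = state.global_step

In train.py this would be hooked up with trainer.add_callback(ThroughputCallback(seq_len=512)) just before the trainer.train() call.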

artetaout · Apr 25 '23 12:04