SFT Mistral
Running SFT on Mistral-7B-v0.3 with the command below fails with:

```
torch.distributed.elastic.multiprocessing.errors.ChildFailedError
```

```bash
torchrun --nproc_per_node=4 --master_port=8085 train.py \
    --model_name_or_path ../models_hub/Mistral-7B-v0.3 \
    --data_path ./alpaca_data.json \
    --bf16 True \
    --output_dir output \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --tf32 True
```
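Note that `ChildFailedError` is only the elastic launcher's wrapper; the real traceback from the failing rank is printed above it in the log. One likely culprit in this command, assuming the stock `train.py` and a `transformers` release with Mistral support: Mistral's decoder block is named `MistralDecoderLayer`, not `LlamaDecoderLayer`, so FSDP auto-wrap cannot find the class to wrap and every worker aborts. A quick sanity check, under that assumption:

```bash
# Confirm the class name that --fsdp_transformer_layer_cls_to_wrap should receive.
# If this import succeeds, pass 'MistralDecoderLayer' instead of 'LlamaDecoderLayer'.
python -c "from transformers.models.mistral.modeling_mistral import MistralDecoderLayer; print(MistralDecoderLayer.__name__)"
```

If that is the cause, rerunning with `--fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer'` should get past the crash.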
I also ran into this error. Could you share more details about your environment (GPU count/type, CUDA, torch, and transformers versions)?
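For reference, a minimal way to gather those details (the first utility ships with torch, the second with transformers):

```bash
# Prints OS, CUDA, GPU, and torch version information.
python -m torch.utils.collect_env
# Prints transformers, accelerate, and platform versions.
transformers-cli env
```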