DeepSpeedExamples
[step1_supervised_finetuning] run_chinese.sh error with deepspeed config
When I run the script bash training_scripts/other_language/run_chinese.sh, I encounter the following error:
Traceback (most recent call last):
  File "xxx/DeepSpeedExamples/applications/DeepSpeed-Chat/training/step1_supervised_finetuning/main.py", line 339, in <module>
    main()
  File "xxx/DeepSpeedExamples/applications/DeepSpeed-Chat/training/step1_supervised_finetuning/main.py", line 284, in main
    model, optimizer, _, lr_scheduler = deepspeed.initialize(
  File "/home/admin/miniconda3/envs/DS_chat/lib/python3.9/site-packages/deepspeed/__init__.py", line 137, in initialize
    assert config is None, "Not sure how to proceed, we were given deepspeed configs in the deepspeed arguments and deepspeed.initialize() function call"
AssertionError: Not sure how to proceed, we were given deepspeed configs in the deepspeed arguments and deepspeed.initialize() function call
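
Looking at the line the traceback points to, my (unverified) reading of deepspeed/__init__.py is that initialize() refuses to proceed whenever a config arrives both through the parsed command-line arguments and through the config= keyword. Roughly this check, with the signature simplified:

# Paraphrase of the check around deepspeed/__init__.py line 137, as I read it;
# not the verbatim source, and the real signature has more parameters.
def initialize(args=None, model=None, config=None, **kwargs):
    if hasattr(args, "deepspeed_config") and args.deepspeed_config is not None:
        # A config was already supplied via --deepspeed_config, so passing a
        # second one through config= is ambiguous and triggers the assertion.
        assert config is None, (
            "Not sure how to proceed, we were given deepspeed configs in the "
            "deepspeed arguments and deepspeed.initialize() function call")
        config = args.deepspeed_config
    ...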
So the message seems to complain about my DeepSpeed config, but I don't see what is actually wrong with it. Below is my DeepSpeed config JSON.
{
  "bfloat16": {
    "enabled": "auto"
  },
  "fp16": {
    "enabled": "auto",
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "initial_scale_power": 16,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": "auto",
      "eps": "auto",
      "weight_decay": "auto"
    }
  },
  "scheduler": {
    "type": "WarmupLR",
    "params": {
      "warmup_min_lr": "auto",
      "warmup_max_lr": "auto",
      "warmup_num_steps": "auto"
    }
  },
  "zero_optimization": {
    "stage": 1
  },
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "steps_per_print": 1000
}
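
For context, here is a minimal sketch of the collision as I understand it. The names are simplified: ds_config below is a placeholder for the dict that main.py builds internally, and the model is a stand-in.

# Minimal sketch, assuming a single-GPU launch; ds_config is a placeholder,
# not the real dict that main.py constructs.
import argparse

import deepspeed
import torch

parser = argparse.ArgumentParser()
parser = deepspeed.add_config_arguments(parser)  # adds --deepspeed_config etc.
args = parser.parse_args()  # launched with --deepspeed_config ./ds_config.json

model = torch.nn.Linear(4, 2)  # stand-in for the real model
ds_config = {"train_batch_size": 8, "train_micro_batch_size_per_gpu": 8}

# args.deepspeed_config is set by the CLI flag AND config= is passed here, so
# initialize() hits the AssertionError above; dropping either source avoids it.
engine, optimizer, _, scheduler = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

If that reading is correct, main.py and my launch flag are each supplying a config, and initialize() sees both at once.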
If I remove the line --deepspeed_config ./ds_config.json \ from the shell script, it seems to work fine.
Can someone help explain? Or kindly provide a correct DeepSpeed config?