Strange DeBERTa-V2 behavior in Optimum when training with ORT (training hangs or takes impossibly long)
System Info
Running with CUDA 11.5, Python 3.8, and torch 1.11. I installed the Python dependencies from requirements.txt in the text-classification example folder. I installed transformers from source, and tried running both with Optimum installed from source and with Optimum installed via pip; I got the same results in both cases.
Running in an Ubuntu image on a VM with 8 V100 GPUs.
Who can help?
@JingyaHuang @echarlaix
Information
- [X] The official example scripts
- [ ] My own modified scripts
Tasks
- [ ] An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
Reproduction
After properly setting up the environment, I run the following:
python -m torch.distributed.run --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge --task_name MRPC --do_train --max_seq_length 128 --per_device_train_batch_size 1 --learning_rate 3e-6 --max_steps 8000 --output_dir /tmp/deberta_res --overwrite_output_dir --logging_steps 8000 --fp16 --sharded_ddp simple --num_train_epochs 1
It downloads & tokenizes the dataset, then, when it begins setting up ONNX and reaches the line that runs the ORTTrainer, it hangs for around 7 minutes 40 seconds (give or take 5 seconds) with no terminal output and GPU utilization at 0. After that wait, it continues as usual, but trains very slowly and prints a lot of logs about the ONNX graph. The terminal output scrolls so fast that it's hard to read the messages, and no status bar for training progress is visible. I let it train for over 4 days, and it still hadn't finished.
I ran the same arguments on the corresponding examples run_glue.py script from the Transformers repository without adding the Optimum ORTTrainer, and it finished training within an hour -- it also did not print out any terminal output beyond the expected status bars and warnings.
Finally, I tried modifying the examples run_glue.py script from the Transformers repository to use the Optimum ORTTrainer, and it also printed a lot of terminal output with the ONNX graph information, to the point that the status bar, if it was printed at all, was obscured.
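For reference, the modification boiled down to swapping the Trainer for the ORTTrainer, roughly as in the simplified sketch below (model, datasets, tokenizer, and metrics come from the unchanged parts of run_glue.py; the feature argument follows the Optimum example script of the version I used, so exact arguments may differ across Optimum versions):

```python
# Simplified sketch of the swap made in run_glue.py -- not the full script.
from transformers import TrainingArguments
from optimum.onnxruntime import ORTTrainer

training_args = TrainingArguments(
    output_dir="/tmp/deberta_res",
    per_device_train_batch_size=1,
    learning_rate=3e-6,
    max_steps=8000,
    fp16=True,
)

trainer = ORTTrainer(
    model=model,                        # DeBERTa-V2 sequence-classification model built earlier in the script
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    feature="sequence-classification",  # tells ORTTrainer which ONNX export configuration to use
)

trainer.train()
```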
I did not run into any error messages, just strange behavior: the hang before training, the flood of logs, and the unnaturally long training time.
Thanks for your time! Please let me know if I set up my environment incorrectly etc.
Expected behavior
Trains successfully -- I ran the corresponding examples run_glue.py script from the Transformers repository with the same arguments and it finished training within the hour.
I observe that in the implementation of DeBERTa in transformers, there are some numpy/math operations that lead to an incorrect export. See details here.
As fairscale distributed (simple) works correctly with ORTTrainer for other models, I suspect that the abnormal training behavior comes from ONNX subgraphs not being traced correctly.
I will open a PR in transformers to correct this, and then check whether this is the root of the issue.
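To illustrate the kind of pattern at fault (an illustrative sketch, not the actual DeBERTa code): values computed with Python-level math/numpy are evaluated once at trace time and baked into the exported graph, whereas keeping the computation in torch ops keeps it under the exporter's control:

```python
import math
import torch

class ScaleWithPythonMath(torch.nn.Module):
    def forward(self, q, k):
        # math.sqrt runs in plain Python during tracing: the result is a
        # Python float frozen into the exported ONNX graph as a constant,
        # which is the kind of pattern that breaks the export.
        scale = math.sqrt(q.size(-1))
        return torch.matmul(q, k.transpose(-1, -2)) / scale

class ScaleWithTorchOps(torch.nn.Module):
    def forward(self, q, k):
        # Building the scale as a torch tensor keeps the value and its dtype
        # visible to the exporter (this mirrors the style of the fix).
        scale = torch.sqrt(torch.tensor(q.size(-1), dtype=torch.float))
        return torch.matmul(q, k.transpose(-1, -2)) / scale
```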
Hi @carzh,
Some updates on the issue: the problem comes from the implementation of the DeBERTa models in transformers:
- The root cause of the failure is that XDropout didn't have a symbolic function. One has now been implemented by @garymm in https://github.com/huggingface/transformers/pull/17502 and has just been merged into the main branch of transformers (see the sketch of the pattern after this list).
- Another problem with the implementation of DeBERTa, as I mentioned, shall be fixed in https://github.com/huggingface/transformers/pull/18272; that PR fixes some problems we encountered during inference.
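For context, the pattern in question is roughly the following (a generic illustrative sketch, not the actual transformers change): a custom torch.autograd.Function only becomes ONNX-exportable once it defines a symbolic static method telling the exporter which ONNX ops to emit for it.

```python
import torch

class MyDropout(torch.autograd.Function):
    """Toy custom dropout standing in for an op the ONNX exporter doesn't know."""

    @staticmethod
    def forward(ctx, input, p):
        mask = (torch.rand_like(input) > p).to(input.dtype)
        ctx.save_for_backward(mask)
        ctx.p = p
        return input * mask / (1.0 - p)

    @staticmethod
    def backward(ctx, grad_output):
        (mask,) = ctx.saved_tensors
        return grad_output * mask / (1.0 - ctx.p), None

    @staticmethod
    def symbolic(g, input, p):
        # Map this custom Function onto the standard ONNX Dropout node
        # (opset >= 12) so torch.onnx.export can trace through it instead
        # of failing on an unknown autograd Function.
        ratio = g.op("Constant", value_t=torch.tensor(p, dtype=torch.float))
        training_mode = g.op("Constant", value_t=torch.tensor(True))
        output, _mask = g.op("Dropout", input, ratio, training_mode, outputs=2)
        return output
```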
I just tested with transformers after both fixes: distributed training now works for fp32, but fails for fp16 with the following error message:
RuntimeError: /onnxruntime_src/orttraining/orttraining/python/orttraining_pybind_state.cc:713 onnxruntime::python::addObjectMethodsForTraining(pybind11::module&, onnxruntime::python::ExecutionProviderRegistrationFn)::<lambda(onnxruntime::training::OrtModuleGraphBuilder*, const pybind11::bytes&, const onnxruntime::training::OrtModuleGraphBuilderConfiguration&)> [ONNXRuntimeError] : 1 : FAIL : Type Error: Type parameter (T) of Optype (MatMul) bound to different types (tensor(float) and tensor(float16) in node (MatMul_232).
It seems that the inputs of a MatMul node have mismatched dtypes, which is quite similar to the previous problem we met when training gpt2. I will continue debugging it this week. For now, I have put all the DeBERTa fixes here.
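To illustrate the failure mode (an illustrative sketch, not the actual DeBERTa code or the eventual fix): under fp16 training the activations are float16, so a float32 constant flowing into the same region of the graph can leave one node with a float input and a float16 input, which ONNX Runtime rejects when building the training graph. Casting the constant to the activation dtype keeps the exported graph type-consistent:

```python
import torch

def scaled_attention_scores(query, key):
    # query/key are float16 under mixed-precision training.
    scale = torch.sqrt(torch.tensor(query.size(-1), dtype=torch.float))
    # Without this cast, the float32 scale and the float16 activations can
    # end up bound to the same node in the exported graph, triggering the
    # "bound to different types (tensor(float) and tensor(float16))" error.
    scale = scale.to(query.dtype)
    return torch.matmul(query, key.transpose(-1, -2)) / scale
```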
Hi @carzh, I just opened a PR in transformers to fix this issue. I tested it on my end, and it enables distributed mixed-precision training with DeBERTa. Could you also test on your side by building transformers from this branch, to check whether it solves your issue? Thanks!
Thanks @JingyaHuang. Adding @zhijxu-MS, who can help verify this change.
@JingyaHuang @askhade this branch runs on my side.
Awesome, thanks for trying it out @zhijxu-MS @askhade!
The fix has been merged into transformers; closing the issue.