Accelerate + DeepSpeed
System Info
All packages are at their latest versions.
Information
- [ ] The official example scripts
- [X] My own modified scripts
Tasks
- [ ] One of the scripts in the `examples/` folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
Reproduction
I created a DeepSpeed config with `accelerate config`.
I have tried to train a 4-bit quantized model (bitsandbytes) using DeepSpeed ZeRO stage 2 or 3 (I tried many variations of each stage).
However, training always fails with: `ValueError: .to is not supported for 4-bit or 8-bit models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct dtype`
Are Accelerate's DeepSpeed config and bitsandbytes incompatible?
How can this be solved?
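For context, a ZeRO-2 config generated by `accelerate config` typically looks like the sketch below; the exact values are assumptions, since the reporter's actual config is not shown:

```yaml
# Hypothetical Accelerate DeepSpeed config (default_config.yaml) for ZeRO-2.
# All values are illustrative; the reporter's real config may differ.
compute_environment: LOCAL_MACHINE
debug: false
deepspeed_config:
  gradient_accumulation_steps: 1
  offload_optimizer_device: none
  offload_param_device: none
  zero3_init_flag: false
  zero_stage: 2        # set to 3 (and zero3_init_flag: true) for the ZeRO-3 attempts
distributed_type: DEEPSPEED
mixed_precision: bf16
num_machines: 1
num_processes: 2
```

A full reproducer would also include the training script and the `accelerate launch` command, since the `.to` error can depend on how the model is loaded relative to `Accelerator.prepare`.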
Expected behavior
I want to be able to train a 4-bit quantized model (bitsandbytes) with DeepSpeed ZeRO stage 2 or 3.
Can you give us a full reproducer? I believe this combination should work (I will verify shortly), and it may be an issue in the code, but first it would be good to have a full reproducer.
You can check with the docs here: https://huggingface.co/docs/peft/accelerate/deepspeed#compatibility-with-bitsandbytes-quantization--lora
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.