
Support 8bit Optimizers on CPU

Open fzyzcjy opened this issue 1 year ago • 3 comments

Feature request

Hi, thanks for the library! It would be great if the optimizers could run on CPU. For example, I would like to try adamw_8bit to full-finetune an 8B model on a 24GB GPU (RTX 4090). With DeepSpeed offload the GPU memory is fine, but the CPU memory requirement is still very large, partly because it uses standard AdamW, which needs 8 x 8 = 64 GB for the optimizer state alone.

This package provides the super helpful adamw_8bit, so I would appreciate it if it could be used in the setting above, hopefully reducing the 64 GB to 8 x 2 = 16 GB of optimizer state.
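For context, here is the back-of-the-envelope arithmetic behind the 64 GB vs. 16 GB figures (a rough sketch that counts only the two Adam moment buffers and ignores weights, gradients, and activations):

```python
# Rough optimizer-state arithmetic for an 8B-parameter model
# (illustration only; counts just the two Adam moment buffers).
params = 8e9

# Standard AdamW keeps two FP32 moments (exp_avg, exp_avg_sq) per parameter:
fp32_state_gb = params * 2 * 4 / 1e9   # 8 bytes/param -> ~64 GB

# An 8-bit optimizer keeps the same two moments at 1 byte each:
int8_state_gb = params * 2 * 1 / 1e9   # 2 bytes/param -> ~16 GB

print(f"AdamW FP32 optimizer state: ~{fp32_state_gb:.0f} GB")
print(f"AdamW 8-bit optimizer state: ~{int8_state_gb:.0f} GB")
```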

Motivation

(see above)

Your contribution

Yes

fzyzcjy avatar Oct 25 '24 01:10 fzyzcjy

I would like this as well.

werruww avatar Oct 26 '24 21:10 werruww

See #1021. I proposed that this should be a step on the path to implementing cross-platform support (especially Apple Silicon, since CUDA and Apple Silicon won't run on the same hardware, which makes validation complicated).

rickardp avatar Nov 18 '24 09:11 rickardp

How can I run 4-bit on CPU only?

werruww avatar Feb 23 '25 05:02 werruww

This also appears to be needed for accelerate, as PyTorch 2.9.1 (at the time of writing) does not support operators such as bitsandbytes::optimizer_update_8bit_blockwise on either MPS or CPU (even with PYTORCH_ENABLE_MPS_FALLBACK=1 set).
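For reference, a minimal sketch that should surface the limitation described above (assuming a recent bitsandbytes build; the exact error message may differ):

```python
import torch
import bitsandbytes as bnb

device = "cpu"  # or "mps" on Apple Silicon

# A tiny stand-in model; any module works for reproducing the issue.
model = torch.nn.Linear(16, 16).to(device)
opt = bnb.optim.AdamW8bit(model.parameters(), lr=1e-4)

loss = model(torch.randn(4, 16, device=device)).sum()
loss.backward()

# At the time of writing, this step is expected to fail on CPU/MPS because
# bitsandbytes::optimizer_update_8bit_blockwise has no kernel for those devices.
opt.step()
```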

djtuBIG-MaliceX avatar Dec 25 '25 05:12 djtuBIG-MaliceX