[BUG] ZeRO is unsupported in init_inference
Describe the bug
According to https://deepspeed.readthedocs.io/en/stable/inference-init.html#deepspeed.inference.config.DeepSpeedInferenceConfig.zero, ZeRO is supported in `init_inference`. However, passing a ZeRO configuration to `init_inference` makes no difference.
To Reproduce
Call init_inference like so:
```python
zero_config = deepspeed.runtime.zero.config.DeepSpeedZeroConfig(
    stage=3,
    offload_param={"device": "cpu"},
)
self.model = deepspeed.init_inference(
    self.model,
    zero=zero_config,
)
```
With a model that will not fit on GPU memory.
Expected behavior
I expected ZeRO offload to work.
ds_report output
```
Setting ds_accelerator to cuda (auto detect)
Setting ds_accelerator to cuda (auto detect)
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-devel package with yum
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] please install triton==1.0.0 if you want to use sparse attention
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
utils .................. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/opt/conda/envs/inference/lib/python3.9/site-packages/torch']
torch version .................... 1.12.1+cu116
deepspeed install path ........... ['/home/scratch.blong_sw/DeepSpeed/deepspeed']
deepspeed info ................... 0.9.3+52907a66, 52907a66, master
torch cuda version ............... 11.6
torch hip version ................ None
nvcc version ..................... 11.6
deepspeed wheel compiled w. ...... torch 1.12, cuda 11.6
```
Additional information
The `zero` argument is never handled in `InferenceEngine`: https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/inference/engine.py#L106
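As context, ZeRO-3 CPU offload is usually driven through `deepspeed.initialize` with a `zero_optimization` config dict rather than through `init_inference`. A minimal sketch of such a config (this is a possible workaround path, not a confirmed fix; the exact fields shown are illustrative and the `initialize` call is commented out):

```python
# Sketch of a config for ZeRO-3 with parameter offload to CPU,
# as it would be passed to deepspeed.initialize() (not init_inference).
# Field values here are illustrative assumptions, not a verified fix.
ds_config = {
    "train_batch_size": 1,  # initialize() requires a batch size even for inference-only use
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_param": {"device": "cpu"},
    },
}

# Hypothetical usage (model is the user's nn.Module):
# engine, *_ = deepspeed.initialize(model=model, config=ds_config)
# engine.module.eval()
```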