
Confusion about DeepSpeed Inference


Hi, I read the DeepSpeed docs and have the following questions:

(1) What's the difference between these methods for running inference on LLMs? (All three are sketched below.)

a. call deepspeed.initialize, then write your own generation code

b. call deepspeed.init_inference, then write your own generation code

c. use DeepSpeed-MII for inference
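
For context, here is a minimal sketch of what each entry point looks like, as far as I can tell from the docs. The model id, parallel degree, and generation arguments are placeholders, and argument names such as mp_size may differ across DeepSpeed versions:

```python
import torch
import deepspeed
from transformers import AutoModelForCausalLM

MODEL_ID = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint


def option_a_initialize():
    # (a) deepspeed.initialize builds the training-oriented engine; with a
    # ZeRO stage-3 config it can also be used for generation, since ZeRO-3
    # partitions the parameters across ranks.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float16)
    ds_config = {"train_batch_size": 1, "zero_optimization": {"stage": 3}}
    engine, _, _, _ = deepspeed.initialize(model=model, config=ds_config)
    engine.module.eval()
    return engine.module  # then call .generate(...) on this as usual


def option_b_init_inference():
    # (b) deepspeed.init_inference wraps the model in the dedicated
    # inference engine, with optimized kernels and tensor parallelism.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float16)
    engine = deepspeed.init_inference(
        model,
        mp_size=2,                       # tensor-parallel degree (number of GPUs)
        dtype=torch.float16,
        replace_with_kernel_inject=True,
    )
    return engine.module  # again, call .generate(...) on this


def option_c_mii():
    # (c) DeepSpeed-MII is a higher-level serving layer on top of the
    # DeepSpeed inference stack; it handles generation for you.
    import mii
    pipe = mii.pipeline(MODEL_ID)
    return pipe(["Hello, world"], max_new_tokens=64)
```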

(2) Which of these is memory-friendly? For example, if I want to run inference on a 70B model, which of them support model parallelism that shards the model parameters across GPUs?
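
(Rough arithmetic for why this matters: a 70B-parameter model in fp16 needs about 70e9 × 2 bytes ≈ 140 GB for the weights alone, which does not fit on a single 80 GB A100. So the parameters have to be either sharded across GPUs, via ZeRO-3 partitioning or tensor parallelism, or offloaded to CPU/NVMe.)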

(3) What's the current best practice for running inference on a 70B LLaMA model? For example (a config sketch follows this list):

a. ZeRO-3 + CPU offload (1x A100)

b. ZeRO-3 (2x A100)

...
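
For option (a), I imagine a ZeRO-3 config along these lines; this is only a sketch, and the batch size and offload settings are illustrative guesses, not tuned values:

```python
# ZeRO-3 + CPU parameter offload, option (a): aims to fit a 70B fp16 model
# on a single A100 by keeping the partitioned parameters in CPU memory and
# fetching them layer by layer during the forward pass.
ds_config = {
    "train_batch_size": 1,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_param": {"device": "cpu", "pin_memory": True},
    },
}

# For option (b) on 2x A100, drop "offload_param" and launch with two ranks
# (e.g. `deepspeed --num_gpus 2 script.py`); ZeRO-3 then partitions the
# parameters across the two GPUs instead of offloading them. Offloading
# trades GPU memory for PCIe transfers, so (a) is typically much slower.
```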

Thank you!

ZekaiGalaxy · Mar 25 '24

Hello, did you find an answer?

teis-e · May 17 '24