Olatunji Ruwase
@YukinoshitaKaren, can you please try with a single GPU?
> I have already allocate 50g memory, but still failed

Can you explain what this means?
@Chevolier, can you please clarify the program you are referring to? It would be helpful to share what you are running and the expected output. Thanks!
@binderwang, can you please share the command line and stack trace of the OOM? Thanks!
@jfc4050, thanks for sharing this update. Are you also doing NVMe offload, similar to the original post?
@butterluo, unfortunately this thread went cold almost 2 years ago. The code has changed substantially. Can you please open a new issue and share your experience? Thanks!
Yes, it is possible to install DeepSpeed in a CPU-only environment. Currently, only the CPUAdam and CPUAdagrad features are available. However, more features will be enabled soon thanks to Intel [contributions](https://github.com/microsoft/DeepSpeed/pull/3041).
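For reference, here is a minimal sketch (my own, not from this thread) of checking that the CPUAdam extension can be built and loaded in a CPU-only install; the `CPUAdamBuilder` import path and methods assume a recent DeepSpeed version, so please verify against yours:

```python
# Sketch: verify the CPUAdam op is usable in a CPU-only DeepSpeed install.
# Assumes deepspeed.ops.op_builder.CPUAdamBuilder is available in your version.
from deepspeed.ops.op_builder import CPUAdamBuilder

builder = CPUAdamBuilder()
print("CPUAdam compatible:", builder.is_compatible())  # checks build prerequisites
cpu_adam_module = builder.load()                        # JIT-compiles the op if no prebuilt binary exists
print("Loaded:", cpu_adam_module is not None)
```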
Earlier PRs:
- https://github.com/microsoft/DeepSpeed/pull/2507
- https://github.com/microsoft/DeepSpeed/pull/2775

CI: https://github.com/microsoft/DeepSpeed/blob/master/.github/workflows/nv-torch-latest-cpu.yml
It depends on the feature you want to use. As I said earlier, only CPUAdam and CPUAdagrad are currently available in CPU-only environments, and the usage is the same in both cases (see the sketch below).
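As a rough illustration (my own sketch, not verified output from this thread), `DeepSpeedCPUAdam` can be used like a standard PyTorch optimizer on CPU tensors; `DeepSpeedCPUAdagrad` (from `deepspeed.ops.adagrad`) follows the same pattern:

```python
# Sketch: DeepSpeedCPUAdam as a drop-in optimizer for a small CPU model.
# Argument names follow the torch.optim.Adam style; check your DeepSpeed version.
import torch
from deepspeed.ops.adam import DeepSpeedCPUAdam

model = torch.nn.Linear(16, 4)                          # parameters live in CPU memory
optimizer = DeepSpeedCPUAdam(model.parameters(), lr=1e-3)

x = torch.randn(8, 16)
loss = model(x).sum()
loss.backward()
optimizer.step()                                        # CPUAdam update runs on the host
optimizer.zero_grad()
```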
@AbhayGoyal, yes, the Intel PRs would enable CPU inference. Please keep an eye on those. You can also ask questions directly on those PRs.