Gradients are None after booster.backward

ArnaudFickinger opened this issue on Jun 11 '24 · 10 comments

After calling booster.backward(loss=loss, optimizer=optimizer), all gradients of model.module are None. Is there a way to access the gradients?
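For context, a minimal sketch of the pattern I'm running (the toy model, data, and plugin choice here are stand-ins for my actual setup, launched with torchrun / colossalai run):

```python
import colossalai
import torch
import torch.nn as nn
from colossalai.booster import Booster
from colossalai.booster.plugin import LowLevelZeroPlugin

colossalai.launch_from_torch()  # exact launch signature may vary by version

# toy stand-ins for the real model and optimizer
model = nn.Linear(16, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

booster = Booster(plugin=LowLevelZeroPlugin(stage=2))
model, optimizer, _, _, _ = booster.boost(model, optimizer)

x = torch.randn(8, 16).cuda()
loss = model(x).sum()
booster.backward(loss=loss, optimizer=optimizer)

# every .grad on the unwrapped module is None at this point
print(all(p.grad is None for p in model.module.parameters()))
```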

ArnaudFickinger · Jun 11 '24

I'm running into the same problem. Have you found a solution?

B-Soul · Jun 11 '24

Hey @ArnaudFickinger @B-Soul, could you please share the settings of your scripts?

botbw · Jun 11 '24

My code is part of my own ongoing research, so it is not convenient to share. But I switched the distributed framework to Hugging Face Accelerate, and the gradients are not None there, so I think there is a bug in the ColossalAI framework.

B-Soul · Jun 11 '24

Hi @B-Soul, a snippet of your optimizer/plugin settings would help. Also note that the gradient-access API may differ because of the optimizations applied: if you are using LowLevelZeroOptimizer or GeminiOptimizer, you can check these tests for gradient access: gemini and low-level.
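For example, with LowLevelZeroOptimizer the gradients are kept in the optimizer's gradient store rather than on param.grad, roughly as in the low-level test (a sketch; _grad_store and _grads_of_params are internal names and may change between versions):

```python
# After booster.backward(loss=loss, optimizer=optimizer), read the gradient shards
# held by LowLevelZeroOptimizer, keyed by param group index and id(param).
for group_id, group in enumerate(optimizer.param_groups):
    grads_by_param = optimizer._grad_store._grads_of_params[group_id]
    for name, param in model.module.named_parameters():
        shards = grads_by_param.get(id(param))
        if shards is not None:
            # list of flattened gradient shards owned by this rank
            print(group_id, name, [s.shape for s in shards])
```

GeminiOptimizer shards the parameters themselves, so the access pattern is different there; please follow the gemini test linked above.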

botbw · Jun 11 '24

@botbw thank you, the low-level snippet is working! By the way, which of Gemini or low-level should I use for best performance with 1 to 8 A100 GPUs and 500M to 2B trainable parameters?

ArnaudFickinger · Jun 11 '24

@ArnaudFickinger Glad to hear that! And we might work on the API to make it more intuitive.

Regarding performance: LowLevelZeroOptimizer implements ZeRO-1 and ZeRO-2, while GeminiOptimizer implements ZeRO-3 together with chunk-based contiguous memory optimization (i.e. better memory locality; you may check this doc for more information) to reduce communication cost.

Generally speaking, you should choose the plugin based on the ZeRO stage you intend to use; real-world performance is case-by-case and depends on the trade-off between computation and communication (see the sketch below for the two configurations).

Do let us know if you have further doubts :p
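For concreteness, the two setups look roughly like this (a sketch; the arguments shown are just examples and both plugins expose many more options):

```python
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin, LowLevelZeroPlugin

# ZeRO-1/2: optimizer states (and, at stage 2, gradients) are sharded across ranks,
# while the parameters themselves stay replicated on every rank.
zero_plugin = LowLevelZeroPlugin(stage=2)

# ZeRO-3 with chunk-based memory management: parameters are sharded as well,
# trading extra communication for lower per-GPU memory usage.
gemini_plugin = GeminiPlugin()

booster = Booster(plugin=zero_plugin)  # or Booster(plugin=gemini_plugin)
```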

botbw · Jun 12 '24

@botbw when I define 2 param_groups, the id() of the parameters in the second group does not match any key of optimizer._grad_store._grads_of_params[1].
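Concretely, this is the check that fails (a sketch; the two-group split is just how I build my optimizer, e.g. decayed vs. non-decayed params):

```python
# compare the id() of each param in a group against the keys of that group's grad store
grad_store = optimizer._grad_store._grads_of_params
for group_id, group in enumerate(optimizer.param_groups):
    keys = set(grad_store[group_id].keys())
    missing = [p for p in group["params"] if id(p) not in keys]
    print(f"group {group_id}: {len(missing)} params without a matching key")
# for me, every param in group 1 is reported as missing
```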

ArnaudFickinger · Jun 15 '24

@ArnaudFickinger That's unexpected, since each group is handled separately in the same way (like a for loop). Would you mind sharing the version (or commit) you are using, and a minimal repro if possible?

botbw · Jun 15 '24

@botbw I have written a minimal repro with a simple network, and in that case the keys actually match! I will take a closer look at my own code and get back to you if I believe the issue might still be ColossalAI-related.

ArnaudFickinger · Jun 15 '24

@ArnaudFickinger Sure, feel free to ask here or raise a new issue.

botbw · Jun 15 '24

Hey folks, I'm closing this issue since it has been stale for a while. Feel free to reopen it or submit a new issue if you still have any doubts.

botbw · Jul 19 '24