FlagEmbedding

finetuning failing

Open · riyaj8888 opened this issue 1 year ago · 2 comments

I'm getting the following error when following the finetune steps from the examples.

RuntimeError: Expected to mark a variable ready only once. This error is caused by one of the following reasons:
1) Use of a module parameter outside the forward function. Please make sure model parameters are not shared across multiple concurrent forward-backward passes. or try to use _set_static_graph() as a workaround if this module graph does not change during training loop.
2) Reused parameters in multiple reentrant backward passes. For example, if you use multiple checkpoint functions to wrap the same part of your model, it would result in the same set of parameters been used by different reentrant backward passes multiple times, and hence marking a variable ready multiple times. DDP does not support such use cases in default. You can try to use _set_static_graph() as a workaround if your module graph does not change over iterations.
Parameter at index 195 with name model.encoder.layer.11.output.LayerNorm.weight has been marked as ready twice. This means that multiple autograd engine hooks have fired for this particular parameter during this iteration.
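(For reference: the _set_static_graph() workaround named in the message corresponds to DDP's static_graph constructor flag in recent PyTorch. Below is a minimal sketch, assuming PyTorch >= 1.11 and a Hugging Face transformers model launched under torchrun; the model name and setup are illustrative, not FlagEmbedding's actual training code.)

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from transformers import AutoModel

# Assumes launch via torchrun, which sets the env vars init_process_group needs.
dist.init_process_group("nccl")
local_rank = dist.get_rank() % torch.cuda.device_count()
torch.cuda.set_device(local_rank)

# Illustrative model; the error above comes from a BERT-style encoder like this.
model = AutoModel.from_pretrained("BAAI/bge-base-en").cuda()
model.gradient_checkpointing_enable()  # reentrant checkpointing can re-fire DDP's hooks

# static_graph=True tells DDP the autograd graph is identical every iteration,
# which is the _set_static_graph() workaround the error message refers to.
model = DDP(model, device_ids=[local_rank], static_graph=True)
```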

riyaj8888 · Apr 30 '24 09:04

You should show your command and the environment so that I can analyze the possible cause.

staoxiao · Apr 30 '24 14:04

In my case, it happened when I didn't apply the --deepspeed setting.

In other words, since I wasn't applying --deepspeed, I shouldn't have applied the --gradient_checkpointing option either.
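(If you want to keep gradient checkpointing under plain DDP without deepspeed, one possible alternative is switching to non-reentrant checkpointing, which does not re-fire DDP's autograd hooks. A minimal sketch, assuming transformers >= 4.35 where TrainingArguments accepts gradient_checkpointing_kwargs; these are standard Hugging Face options, not FlagEmbedding-specific flags:)

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",  # illustrative path
    gradient_checkpointing=True,
    # use_reentrant=False selects torch.utils.checkpoint's non-reentrant
    # variant, which avoids the "marked as ready twice" conflict with DDP.
    gradient_checkpointing_kwargs={"use_reentrant": False},
)
```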

jhyeom1545 · May 02 '24 04:05