
No grad accumulator for a saved leaf

Open hzm-January opened this issue 1 year ago • 4 comments

I get an error: ../torch/csrc/autograd/saved_variable.cpp":216, please report a bug to PyTorch. No grad accumulator for a saved leaf.

It's thrown in mamba_ssm\ops\selective_scan_interface.py, in backward(ctx, dout), at this statement:

(xz, conv1d_weight, conv1d_bias, x_dbl, x_proj_weight, delta_proj_weight,
 conv1d_out, delta, A, B, C, D, delta_bias, scan_intermediates, out) = ctx.saved_tensors


Could you please give me some hints? Thank you.
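For context, below is a minimal sketch of one mechanism that produces this exact internal assert, assuming a recent PyTorch (around 2.0); it is not a claim about the precise code path inside mamba_ssm. The idea: a custom autograd.Function saves a leaf tensor that requires grad but is not one of the Function's inputs, while saved-tensor hooks (as used by activation checkpointing or offloading) are active. Nothing in the graph then keeps the leaf's AccumulateGrad node alive, and unpacking ctx.saved_tensors in backward trips the assert. The name LeakySave and the identity hooks are made up for the illustration.

import torch

# A leaf tensor that requires grad but is never passed through Function.apply(),
# so no edge in the autograd graph holds its AccumulateGrad node alive.
w = torch.randn(4, requires_grad=True)

class LeakySave(torch.autograd.Function):  # hypothetical name, for illustration only
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(w)           # saving a "dangling" leaf
        return x * w.detach()

    @staticmethod
    def backward(ctx, grad_out):
        # On the assumed PyTorch version, unpacking here raises
        # "No grad accumulator for a saved leaf".
        (w_saved,) = ctx.saved_tensors
        return grad_out * w_saved

x = torch.randn(4, requires_grad=True)

# Saved-tensor hooks (identity hooks here) force SavedVariable onto the
# metadata-reconstruction path in unpack, which needs a live grad accumulator.
with torch.autograd.graph.saved_tensors_hooks(lambda t: t, lambda t: t):
    y = LeakySave.apply(x)

y.sum().backward()  # RuntimeError: ... No grad accumulator for a saved leaf

If your training setup registers saved-tensor hooks (non-reentrant checkpointing, CPU offloading, DeepSpeed/FSDP wrappers) and one of the tensors saved by MambaInnerFn loses its grad accumulator, you can end up with the same failure.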

hzm-January avatar Jan 24 '24 17:01 hzm-January

RuntimeError: !grad_accumulator_.expired() INTERNAL ASSERT FAILED at "../torch/csrc/autograd/saved_variable.cpp":216, please report a bug to PyTorch. No grad accumulator for a saved leaf

hzm-January avatar Jan 29 '24 11:01 hzm-January

Same question, did you solve it?

ZitengXue avatar Mar 13 '24 03:03 ZitengXue

I got the same error when using CUDA 11.7 and torch 2.0.0.

tyshiwo1 avatar Mar 17 '24 08:03 tyshiwo1

I have solved my problem. First, I cleaned up useless files in the directory where pip lives, to free up enough space. Then, I copied the ./build/ directory from another server that had compiled successfully to the server that was failing to compile. Finally, I compiled with CAUSAL_CONV1D_FORCE_BUILD=TRUE pip install --user -e . Maybe --user is necessary?

Although there is a GCC warning about an unsupported gnu_property_type, the mamba code runs.
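If it helps others verify a rebuild, here is a small sanity-check sketch. It assumes the compiled extensions use the usual module names causal_conv1d_cuda and selective_scan_cuda and that a CUDA GPU is available; it imports the extensions and runs one forward/backward pass through Mamba, which exercises the same ctx.saved_tensors path as above.

import torch
import causal_conv1d_cuda   # compiled by causal-conv1d (assumed extension name)
import selective_scan_cuda  # compiled by mamba-ssm (assumed extension name)
from mamba_ssm import Mamba

model = Mamba(d_model=16, d_state=16, d_conv=4, expand=2).cuda()
x = torch.randn(2, 8, 16, device="cuda", requires_grad=True)

y = model(x)                # forward through the fused CUDA kernels
y.sum().backward()          # exercises MambaInnerFn.backward / ctx.saved_tensors
print("forward/backward OK:", x.grad.shape)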

tyshiwo1 avatar Mar 27 '24 08:03 tyshiwo1