gsplat
add absgrad support to distributed mode
Solves https://github.com/nerfstudio-project/gsplat/issues/702.
Uses `register_hook` to `all_to_all` the absgrad across ranks. After `backward()`, the hook must be removed to avoid OOM. This is not a very elegant implementation, but it works. A cleaner alternative would be to turn `all_to_all_tensor_list` into an autograd function and perform the absgrad `all_to_all` in its backward pass; sketches of both ideas follow below.
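Below is a minimal, hypothetical sketch of the hook approach, assuming a standard `torch.distributed` setup (an `all_to_all`-capable backend such as NCCL, `absgrad` stored as an attribute on `means2d` by the rasterization backward, and a first dimension divisible by the world size). The helper name `attach_absgrad_hook` and the chunk-based layout are illustrative, not gsplat's actual code.

```python
import torch
import torch.distributed as dist

def attach_absgrad_hook(means2d: torch.Tensor) -> torch.utils.hooks.RemovableHandle:
    """Register a backward hook that exchanges absgrad across ranks.

    Hypothetical sketch: assumes the rasterization backward populates
    ``means2d.absgrad`` before this hook fires, and that the tensor's
    first dimension is divisible by the world size.
    """
    world_size = dist.get_world_size()

    def _hook(grad: torch.Tensor) -> torch.Tensor:
        # Split the locally accumulated absgrad into one chunk per rank ...
        chunks = list(means2d.absgrad.contiguous().chunk(world_size, dim=0))
        out = [torch.empty_like(c) for c in chunks]
        # ... exchange chunks so each rank receives the pieces it owns
        # (requires a backend that supports all_to_all, e.g. NCCL) ...
        dist.all_to_all(out, chunks)
        # ... and write the exchanged result back onto the tensor.
        means2d.absgrad = torch.cat(out, dim=0)
        return grad  # the regular gradient passes through unchanged

    return means2d.register_hook(_hook)

# Usage: register before backward, then remove the handle afterwards so
# the closure (and the tensors it captures) is freed and cannot cause OOM.
# handle = attach_absgrad_hook(means2d)
# loss.backward()
# handle.remove()
```

The suggested cleanup could look like the following sketch: wrap the exchange in a `torch.autograd.Function` so the reverse `all_to_all` (and the absgrad exchange) happens automatically during `backward`, with no hook to register or remove. Again, the class name and single-tensor signature are illustrative; gsplat's real `all_to_all_tensor_list` operates on lists of tensors.

```python
import torch
import torch.distributed as dist

class AllToAllChunks(torch.autograd.Function):
    """Hypothetical autograd wrapper around an all_to_all exchange."""

    @staticmethod
    def forward(ctx, tensor: torch.Tensor) -> torch.Tensor:
        world_size = dist.get_world_size()
        ctx.world_size = world_size
        chunks = list(tensor.contiguous().chunk(world_size, dim=0))
        out = [torch.empty_like(c) for c in chunks]
        dist.all_to_all(out, chunks)
        return torch.cat(out, dim=0)

    @staticmethod
    def backward(ctx, grad_out: torch.Tensor) -> torch.Tensor:
        # Route gradients back to the ranks that produced the inputs;
        # the absgrad exchange could be added here as well, removing the
        # need for a separately registered (and later removed) hook.
        chunks = list(grad_out.contiguous().chunk(ctx.world_size, dim=0))
        out = [torch.empty_like(c) for c in chunks]
        dist.all_to_all(out, chunks)
        return torch.cat(out, dim=0)
```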