When I train with multiple GPUs, Triton's `autotuner.py` raises `TypeError: 'NoneType' object is not a mapping` at `full_nargs = {**self.nargs, **kwargs, **self.best_config.kwargs}`
Thank you very much for your excellent work!
But I'm having a problem with Mamba2: when I train with multiple GPUs, Triton's `autotuner.py` raises `TypeError: 'NoneType' object is not a mapping` at the line `full_nargs = {**self.nargs, **kwargs, **self.best_config.kwargs}`;
when I train with a single GPU, however, the error does not occur.
As far as I understand, no additional configuration should be necessary when training with multiple GPUs.
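One diagnostic note: `TypeError: 'NoneType' object is not a mapping` comes from the `**` unpacking itself, and dict displays unpack left to right, so it is `self.nargs` that is `None` at that line (a `None` `self.best_config` would instead surface as an `AttributeError` on `.kwargs`). In the Triton versions I have looked at, `Autotuner.nargs` is only populated inside `run()`, which is consistent with some ranks hitting this code path without a completed tuning run. A standalone repro of just the exception, with illustrative values:

```python
# Standalone repro of the reported exception: unpacking None with **.
nargs = None            # Autotuner.nargs outside a completed run() (assumption)
kwargs = {"BLOCK": 64}  # illustrative kernel kwargs

full_nargs = {**nargs, **kwargs}
# -> TypeError: 'NoneType' object is not a mapping
```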
Has it been resolved?
I have run into the same problem. Have you solved it?
@sugardoll223 If you want to try multi-GPU distributed training, you can use Kotomamba, as the Mamba2 block likely requires a single GPU.
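For anyone who still wants to try plain DDP before switching frameworks, below is a minimal launch sketch of things worth ruling out first. It assumes, without confirmation, that the failure comes from ranks not pinning their own device or from processes racing on a shared Triton autotune cache; `build_model` is a hypothetical placeholder for however you construct the Mamba2 model.

```python
# Sketch of a plain-DDP launch (start with `torchrun --nproc_per_node=N train.py`);
# none of this is a confirmed fix for the autotuner error.
import os

# Per-rank Triton cache directory, set before Triton is imported, so that
# concurrent processes cannot race on a shared autotune cache (assumption).
local_rank = int(os.environ.get("LOCAL_RANK", "0"))  # LOCAL_RANK is set by torchrun
os.environ.setdefault("TRITON_CACHE_DIR", f"/tmp/triton_cache_rank{local_rank}")

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # Pin this process to its GPU before any Triton kernel is compiled or
    # benchmarked, so autotuning runs on the device it will execute on.
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    model = build_model().cuda()  # hypothetical model constructor

    # One warmup forward per rank so the autotuner completes a tuning run
    # (populating nargs/best_config) before DDP wraps the module (assumption).
    with torch.no_grad():
        model(torch.zeros(1, 128, dtype=torch.long, device="cuda"))

    model = DDP(model, device_ids=[local_rank])
    # ... training loop ...


if __name__ == "__main__":
    main()
```

If the warmup forward alone makes the error go away, that would point at the autotuner never completing a tuning run on the failing ranks.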
Same. Have you solved this problem?