HongCheng
> Hi @everyone, I got the same error but did not find any solution, please help me. I have installed PyGObject, but when I run the .py module it gives me...
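As a first sanity check (a minimal sketch on my part, not part of the quoted report), it can help to confirm that PyGObject is importable from the same interpreter that runs the script; the requested `Gtk 3.0` version string is an assumption and should match what the application actually uses:

```bash
# Quick check that PyGObject (the "gi" module) is visible to this Python.
# The Gtk version below is an assumption; adjust it to your application.
python -c "import gi; gi.require_version('Gtk', '3.0'); from gi.repository import Gtk; print(gi.__version__)"
```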
> When I tried to train bevformer2, I used two 3090 GPUs for training and got the error `ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local rank:...
> > When I tried to train bevformer2, I used two 3090 GPUs for training and got the error `ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local...
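For context (a hedged sketch, not taken from the thread): exit code -6 means a worker process died with SIGABRT, which in multi-GPU runs is more often a CUDA out-of-memory or shared-memory problem than a bug in the launcher. A typical two-GPU launch might look like the following; the script name and `--batch_size` flag mirror the command quoted below and the reduced value is only illustrative:

```bash
# Illustrative only: launch the training script on two GPUs with torchrun.
# Exit code -6 (SIGABRT) from torch.distributed.elastic usually means a worker
# aborted (e.g. CUDA OOM or too little shared memory), not a torchrun failure.
torchrun --nproc_per_node=2 train_nuscenes.py --batch_size=2

# If the job runs inside Docker, a small /dev/shm commonly kills DataLoader
# workers with SIGABRT; enlarging it often helps:
#   docker run --gpus all --shm-size=16g ...
```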
@Abyss-J Thank you for your reply. Could you please share how you run the `bevformernet2` code and which Python config you use? I want to check it...
> @aharley @LHY-HongyangLi When I run bevformer:
>
> ```
> python train_nuscenes.py \
>     --exp_name="bevformer" \
>     --max_iters=25000 \
>     --log_freq=1000 \
>     --dset='trainval' \
>     --batch_size=8...
> ```
Did you solve the problem? I also have it now
I think this is a good idea.
Could you please open a PR in the [dataset section](https://github.com/mini-sora/minisora#dataset_paper)?
> Could you please open a PR in the [dataset section](https://github.com/mini-sora/minisora#dataset_paper)? @mutonix
> I met the same error. I built the container based on the **nvcr.io/nvidia/pytorch:24.04-py3** docker image and installed **xformers** from source to keep the torch version. (Otherwise, it made a torch...
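A minimal sketch of that setup (the exact commands are my assumption, not from the quoted comment): start from the NGC image and build xformers from source with dependency resolution disabled, so pip does not replace the torch build that ships with the image.

```bash
# Sketch only: reproduce the described setup with the NGC PyTorch image.
docker run --gpus all -it --shm-size=16g nvcr.io/nvidia/pytorch:24.04-py3 bash

# Inside the container: build xformers from source. --no-deps keeps pip from
# pulling a different torch wheel and breaking the preinstalled build.
git clone --recursive https://github.com/facebookresearch/xformers.git
cd xformers
pip install -v --no-build-isolation --no-deps .
```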