why228430

Results: 10 comments of why228430

Have you tested successfully on CUDA 11.0, @hphuongdhsp?

Thanks for your reply! I have updated to PyTorch==1.1, CUDA=10.0, and the above problem is solved, but I meet a new problem: inside_flags4 = torch.tensor(valid_flags, dtype=torch.uint8) THCudaCheck FAIL file=src/riroi_align_kernel.cu line=389 error=7 : too...

Thank you so much for answering my question at night! I did as you said, but the problem still exists! My GPU is a 2080 Ti with PyTorch=1.1.0, CUDA=10.0. When...

First I updated the demo, then I deleted all *.so files and re-compiled the ops. Finally I used the recommended "python setup.py develop"; the "polyiou" issue is solved, but the "RuntimeError:...

I see from the issues that someone has run the code, but I also meet the "RuntimeError: cuda runtime error (7) : too many resources requested for launch at src/riroi_align_kernel.cu:389"...

After changing THREADS_PER_BLOCK, you should re-compile with "bash compile.sh"!
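
For context, a minimal sketch of the kind of change meant here, assuming THREADS_PER_BLOCK is a compile-time macro in src/riroi_align_kernel.cu (the names and values below are illustrative, not the exact upstream source). Lowering the block size reduces the per-block register demand, which is what "cuda runtime error (7): too many resources requested for launch" complains about on some GPUs such as the 2080 Ti:

```cuda
// Hypothetical excerpt of src/riroi_align_kernel.cu -- illustrative only.
// Lowering THREADS_PER_BLOCK reduces per-block resource usage, which can
// avoid "cuda runtime error (7): too many resources requested for launch".

// #define THREADS_PER_BLOCK 1024   // original value (assumed)
#define THREADS_PER_BLOCK 512       // reduced value to try on a 2080 Ti

// Typical helper that derives the grid size from the element count.
inline int GET_BLOCKS(const int N) {
  return (N + THREADS_PER_BLOCK - 1) / THREADS_PER_BLOCK;
}

// A kernel launched with this configuration would then look like:
//   some_kernel<<<GET_BLOCKS(output_size), THREADS_PER_BLOCK>>>(...);
```

After editing the value, the affected ops have to be rebuilt (e.g. "bash compile.sh" or "python setup.py develop") so the new constant is baked into the compiled .so files; otherwise the old kernels keep running.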

I don't use Docker! I am not sure if it is an environment problem on your side. My environment is a 2080 Ti with PyTorch=1.1.0, CUDA=10.0; on my platform the project works fine.

{"mode": "train", "epoch": 1, "iter": 100, "lr": 0.00465, "time": 1.45851, "data_time": 0.21686, "memory": 4806, "loss_rpn_cls": 0.29471, "loss_rpn_bbox": 0.11749, "s0.rbbox_loss_cls": 0.48488, "s0.rbbox_acc": 88.70508, "s0.rbbox_loss_bbox": 0.71225, "s1.rbbox_loss_cls": 0.32175, "s1.rbbox_acc": 92.81233, "s1.rbbox_loss_bbox": 0.22031,...

Thanks for your help! I have solved the problem.

Did you meet this error? 2020-10-17 10:25:11,392 - mmdet - INFO - Saving checkpoint at 1 epochs completed: 0, elapsed: 0s Traceback (most recent call last): File "tools/train.py", line 161, in...