afm_cvpr2019
ValueError: num_samples should be a positive integer value, but got num_samples=0
Has anyone else met this issue?
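For reference, this error usually comes from PyTorch's RandomSampler when the dataset ends up with length 0 (for example, if the cached dataset was never built). A minimal sketch, not the repository's code, that reproduces the same message:

```python
from torch.utils.data import Dataset, DataLoader

class EmptyDataset(Dataset):
    """Stands in for a dataset whose cache was never built."""
    def __len__(self):
        return 0
    def __getitem__(self, idx):
        raise IndexError(idx)

# Raises at construction time:
#   ValueError: num_samples should be a positive integer value, but got num_samples=0
loader = DataLoader(EmptyDataset(), batch_size=1, shuffle=True)
```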
@tianyu0523 Did you get a segmentation fault the previous time you ran that command? That is happening for me, and I traced it to the cache not being built because something in the lib.afm_op custom code is failing and segfaulting on my AWS EC2 instance. So if you comment out lines 48-53 in dataset/cache.py, maybe you'll also see the same segmentation fault that I'm seeing?
I'm wondering if the custom CUDA operator is somehow tied to CUDA version 9.0, because the AWS EC2 instance is running CUDA version 10.1...
> Has anyone else met this issue?
I guess this problem is caused by a stale cache directory. Please try cleaning out the data/.cache directory and running the command again.
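If it helps, a small sketch of that cleanup step (assuming the cache lives at data/.cache relative to the repository root):

```python
import shutil
from pathlib import Path

# Assumed cache location relative to the repository root.
cache_dir = Path("data/.cache")

# Remove the stale cache so it is rebuilt on the next run of train.py.
if cache_dir.exists():
    shutil.rmtree(cache_dir)
```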
> I'm wondering if the custom CUDA operator is somehow tied to CUDA version 9.0, because the AWS EC2 instance is running CUDA version 10.1...
I tested my code on several machines with CUDA 9.0, 9.2, and 10.0; there is no incompatibility issue.
@cherubicXN thank you for replying!
I have a local GPU machine running Ubuntu 18.04, Python 3.6, and CUDA 9.2, and I have gotten your code working there, but I still can't get it to run in the pytorch_p36 environment on the AWS EC2 GPU machines (p3.2xlarge, with Python 3.6, CUDA 10.1). For me it fails with a segmentation fault (!) at line 97 in cache.py, where the call into the custom code is made:
afmap, label = afm(lines.cuda(),shape_info.cuda(), self.afm_res[0],self.afm_res[1])
If you're able to provide any input on this, so that I can run it on the new AWS EC2 instances, that'd be great!
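In case it helps to narrow this down, here is the kind of isolation test I would try outside the data pipeline. The import path, tensor shapes, dtypes, and the shape_info layout below are my guesses, not the repo's documented interface, so adjust them to whatever cache.py actually passes:

```python
import torch
# Import path guessed from the lib.afm_op module mentioned above; adjust if the
# repo exposes the operator under a different name.
from lib.afm_op import afm

# Hypothetical dummy inputs -- shapes, dtypes, and the shape_info layout are
# assumptions, not the repo's documented interface.
lines = torch.tensor([[10.0, 10.0, 100.0, 120.0]])               # one segment: (x1, y1, x2, y2)
shape_info = torch.tensor([[0, 1, 128, 128]], dtype=torch.int32)  # per-image record (layout guessed)

# Same call pattern as line 97 of dataset/cache.py. If this alone segfaults,
# the problem is in the compiled afm_op extension, not the data pipeline.
afmap, label = afm(lines.cuda(), shape_info.cuda(), 128, 128)
print(afmap.shape, label.shape)
```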
> @tianyu0523 Did you get a segmentation fault the previous time you ran that command? That is happening for me, and I traced it to the cache not being built because something in the lib.afm_op custom code is failing and segfaulting on my AWS EC2 instance. So if you comment out lines 48-53 in dataset/cache.py, maybe you'll also see the same segmentation fault that I'm seeing?
Yes, I met the same 'Segmentation fault (core dumped)' issue when I ran the code on a remote machine.
Are you using an AWS EC2 instance running CUDA 10.1 with Python 3.6? If yes, then that version of CUDA may be the problem; I just don't know how to fix it!
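One thing that might be worth checking (just a guess, not a confirmed fix) is whether the toolkit used to build the custom extension matches the CUDA runtime that PyTorch itself was built against:

```python
import torch
from torch.utils.cpp_extension import CUDA_HOME

# CUDA runtime PyTorch was built against vs. the toolkit nvcc would use to
# compile the custom afm_op extension; a mismatch here can lead to crashes
# like the segmentation fault described above.
print("torch.version.cuda:", torch.version.cuda)
print("CUDA_HOME:", CUDA_HOME)
print("device capability:", torch.cuda.get_device_capability(0))
```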
I also encountered this problem. The solution: delete the .cache folder under data/wireframe, and then run python train.py again.
@cherubicXN I need help in knowing whether anyone has trained the model for category-specific objects (introducing a new object category). I need reasonably precise 2D contour detection that will allow me to overlay a 3D wireframe on top of it. Any help will be appreciated.
I changed 'shuffle' to False on line 17 of build.py and it works.
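Note that this probably masks the original problem rather than fixing it: with shuffle=False the DataLoader falls back to a SequentialSampler, which accepts an empty dataset but then yields zero batches, so nothing is actually trained. A quick check, assuming `dataset` is the training set built from the cache:

```python
from torch.utils.data import DataLoader

# With shuffle=False, SequentialSampler does not complain about an empty dataset,
# but no batches are produced either.
loader = DataLoader(dataset, batch_size=1, shuffle=False)
print(len(dataset), len(loader))  # both 0 if the cache was never built
```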