RandLA-Net
Problems with running main_Semantic3D.py
Hi, I managed to prepare the data and wanted to train the network with main_Semantic3D.py, but I get this error:
ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject
Is this a problem with numpy? My version is 1.16.0. Should I upgrade or downgrade it, perhaps? Thanks!
Hi @Esteban-25. Thanks for your interest in our work.
Have you followed the instructions for our method? That is, did you set up the environment, install the required packages, and compile the custom operators, as shown here?
Yes, I followed everything as specified in the repository, and everything worked up until training. I installed tensorflow-gpu=1.11 in the environment. Could it be that some packages for Python 3.5 are no longer compatible with this version of TensorFlow?
Sorry, I just noticed that I had missed this issue. Have you already fixed the problem? If not, I would suggest upgrading the numpy version, e.g.,
pip install numpy==1.16.1
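For context, this ValueError is typically raised at import time when a compiled extension (for example the repository's Cython-built preprocessing ops, or tensorflow itself) was built against a different numpy ABI than the one installed. As a minimal sanity check after the upgrade, assuming the RandLA-Net conda environment is active:

python -c "import numpy; print(numpy.__version__)"            # should now print 1.16.1
python -c "import tensorflow as tf; print(tf.__version__)"    # still raises the ufunc error if the ABI mismatch persists

If the error persists after the upgrade, recompiling the repository's custom operators against the new numpy (the compile step referenced in the setup instructions) may also be needed.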
Hi, I have a problem running main_Semantic3D.py. I followed the instructions as suggested. However, each time I run the routine, I see this error message, and I do not know why it happens:
python main_Semantic3D.py --mode train --gpu 0
Load_pc_0: bildstein_station1_xyz_intensity_rgb
Traceback (most recent call last):
File "main_Semantic3D.py", line 349, in
I would appreciate it if you could give me a hint on how to solve this issue. P.S. I am using your own dataset, just to test the routine for the first time.
The same occurs for me. After data preparation, I installed tensorflow-gpu with conda install tensorflow-gpu, and the error appears during training.
Could you please share the conda environment details or a requirements.txt for training?
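In the meantime, here is a hedged sketch of an environment based only on the versions mentioned in this thread (Python 3.5, tensorflow-gpu 1.11, numpy 1.16.1); it is not an official requirements list from the authors, and the helper_requirements.txt / compile_op.sh names assume the files the repository's setup instructions refer to:

# sketch only, using the versions named in this thread
conda create -n randlanet python=3.5
conda activate randlanet
pip install tensorflow-gpu==1.11.0
pip install numpy==1.16.1
pip install -r helper_requirements.txt   # remaining dependencies from the repo
sh compile_op.sh                         # rebuild the custom ops against this numpy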