Buffer dtype mismatch in nearest_neighbors calculation on Win
Hi! Working on Win 10, Python 3.6.12
I'm trying to do a simple test run for ModelNet40 with the following code added at the end of the networks\network_classif.py file:
```python
if __name__ == "__main__":
    torch.manual_seed(125)
    positions = torch.arange(1, 37, dtype=torch.float)
    features_batch = positions.view(2, -1, 3)
    print('In batch shape: {}'.format(features_batch.shape))
    net = ModelNet40(1, 5)
    print(net(features_batch, features_batch))
```
It crashes with an exception in the kNN code:
```
Traceback (most recent call last):
  File "d:\MyDocs\GigaKorea\Poin Cloud NNs\ConvPoint\networks\network_classif.py", line 75, in <module>
    print(net(features_batch, features_batch))
  File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "d:\MyDocs\GigaKorea\Poin Cloud NNs\ConvPoint\networks\network_classif.py", line 41, in forward
    x1, pts1 = self.cv1(x, input_pts, 32, 1024)
  File "C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "d:\MyDocs\GigaKorea\Poin Cloud NNs\ConvPoint\convpoint\nn\conv.py", line 50, in forward
    indices, next_pts_ = self.indices_conv_reduction(points, K * dilation, next_pts)
  File "d:\MyDocs\GigaKorea\Poin Cloud NNs\ConvPoint\convpoint\nn\layer_base.py", line 16, in indices_conv_reduction
    indices, queries = nearest_neighbors.knn_batch_distance_pick(input_pts.cpu().detach().numpy(), npts, K, omp=True)
  File "knn.pyx", line 138, in nearest_neighbors.knn_batch_distance_pick
ValueError: Buffer dtype mismatch, expected 'int64_t' but got 'long'
```
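This kind of mismatch is a common Windows pitfall: the C `long` that NumPy's default integer dtype maps to is only 32 bits wide on Windows, while the Cython buffer in knn.pyx is declared as `int64_t`. A minimal sketch of a possible workaround, casting integer arrays explicitly before they reach the compiled extension (the `to_int64` helper name is my own, not part of ConvPoint):

```python
import numpy as np

def to_int64(a):
    # The platform-default integer dtype is int32 on Windows (C "long")
    # but int64 on Linux; forcing int64 matches the int64_t buffer that
    # the compiled knn extension expects.
    return np.asarray(a).astype(np.int64, copy=False)

indices = to_int64(range(5))
print(indices.dtype)  # int64 on every platform
```

Applying the same cast to the integer arrays passed into `nearest_neighbors.knn_batch_distance_pick` (or rebuilding the extension with a platform-independent dtype) is the usual fix for this class of error.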
Switching to the legacy layer_base does not give an easy workaround, unfortunately. In that case, I get multiprocessing errors:
```
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "C:\ProgramData\Anaconda3\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\ProgramData\Anaconda3\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "d:\MyDocs\GigaKorea\Poin Cloud NNs\ConvPoint\networks\network_classif.py", line 8, in <module>
    from convpoint.nn import PtConv
  File "d:\MyDocs\GigaKorea\Poin Cloud NNs\ConvPoint\convpoint\nn\__init__.py", line 1, in <module>
    from .conv import PtConv
  File "d:\MyDocs\GigaKorea\Poin Cloud NNs\ConvPoint\convpoint\nn\conv.py", line 9, in <module>
    from .legacy.layer_base import LayerBase
  File "d:\MyDocs\GigaKorea\Poin Cloud NNs\ConvPoint\convpoint\nn\legacy\layer_base.py", line 53, in <module>
    class LayerBase(nn.Module):
  File "d:\MyDocs\GigaKorea\Poin Cloud NNs\ConvPoint\convpoint\nn\legacy\layer_base.py", line 56, in LayerBase
    pool = mp.Pool(16)
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 119, in Pool
    context=self.get_context())
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py", line 174, in __init__
    self._repopulate_pool()
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py", line 239, in _repopulate_pool
    w.start()
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "C:\ProgramData\Anaconda3\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
```
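For reference, this error is expected whenever a pool is created at module level under the spawn start method (the only one available on Windows): each worker re-imports the main module, so the `pool = mp.Pool(16)` inside the legacy layer_base.py class body fires again during bootstrap. The guard idiom the message refers to looks like this (a minimal standalone sketch, not ConvPoint code):

```python
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    # Guarding pool creation keeps re-imported workers from trying to
    # spawn their own pools during the bootstrap phase.
    with mp.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
```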
Hello, sorry for the inconvenience. The code was only tested on Ubuntu Linux, so I am not sure the C++ part is compatible with Windows 10. As for the legacy kNN, I am not sure it still works (a lot of modifications have been made since I last used it). If you set the number of workers in the dataloader to 0, you might get a more informative error, as multiprocessing would not be used.
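If it helps, that suggestion amounts to something like the following (the `TensorDataset` here is a stand-in for illustration, not the actual ModelNet40 loader):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# With num_workers=0 all batches are produced in the main process, so
# no worker subprocess is spawned and any underlying exception surfaces
# with its full traceback instead of a multiprocessing error.
data = TensorDataset(torch.arange(12, dtype=torch.float).view(4, 3))
loader = DataLoader(data, batch_size=2, num_workers=0)
print(len(list(loader)))  # 2 batches of 2 samples each
```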
Note: I am currently moving to LightConvPoint, which should be easier to use and faster. If you do not need the pre-trained models, you could give it a try.