Could you share the preprocessing code?

Open spk921 opened this issue 5 years ago • 9 comments

I would like to try running the ModelNet40 classification task, but I'm having trouble finding the data-preprocessing code. Could you share it? Thank you.

spk921 avatar Jul 02 '19 19:07 spk921

Hi,

The preprocessing you need is the Laplacian and Dirac matrices for every mesh. A Python version is implemented here, and I suggest trying the Laplacian version first since it's faster: https://github.com/jiangzhongshi/SurfaceNetworks/blob/master/src/utils/mesh.py#L114
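For reference, a minimal cotangent Laplacian can be sketched in plain NumPy/SciPy. This is an illustrative, unoptimized version of the operator, not the repo's implementation (see the `mesh.py` link above for that); the function name and loop structure here are my own:

```python
import numpy as np
import scipy.sparse as sp

def cotangent_laplacian(V, F):
    """Cotangent Laplacian L (|V| x |V|) of a triangle mesh.

    V: (n, 3) float array of vertex positions.
    F: (m, 3) int array of triangle vertex indices.
    Off-diagonal entries are 0.5 * cot(angle opposite the edge),
    diagonal entries make each row sum to zero.
    """
    n = V.shape[0]
    rows, cols, vals = [], [], []
    for tri in F:
        for k in range(3):
            # Edge (i, j), with o the vertex opposite that edge.
            i, j, o = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
            u = V[i] - V[o]
            w = V[j] - V[o]
            # cot of the angle at o = dot / |cross|.
            cot = np.dot(u, w) / np.linalg.norm(np.cross(u, w))
            rows += [i, j, i, j]
            cols += [j, i, i, j]
            vals += [0.5 * cot, 0.5 * cot, -0.5 * cot, -0.5 * cot]
    return sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
```

The result is symmetric with zero row sums, matching the usual cotangent-weight convention (negative diagonal, positive off-diagonal for well-shaped meshes).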

Also, if you are familiar with C++/Python bindings, you can use the more efficient ones from libigl: https://github.com/jiangzhongshi/libigl/blob/pyigl/python/py_igl/py_cotmatrix.cpp for the Laplacian and https://github.com/jiangzhongshi/libigl/blob/pyigl/python/py_igl/py_dirac_operator.cpp for the Dirac operator.

jiangzhongshi avatar Jul 02 '19 20:07 jiangzhongshi

On a side note, some meshes in ModelNet40 may contain degenerate triangles, etc., and in that case the Laplacian will contain NaN entries. You can then try the intrinsic Laplacian as the operator instead: https://github.com/libigl/libigl/blob/master/include/igl/cotmatrix_intrinsic.h
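Before falling back to the intrinsic Laplacian, it can help to screen meshes up front. A small sketch (helper names are my own, not from the repo) that flags near-zero-area faces and checks a sparse Laplacian for non-finite entries:

```python
import numpy as np

def has_degenerate_triangles(V, F, eps=1e-12):
    """True if any face has (near-)zero area, which makes the
    cotangent weights blow up or become NaN."""
    e1 = V[F[:, 1]] - V[F[:, 0]]
    e2 = V[F[:, 2]] - V[F[:, 0]]
    areas = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)
    return bool((areas < eps).any())

def laplacian_is_finite(L):
    """True if a scipy sparse Laplacian has no NaN/Inf entries."""
    return bool(np.isfinite(L.tocoo().data).all())
```

Meshes that fail either check are the ones where switching to the intrinsic Laplacian (or repairing the mesh) is worth trying.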

jiangzhongshi avatar Jul 02 '19 20:07 jiangzhongshi

When I run add_laplacian.py, an error pops up at `train_data = np.load(...)`: `_pickle.UnpicklingError: invalid load key, ...`. Is this related to the issue above?

SimonPig avatar Nov 05 '19 03:11 SimonPig

@SimonPig No, I think it is most likely a version incompatibility between Python 2 and 3. In np.load, supply the additional parameter encoding='latin1' and see if that helps.
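Concretely, loading a pickle written under Python 2 from Python 3 needs both the encoding switch and, on NumPy >= 1.16.3, `allow_pickle=True` for object arrays. A small wrapper (the function name is mine, just for illustration):

```python
import numpy as np

def load_legacy_npy(path):
    """Load a .npy file that may have been pickled under Python 2.

    allow_pickle=True is required for object arrays on recent NumPy,
    and encoding='latin1' decodes Python 2 str bytes without errors.
    """
    return np.load(path, allow_pickle=True, encoding='latin1')
```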

jiangzhongshi avatar Nov 05 '19 12:11 jiangzhongshi

@jiangzhongshi I've re-produced train.npy with create_data.py, but my train.npy is 4.5 GB, which is too big for np.load() in add_laplacian (EOFError: Ran out of input). I don't understand why yours is only 1 GB? :(

SimonPig avatar Nov 07 '19 08:11 SimonPig

@SimonPig I am really not familiar with the storage scheme, but a solution for the size is to save different sequences to different .npy files; I believe we did that for as_rigid_as_possible.
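One way to sketch that split (helper names and the zero-padded naming scheme are my own, not from the repo): write each sequence to its own .npy file and iterate over them lazily, so no single load has to fit the whole dataset in memory.

```python
import os
import numpy as np

def save_sequences(sequences, out_dir):
    """Write each sequence to its own .npy file instead of one huge file."""
    os.makedirs(out_dir, exist_ok=True)
    for idx, seq in enumerate(sequences):
        np.save(os.path.join(out_dir, '%06d.npy' % idx), seq)

def iter_sequences(out_dir):
    """Lazily yield sequences one file at a time to bound memory use."""
    for name in sorted(os.listdir(out_dir)):
        if name.endswith('.npy'):
            yield np.load(os.path.join(out_dir, name), allow_pickle=True)
```

Sorting the zero-padded filenames keeps the original sequence order when reading back.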

jiangzhongshi avatar Nov 08 '19 01:11 jiangzhongshi

@jiangzhongshi Perhaps a module called 'seism' is missing? It's supposed to be imported in utils/mesh.py, line 138.

SimonPig avatar Nov 08 '19 03:11 SimonPig

@jiangzhongshi No problem, thank you for keeping up with the replies ;) So you will upload it later?

SimonPig avatar Nov 08 '19 06:11 SimonPig