Zi Jian Yew

83 comments by Zi Jian Yew

Hi, both use the Open3D implementation.

- ICP: max_correspondence_distance=0.5, PointToPoint registration.
- FGR: FPFH using a radius of 0.3 and max_nn of 128; maximum correspondence distance in FGR set to 0.01.

All...
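For reference, a minimal Open3D sketch with these settings (not the exact benchmark script; it assumes a recent Open3D, where these functions live under `o3d.pipelines.registration` — older 0.8.x releases expose the same calls under `o3d.registration`):

```python
# Sketch only: ICP and FGR through Open3D with the settings above.
import numpy as np
import open3d as o3d

reg = o3d.pipelines.registration

def run_icp(source, target):
    # Point-to-point ICP, max correspondence distance 0.5.
    return reg.registration_icp(
        source, target, max_correspondence_distance=0.5,
        init=np.identity(4),
        estimation_method=reg.TransformationEstimationPointToPoint())

def run_fgr(source, target):
    # FPFH features (radius 0.3, max_nn 128); the clouds need normals for FPFH.
    search = o3d.geometry.KDTreeSearchParamHybrid(radius=0.3, max_nn=128)
    src_fpfh = reg.compute_fpfh_feature(source, search)
    tgt_fpfh = reg.compute_fpfh_feature(target, search)
    # FGR with maximum correspondence distance 0.01.
    option = reg.FastGlobalRegistrationOption(maximum_correspondence_distance=0.01)
    return reg.registration_fgr_based_on_feature_matching(
        source, target, src_fpfh, tgt_fpfh, option)
```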

FGR operates on points directly and does not have a voxel size parameter. If you are looking at the example code (e.g. [here](http://www.open3d.org/docs/0.8.0/tutorial/Advanced/fast_global_registration.html)), the voxel size is only used...
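Concretely, the preprocessing in that tutorial looks roughly like the sketch below (names and radii are illustrative, loosely following the tutorial): `voxel_size` only affects the downsampling and the normal/feature radii, not FGR itself.

```python
# Illustrative sketch of the tutorial-style preprocessing: voxel_size is only
# used to downsample the clouds and to scale the normal/feature radii; FGR
# itself has no voxel-size parameter.
import open3d as o3d

def preprocess(pcd, voxel_size):
    pcd_down = pcd.voxel_down_sample(voxel_size)  # the only place voxel_size matters
    pcd_down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pcd_down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 5, max_nn=100))
    return pcd_down, fpfh
```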

I modified the other paper's code to train on the partial visibility scenarios, and performed inference on them. It shouldn't be too hard to do so; you mainly need to...

Are you able to get similar results under the clean scenario? My clean scenario is the same as the unseen categories setting in Deep Closest Point.

You can use my code to compute those measures, e.g. by saving the transforms into a numpy file and calling eval.py with `--transform_file your_numpy_file.npy`. The code to compute the metrics...
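As a rough sketch of that workflow (the array shape eval.py expects is an assumption here; confirm it against the loading code in eval.py):

```python
# Hypothetical sketch: save your estimated transforms, then score them with
# eval.py. The (num_pairs, 3, 4) [R|t] shape is an assumption -- check the
# loading code in eval.py to confirm.
import numpy as np

# Replace these identity placeholders with the transforms from your own method.
estimated = [np.hstack([np.eye(3), np.zeros((3, 1))]) for _ in range(10)]
pred_transforms = np.stack(estimated)            # shape (num_pairs, 3, 4)
np.save('your_numpy_file.npy', pred_transforms)

# Then evaluate with:
#   python eval.py --transform_file your_numpy_file.npy
```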

Hi, my suggestion is to look at the evaluation code (eval.py) and modify it to accept your point clouds.

Hi, you'll need to set it to the pretrained checkpoint. Follow the instructions in the README.

@wuchengzenb this is indeed weird. Do the other settings (noise/partial) work ok?

I don’t know why your performance is worse, to be honest. You should be able to get the reported results without any change in the arguments. The network is trained...

The network already outputs the rigid transformation, e.g. see [this line](https://github.com/yewzijian/RPMNet/blob/b1c9ee0290a4f1f6515b22d496f8e1768c661c91/src/eval.py#L157) in the inference code. The provided weights are trained on XYZ+normals, so you'll need normals for the point clouds....
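If your data has no normals, one possible way to add them (an assumption, not the repo's own preprocessing) is Open3D's normal estimation:

```python
# Sketch only: estimate normals so each point becomes XYZ + normal (6D input).
# The radius/max_nn values are illustrative, not the values used for the
# provided weights.
import numpy as np
import open3d as o3d

def with_normals(xyz, radius=0.1, max_nn=30):
    """xyz: (N, 3) array -> (N, 6) array of XYZ followed by estimated normals."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=radius, max_nn=max_nn))
    return np.concatenate([xyz, np.asarray(pcd.normals)], axis=1)
```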