yaxu
Yes, if you want to train the network on a new dataset, you can simply reimplement the function that reads the camera intrinsics: https://github.com/EryiXie/PlaneRecNet/blob/cea1e8b1edf054f59e15891fa799cdcc5feda72d/data/datasets.py#L174 But for inference on arbitrary data...
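As a rough sketch of what such a reimplementation could look like: the snippet below parses a 4x4 camera matrix stored as whitespace-separated text, one row per line (the layout ScanNet uses for its intrinsics files). The file path and layout are assumptions about your own dataset, not the repo's actual code.

```python
import numpy as np

def read_intrinsics(path):
    """Parse a 4x4 camera matrix stored as plain text (one row per
    line, whitespace-separated) and return fx, fy, cx, cy.
    NOTE: the file layout is an assumption modelled on ScanNet's
    *_intrinsic files; adapt it to your dataset."""
    K = np.loadtxt(path, dtype=np.float64)
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return fx, fy, cx, cy
```

You would then call this inside your dataset class instead of the hard-coded ScanNet reader.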
The easiest way will be to use [open3d](http://www.open3d.org/docs/release/getting_started.html) to convert the depth image into a colored point cloud. I will need a similar script later on anyway, so I...
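For reference, the pinhole back-projection that Open3D performs under the hood can be sketched in plain NumPy. The intrinsics and depth scale below are placeholders, not values from this repo:

```python
import numpy as np

def depth_to_pointcloud(depth, rgb, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project a depth image into an N x 6 array of XYZRGB points
    using the pinhole camera model. depth is assumed uint16 with
    depth_scale units per metre; rgb is an HxWx3 uint8 image.
    This mirrors the math behind Open3D's RGBD-to-point-cloud path."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) / depth_scale
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    cols = rgb.reshape(-1, 3).astype(np.float64) / 255.0
    valid = pts[:, 2] > 0          # drop pixels with no depth reading
    return np.hstack([pts[valid], cols[valid]])
```

With Open3D itself you would instead build an `RGBDImage` and call its point-cloud constructor, which also handles visualization.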
Now I feel very guilty that I promised something and forgot to do it. I am sorry about that... But yes, please share the code. I think there must be...
> I solved this problem by reorganizing the PyTorch and CUDA versions and installing some modules by hand. Hope that's helpful :)

Great to know that you have already solved...
Yes, I filtered out very small plane areas and did not include them in the annotation file. In the implementation of PlaneRCNN, small planes are filtered out while reading the masks...
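A minimal sketch of that kind of filter, with a hypothetical area threshold (the actual cutoff used for the annotations is not stated here):

```python
import numpy as np

def filter_small_planes(masks, min_area=500):
    """Keep only plane masks with at least min_area pixels.
    masks: list of boolean HxW arrays, one per plane instance.
    min_area is a placeholder threshold, not the paper's value."""
    return [m for m in masks if int(m.sum()) >= min_area]
```

Applying this while building (or reading) the annotations keeps tiny, unreliable plane segments out of training.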
Ahhh... I am so sorry that I forgot to reply to you. I just tested again, and I got the same result as reported in the paper. To output .mat format...
Please try to compile and use the C++ SensReader, which is much faster than the Python one. Another tip: be aware of the imageio version you install. The correct version is (==1.6), as reported...
Hi, when trained on the ScanNet dataset, the depth shift is 1000, so the per-pixel depth value is in "mm". However, many aspects limit the depth prediction from a simple...
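Concretely, the depth shift of 1000 means a raw uint16 value of 1500 corresponds to 1.5 m. A one-line conversion, as a sketch:

```python
import numpy as np

def scannet_depth_to_metres(raw_depth):
    """Convert ScanNet-style raw depth (uint16, depth shift 1000,
    i.e. one unit per millimetre) into metres as float64."""
    return raw_depth.astype(np.float64) / 1000.0
```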