
Problem with the subsampling of superpoints for GRU training

ruomingzhai opened this issue 3 years ago · 5 comments

I have a question about the subgraph. Your paper says the total number of superpoints in each batch is subsampled to 512 (derived from --args.hardcutoff). I am not sure whether this happens before or after the embedding (i.e. in the graph convolution). When I print the size of the embedding output with batch size 2, I get a tensor of size (1174, 32), slightly larger than 512 * 2 = 1024. Can you explain this, and point me to where exactly the subsampling code is? It has bothered me for a long time. Thanks!

ruomingzhai · Apr 18, 2021

Is this during training or inference?

loicland · Apr 19, 2021

Is this during training or inference?

During training

ruomingzhai · Apr 19, 2021

Ah yes. The hard cutoff is the maximum number of "valid" superpoints to embed with the CloudEmbedder. A superpoint is valid if it has more than args.ptn_minpts points. Superpoints with too few points are assigned an all-zero embedding at no computational cost.

In your batch you have 1024 superpoints with more than 40 points, and 150 with fewer than 40 points.

See the following lines for the details: /learning/spg.py#L143, /learning/spg.py#L123, /learning/pointnet.py#L149
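For illustration, here is a minimal, self-contained sketch of this cutoff-plus-zero-embedding bookkeeping. The function and parameter names (`embed_superpoints`, `embed_fn`, `emb_dim`) are hypothetical, and the embedder is a toy; the real logic lives in the files linked above. `min_pts=40` and `hard_cutoff=512` mirror the numbers from this thread.

```python
import torch

def embed_superpoints(superpoints, embed_fn, emb_dim=32, min_pts=40, hard_cutoff=512):
    # Superpoints with at least `min_pts` points are "valid"; at most
    # `hard_cutoff` of them are actually embedded.
    valid = [sp for sp in superpoints if sp.shape[0] >= min_pts][:hard_cutoff]
    small = [sp for sp in superpoints if sp.shape[0] < min_pts]
    # Embed only the valid superpoints.
    emb_valid = torch.stack([embed_fn(sp) for sp in valid])  # (n_valid, emb_dim)
    # Tiny superpoints get an all-zero embedding at no computational cost.
    emb_small = torch.zeros(len(small), emb_dim)             # (n_small, emb_dim)
    # Total rows = n_valid + n_small, e.g. 2 * 512 + 150 = 1174 in the batch above.
    return torch.cat([emb_valid, emb_small], dim=0)

# Toy usage: four random "superpoints" of 3D points, with a
# linear-projection-plus-mean-pool stand-in for the embedder.
proj = torch.nn.Linear(3, 32)
embed_fn = lambda sp: proj(sp).mean(dim=0)
sps = [torch.randn(n, 3) for n in (100, 80, 10, 5)]
print(embed_superpoints(sps, embed_fn).shape)  # torch.Size([4, 32])
```

This explains the (1174, 32) shape: the first dimension counts embedded plus zero-filled superpoints, not just the hard-cutoff budget.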

loicland · Apr 19, 2021

Ah yes. The hard cutoff is the maximum number of "valid" superpoints to embed with the CloudEmbedder. A superpoint is valid if it has more than args.ptn_minpts points. Superpoints with too few points are assigned an all-zero embedding at no computational cost.

In your batch you have 1024 superpoints with more than 40 points, and 150 with fewer than 40 points.

See the following lines for the details: /learning/spg.py#L143, /learning/spg.py#L123, /learning/pointnet.py#L149

Oh, I see. The reason I care about the shape of the embedding is that I want to feed a fixed number of superpoint embeddings (1024) into a model of my own design. Is there any way to get exactly 1024 superpoints, i.e. apply only the ptn_hardcutoff rule?

ruomingzhai · Apr 21, 2021

Replacing:

G = k_big_enough(G, args.ptn_minpts, args.spg_augm_hardcutoff)

by

G = G.subgraph(range(args.spg_augm_hardcutoff))

should work.
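One caveat worth noting (an assumption on my part, not from the thread): igraph's `subgraph(range(k))` raises an error when the graph has fewer than k vertices, so a small guard may be needed. A hedged sketch, assuming G is a python-igraph graph as used in spg.py and `truncate_spg` is a hypothetical helper name:

```python
import igraph

def truncate_spg(G: igraph.Graph, hard_cutoff: int) -> igraph.Graph:
    # Keep at most `hard_cutoff` superpoints (vertices). The guard is an
    # assumption: subgraph(range(k)) fails if G has fewer than k vertices.
    if G.vcount() <= hard_cutoff:
        return G
    return G.subgraph(range(hard_cutoff))

# Toy usage on a random 10-vertex graph.
g = igraph.Graph.Erdos_Renyi(n=10, p=0.3)
print(truncate_spg(g, 4).vcount())   # 4
print(truncate_spg(g, 64).vcount())  # 10 (unchanged)
```

Note that this only guarantees a fixed 1024 rows per batch of 2 when every cloud actually has at least args.spg_augm_hardcutoff superpoints; smaller clouds would still need padding on your side.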

loicland · Apr 21, 2021

Hi!

We are releasing a new version of SuperPoint Graph called SuperPoint Transformer (SPT). It is better in every way:

✨ SPT in numbers ✨
📊 SOTA results: 76.0 mIoU S3DIS 6-Fold, 63.5 mIoU on KITTI-360 Val, 79.6 mIoU on DALES
🦋 212k parameters only!
⚡ Trains on S3DIS in 3h on 1 GPU
Preprocessing is 7× faster than SPG!
🚀 Easy install (no more boost!)

If you are interested in lightweight, high-performance 3D deep learning, you should check it out. In the meantime, we will finally retire SPG and stop maintaining this repo.

loicland · Jun 16, 2023