torchsparse
About aggregation and interpolation
Hello Zhijian,
- I want to do aggregation and interpolation on sparse voxels. Here is my code:
```python
from torchsparse import SparseTensor, PointTensor
# initial_voxelize / voxel_to_point are the point-voxel helpers from the SPVNAS codebase

# aggregate + upsampling
def aggregate(x: SparseTensor, voxel_size: int):
    z = PointTensor(x.F, x.C.float())
    x0 = initial_voxelize(z, 1, voxel_size)   # voxelization
    z0 = voxel_to_point(x0, z, nearest=True)  # voxel to point
    x1 = SparseTensor(z0.F, x.C, 1)
    print(x1.C.equal(x.C))
    print(x1.F.equal(x.F))
    return x1
```
In `x1 = SparseTensor(z0.F, x.C, 1)`, I use `x.C` and `z0.F` to construct the output `x1`. Did I do something wrong?
- In this case, `kernel_size == voxel_size` and `stride == voxel_size`. Could I understand it this way?
Thanks a lot! :)
Hi @suyunzzz, I don't fully understand your need (what the goal is for your operator). Could you please elaborate a bit more?
Hello Zhijian,
In MinkowskiEngine there are local pooling functions such as MinkowskiAvgPooling (as opposed to global_avg_pool). I want to know how to implement local average pooling in torchsparse.
It is just used in a multi-scale aggregation module, like this:
I am not sure whether I explained it clearly. Thanks a lot!
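For readers landing on this thread: local average pooling over a sparse voxel grid averages each output voxel's features over the *occupied* input voxels in its window. Independent of either library, the operation can be sketched in plain NumPy (the `local_avg_pool` helper name is hypothetical, not a torchsparse or MinkowskiEngine API):

```python
import numpy as np

def local_avg_pool(coords, feats, stride):
    """Average-pool sparse voxels into a coarser grid.

    coords: (N, 3) integer voxel coordinates
    feats:  (N, C) voxel features
    stride: pooling factor (kernel_size == stride, non-overlapping windows)
    Returns coarse (M, 3) coordinates and (M, C) averaged features.
    """
    coarse = coords // stride                       # window each voxel falls into
    uniq, inv = np.unique(coarse, axis=0, return_inverse=True)
    inv = inv.reshape(-1)                           # be robust to NumPy version differences
    summed = np.zeros((len(uniq), feats.shape[1]))
    np.add.at(summed, inv, feats)                   # scatter-add features per window
    counts = np.bincount(inv, minlength=len(uniq)).reshape(-1, 1)
    return uniq, summed / counts                    # average over occupied voxels only
```

Note the divide by `counts` rather than by the kernel volume: empty voxels do not contribute, which matches the "local" (non-global) average-pooling semantics being asked about.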
I haven't checked the details of your implementation, but I think you are on the right track.
From my understanding, you might need to pass `x.s` to reconstruct the sparse tensor rather than `1`.
Adding @Xiuyu-Li and @kentangSJTU to this thread. We might need to support upsampling (nearest / trilinear) and downsampling in the future.
Because it just aggregates at the input resolution, I pass `1`.
I will check it again, thanks a lot!
Hi @suyunzzz @zhijian-liu, sorry to bother you. I am looking for the torchsparse counterpart of `MinkowskiInterpolation()`. Any update on this issue? Any suggestions on what to investigate would be greatly appreciated, thank you.
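For context, what `MinkowskiInterpolation` computes is trilinear interpolation of sparse voxel features at continuous point locations, with empty voxels contributing zero. A minimal NumPy sketch of that operation, assuming unit voxel size (the `trilinear_interpolate` name is hypothetical, not a torchsparse API):

```python
import numpy as np

def trilinear_interpolate(coords, feats, queries):
    """Trilinearly interpolate sparse voxel features at continuous query points.

    coords:  (N, 3) integer voxel coordinates (unit voxel size)
    feats:   (N, C) voxel features; missing (empty) voxels contribute zero
    queries: (M, 3) float point coordinates
    Returns (M, C) interpolated features.
    """
    table = {tuple(c): f for c, f in zip(coords.tolist(), feats)}
    base = np.floor(queries).astype(int)  # lower corner of the enclosing cell
    frac = queries - base                 # fractional offset in [0, 1)
    out = np.zeros((len(queries), feats.shape[1]))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # trilinear weight of this corner for every query point
                w = ((frac[:, 0] if dx else 1 - frac[:, 0])
                     * (frac[:, 1] if dy else 1 - frac[:, 1])
                     * (frac[:, 2] if dz else 1 - frac[:, 2]))
                for i, b in enumerate(base):
                    f = table.get((b[0] + dx, b[1] + dy, b[2] + dz))
                    if f is not None:
                        out[i] += w[i] * f
    return out
```

A real implementation would replace the dict lookup and Python loops with a hashed coordinate query on the GPU, but the weighting scheme is the same.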
@HaFred, thanks for your interest! Unfortunately, we haven't been able to look into this issue yet.