ArseniuML

Results: 11 comments by ArseniuML

@CaptainRui1000 how did you get the TensorRT plugin to work? I wrote a symbolic op and a C++ plugin for SparseImplicitGemmFunction, then exported my NN with spconv.SubMConv2d to ONNX. Did you do the same?...
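Roughly what I mean by "symbolic op", as a minimal sketch (not the actual spconv code; the argument list of the real SparseImplicitGemmFunction is much longer, this one is hypothetical): attach a `symbolic` staticmethod to the custom autograd Function so that `torch.onnx.export` emits a custom node, which a TensorRT plugin registered under the same name/namespace can then pick up.

```python
# Minimal sketch, assuming a reduced/hypothetical signature for the sparse GEMM op.
import torch
from torch.autograd import Function

class SparseImplicitGemmStub(Function):
    @staticmethod
    def forward(ctx, features, weight, pair_indices):
        # Placeholder math; the real kernel gathers/scatters features by pair_indices.
        return features.matmul(weight.t())

    @staticmethod
    def symbolic(g, features, weight, pair_indices):
        # Emits a "spconv::SparseImplicitGemm" node into the ONNX graph; the
        # TensorRT plugin must be registered under the same op name to parse it.
        return g.op("spconv::SparseImplicitGemm", features, weight, pair_indices)

# Usage (sketch): wrap the Function call inside an nn.Module and export with a
# custom opset, e.g.
#   torch.onnx.export(model, inputs, "model.onnx",
#                     custom_opsets={"spconv": 1}, opset_version=13)
```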

// static_num_act_in is just out_inds_num_limit of previous conv layer.
// for regular conv, the input tensor has static shape, we should save a CPU
// variable of real num_act_out. here...

Unfortunately, I am stuck on exporting the aten::all operation to ONNX. It seems that a PyTorch update is needed, but I can't launch FSDv2 even with PyTorch 1.9.0.
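One workaround I am considering, as a hedged sketch: register a custom symbolic for aten::all that lowers `all(x)` to Cast -> ReduceMin -> Cast. Whether this matches the exact overload FSDv2 hits (e.g. `all(x, dim)`) is an assumption, and opset 13 is assumed as well.

```python
import torch

def all_symbolic(g, self):
    # all(x) is true iff the minimum of x, viewed as integers, is non-zero.
    as_int = g.op("Cast", self, to_i=torch.onnx.TensorProtoDataType.INT64)
    reduced = g.op("ReduceMin", as_int, keepdims_i=0)
    return g.op("Cast", reduced, to_i=torch.onnx.TensorProtoDataType.BOOL)

torch.onnx.register_custom_op_symbolic("aten::all", all_symbolic, 13)
```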

1. Export to ONNX is hard because of errors like "div() takes 3 positional arguments but 4 were given" (related to the aten::div operation). It seems that I have to re-implement all...
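As far as I understand, this signature mismatch happens when a newer torch.div passes a rounding_mode argument that an older div symbolic does not accept. A hedged sketch of one possible workaround is to register a replacement aten::div symbolic that does accept it (opset 13 and the exact overload being hit are assumptions; this does not reproduce torch's full true-divide behaviour):

```python
import torch
from torch.onnx.symbolic_helper import parse_args

@parse_args("v", "v", "s")
def div_symbolic(g, self, other, rounding_mode=None):
    out = g.op("Div", self, other)
    if rounding_mode == "floor":
        out = g.op("Floor", out)
    elif rounding_mode == "trunc":
        # ONNX has no Trunc op, so emulate truncation toward zero.
        out = g.op("Mul", g.op("Sign", out), g.op("Floor", g.op("Abs", out)))
    return out

torch.onnx.register_custom_op_symbolic("aten::div", div_symbolic, 13)
```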

I am trying to deploy spconv, but there is one limitation which seems fundamental to me: the spconv output shape is data-dependent, i.e.
Dense 1d matrix: 00001110000
Sparse 1d matrix...
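A tiny illustration of what I mean: the length of the sparse representation depends on the values, not only on the dense input shape, so it cannot be fixed at engine-build time.

```python
import numpy as np

dense = np.array([0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0])
indices = np.nonzero(dense)[0]   # -> [4 5 6]
values = dense[indices]          # -> [1 1 1]
# Another input of the same dense shape, e.g. 01111111110, would yield 9 indices,
# so the sparse tensors have data-dependent shapes.
```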

I think I must feed num_act_out_real (real_indices_num) to some FSD-specific layers so that they can perform slicing. Or perform padding like this:
1 0 3 5 4 6 0 -1...
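A hedged sketch of the padding idea (the names out_inds_num_limit and num_act_out_real follow the discussion above, not an actual FSDv2 API, and the exact padding scheme in the truncated example may differ): keep a static limit for TensorRT, pad the unused index slots with -1, and pass the real count so downstream layers can slice the valid part back out.

```python
import torch

def pad_indices(indices, out_inds_num_limit):
    # Pad to a static length so the tensor shape no longer depends on the data.
    num_act_out_real = indices.shape[0]
    padded = torch.full((out_inds_num_limit,), -1,
                        dtype=indices.dtype, device=indices.device)
    padded[:num_act_out_real] = indices
    return padded, torch.tensor(num_act_out_real)

def unpad_indices(padded, num_act_out_real):
    # Downstream layers slice with the real count instead of relying on the shape.
    return padded[: int(num_act_out_real)]

padded, n = pad_indices(torch.tensor([1, 0, 3, 5, 4, 6]), out_inds_num_limit=10)
# padded -> tensor([ 1,  0,  3,  5,  4,  6, -1, -1, -1, -1]), n -> 6
```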

> Fix the dynamic input size of backbone and rcnn.

As I understand, there are 2 backbones - the backbone of the segmentor, which is SimpleSparseUNet, and the backbone of SingleStageFSDv2,...

It seems that TensorRT 8.6 supports data-dependent operations (for example NonZero). Data-dependent operations are still not supported in plugins, but there is a workaround for this - add an additional output...
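A quick way to check the NonZero part end-to-end: export a tiny graph whose output shape is data-dependent and feed the resulting ONNX file to trtexec. This only relies on torch.nonzero mapping to ONNX NonZero; whether a given TensorRT 8.6 build actually accepts it has to be verified with trtexec.

```python
import torch

class NonZeroNet(torch.nn.Module):
    def forward(self, x):
        # Number of returned rows depends on the data, not on the input shape.
        return torch.nonzero(x > 0)

torch.onnx.export(NonZeroNet(), torch.randn(4, 8),
                  "nonzero_test.onnx", opset_version=13)
# then: trtexec --onnx=nonzero_test.onnx
```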

If I set multiscale_features=None in `extract_output = self.extract_feat(combined_out, dict_to_sample, multiscale_features=None)`, would it be a serious problem for FSDv2?

It seems that I can't deploy FSDv2 due to memory limitations (TensorRT requests > 30 GB of GPU memory, and this is impossible for me). @Abyssaledge is there any chance to...
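One thing I tried to rule out, as a hedged note: if the >30 GB request comes from the builder rather than from the network's worst-case padded shapes, capping the workspace memory pool is the usual first knob in the TensorRT 8.x Python API. It will not help if the static shapes themselves need that much memory.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()
# Cap the builder's tactic workspace at 8 GiB.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 8 << 30)
# ... build the network/engine with this config as usual ...
```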