temporal-shift-module
Does this PyTorch model support TensorRT acceleration?
When I was converting the PyTorch model to an ONNX model, I got the warnings below, and I suspect they may be the reason why the two eval results (PyTorch and ONNX) mismatch.
/tmp-data/sunmo/trt7/torch_onnx/ops/temporal_shift.py:40: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out[:, :-1, :fold] = x[:, 1:, :fold]  # shift left
/tmp-data/sunmo/trt7/torch_onnx/ops/temporal_shift.py:40: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
  out[:, :-1, :fold] = x[:, 1:, :fold]  # shift left
/tmp-data/sunmo/trt7/torch_onnx/ops/temporal_shift.py:41: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold]  # shift right
/tmp-data/sunmo/trt7/torch_onnx/ops/temporal_shift.py:41: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
  out[:, 1:, fold: 2 * fold] = x[:, :-1, fold: 2 * fold]  # shift right
/tmp-data/sunmo/trt7/torch_onnx/ops/temporal_shift.py:42: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  out[:, :, 2 * fold:] = x[:, :, 2 * fold:]  # not shift
/tmp-data/sunmo/trt7/torch_onnx/ops/temporal_shift.py:42: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator copy_ (possibly due to an assignment). This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
  out[:, :, 2 * fold:] = x[:, :, 2 * fold:]  # not shift
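These warnings come from the in-place slice assignments (out[...] = x[...]) in the shift function: the tracer treats the slice bounds as constants and cannot record writes into views, so the exported graph can silently diverge from the eager model. One common workaround (a sketch, not the repo's official code) is to build the shifted tensor with slicing plus torch.cat instead of writing into a pre-allocated zero buffer:

```python
import torch

def temporal_shift_cat(x, n_segment, fold_div=8):
    """Export-friendly temporal shift (sketch): same semantics as the
    zero-padded in-place shift, but built from pure slice + cat ops."""
    nt, c, h, w = x.size()
    n_batch = nt // n_segment
    x = x.view(n_batch, n_segment, c, h, w)

    fold = c // fold_div
    zeros = x.new_zeros(n_batch, 1, fold, h, w)
    left = torch.cat((x[:, 1:, :fold], zeros), dim=1)            # shift left in time
    right = torch.cat((zeros, x[:, :-1, fold:2 * fold]), dim=1)  # shift right in time
    rest = x[:, :, 2 * fold:]                                    # channels that are not shifted
    out = torch.cat((left, right, rest), dim=2)                  # reassemble the channel dim
    return out.view(nt, c, h, w)
```

Because this version only uses slicing and concatenation, tracing it should no longer emit the copy_ warnings, and the resulting ONNX graph contains only Slice/Concat ops.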
I think you can try torch2trt
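For reference, torch2trt conversion is typically invoked like the sketch below (the model variable and the input shape are assumptions; the shape here stands for one clip of 16 RGB frames at 224x224 and must match whatever your model's forward actually expects):

```python
import torch
from torch2trt import torch2trt  # https://github.com/NVIDIA-AI-IOT/torch2trt

def convert_with_torch2trt(model, input_shape=(1, 16 * 3, 224, 224)):
    """Convert a TSM/TSN-style model with torch2trt and sanity-check the outputs."""
    model = model.eval().cuda()
    x = torch.randn(*input_shape).cuda()
    model_trt = torch2trt(model, [x])  # builds a TensorRT engine from a trace of the model
    with torch.no_grad():
        diff = (model(x) - model_trt(x)).abs().max()
    print(f"max abs diff vs. PyTorch: {diff:.6f}")
    return model_trt
```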
@z13974509906 I have tried torch2trt, but it's still not working.

Initializing TSN with base model: resnet50.
TSN Configurations:
    input_modality:     RGB
    num_segments:       16
    new_length:         1
    consensus_module:   avg
    dropout_ratio:      0.8
    img_feature_dim:    256
=> base model: resnet50
Adding temporal shift...
=> n_segment per stage: [16, 16, 16, 16]
=> Processing stage with 3 blocks residual
=> Using fold div: 8
=> Using fold div: 8
=> Using fold div: 8
=> Processing stage with 4 blocks residual
=> Using fold div: 8
=> Using fold div: 8
=> Using fold div: 8
=> Using fold div: 8
=> Processing stage with 6 blocks residual
=> Using fold div: 8
=> Using fold div: 8
=> Using fold div: 8
=> Using fold div: 8
=> Using fold div: 8
=> Using fold div: 8
=> Processing stage with 3 blocks residual
=> Using fold div: 8
=> Using fold div: 8
=> Using fold div: 8
[TensorRT] ERROR: (Unnamed Layer* 0) [Shuffle]: input and output volume mismatch. input volume is 2408448 and output volume is 150528
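For what it's worth, 2408448 is exactly 16 x 150528 (16 x 3 x 224 x 224 vs. 3 x 224 x 224), so the failing Shuffle is most likely the view at the top of the TSN forward that folds the 16 segments into the batch dimension. torch2trt builds the engine in implicit-batch mode, where a reshape cannot change the batch size, which would explain the mismatch. A minimal sketch of the offending pattern (class name and shapes are assumptions):

```python
import torch
import torch.nn as nn

class FoldSegments(nn.Module):
    """Stand-in for the segment-folding reshape in a TSN-style forward."""
    def forward(self, x):                # x: (1, 48, 224, 224) = one clip of 16 RGB frames
        return x.view(-1, 3, 224, 224)   # (16, 3, 224, 224): batch dimension grows 1 -> 16

# With implicit-batch TensorRT the leading dimension is fixed as the batch, so this view
# has to be expressed as a Shuffle over (48, 224, 224) -> (3, 224, 224), i.e. the reported
# 2,408,448 -> 150,528 volume mismatch.
```

If that is indeed the cause, the segment-folding reshape (and the reshapes inside the shift blocks) has to be kept out of the converted graph, or the model has to go through a path that supports explicit shapes, e.g. ONNX or a hand-built TensorRT network.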
Did you solve this problem?
For a TensorRT implementation of TSM-R50, you can refer to wang-xinyu/tensorrtx.
How did you convert the model to ONNX? Thanks.
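In case it helps, a minimal export sketch (the model argument, file name, and input shape are placeholders; the shape assumes one clip of 16 RGB frames at 224x224):

```python
import torch

def export_to_onnx(model, onnx_path="tsm_r50.onnx",
                   input_shape=(1, 16 * 3, 224, 224)):
    """Trace-and-export sketch; adjust input_shape to what the model's forward expects."""
    model = model.eval()
    dummy = torch.randn(*input_shape)
    torch.onnx.export(
        model, dummy, onnx_path,
        input_names=["input"], output_names=["output"],
        opset_version=11,
    )
```

Note that the export goes through tracing, so the segment count and spatial size are baked into the graph unless dynamic_axes is also passed, and the in-place copy_ warnings discussed above can make the traced graph differ from the eager model, which would explain the PyTorch/ONNX eval mismatch.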