torch2trt
add_constant incompatible function arguments
Hi,
I am trying to convert my YOLO model, trained in PyTorch, into a TensorRT-compatible model. Following the documented steps, I get the error message below. I have seen other people hit the same error, but in different scenarios. I hope somebody can shed light on the cause of this error, since the stack trace gives little information. It would be great if anyone could provide a solution.
I tried the workaround from https://github.com/NVIDIA/TensorRT/issues/213 to fix my problem, but it doesn't work.
Warning: Encountered known unsupported method torch.arange
Warning: Encountered known unsupported method torch.Tensor.repeat
Traceback (most recent call last):
File "convert.py", line 12, in
Did you ever figure out a fix for this?
I encountered the same problem!
I first converted my model to .onnx format, simplified it with https://github.com/daquexian/onnx-simplifier, and then converted it to .trt using this repo.
I did not find a way to resolve the issue with this repo, nor any help from it.
shape = tuple(t.shape) # don't exclude batch when adding constants...?
weight = t.detach().cpu().numpy()
weight = trt.Weights(np.ascontiguousarray(weight, dtype=np.float32)) # fixed
t._trt = network.add_constant(shape, weight).get_output(0)
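The root cause here appears to be that `t.detach().cpu().numpy()` can yield an int64 or non-contiguous array, which the TensorRT 7 Python bindings cannot implicitly convert to `trt.Weights` (they accept dtypes like float32/int32). A minimal numpy-only sketch of what the cast in the patch above accomplishes (the TensorRT calls themselves are omitted, since this only illustrates the dtype/contiguity conversion):

```python
import numpy as np

def to_trt_weights_array(arr):
    """Return a contiguous float32 copy of `arr`, the form that
    trt.Weights() accepts; int64 or non-contiguous input is what
    makes add_constant() raise the 'incompatible function
    arguments' TypeError."""
    return np.ascontiguousarray(arr, dtype=np.float32)

# A PyTorch LongTensor round-trips to numpy as int64 -- a dtype
# the TRT 7 bindings refuse. Slicing also makes it non-contiguous.
raw = np.arange(16, dtype=np.int64)     # stand-in for t.detach().cpu().numpy()
fixed = to_trt_weights_array(raw[::2])  # strided view -> contiguous float32 copy

assert fixed.dtype == np.float32
assert fixed.flags["C_CONTIGUOUS"]
# fixed can now be wrapped as trt.Weights(fixed) and passed to add_constant().
```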
Hi,
I recently upgraded from PyTorch 1.3 and TRT 6 to PyTorch 1.6 and TRT 7 and this problem has also come up (what versions are you guys using?). @yuzhiyiliu's fix worked for me, but I'm also seeing the same issue on line 159 of torch2trt.py:
scalar = t * torch.ones(shape, dtype=dtype).cpu().numpy()
trt_tensor = network.add_constant(shape, scalar).get_output(0)
I'm going to guess there are probably other areas of the code that are also broken.
Is this a PyTorch version compatibility issue and can we see this fixed soon?
Let me tell you a secret: the YOLO layer has no weights and does not need to be converted.
@Jaftem Could you solve the issue on line 159? I applied @yuzhiyiliu's solution for the weight, but for the scalar I could not find a fix.
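In principle the same cast should work for the scalar constant on line 159, since the failure mode is the same (an array dtype that won't implicitly convert to `trt.Weights`). A hedged, untested sketch; the `network.add_constant` call is shown as a comment because building it requires a live TensorRT network, and the shape/value here are made-up examples:

```python
import numpy as np

# Stand-ins for the variables on line 159 of torch2trt.py:
shape = (1, 3, 2)   # hypothetical broadcast shape
t = 0.5             # the Python scalar being turned into a constant

# Original: scalar = t * torch.ones(shape, dtype=dtype).cpu().numpy()
# Same idea, but force a contiguous float32 array so trt.Weights accepts it:
scalar = t * np.ones(shape, dtype=np.float32)
weights = np.ascontiguousarray(scalar, dtype=np.float32)

# trt_tensor = network.add_constant(shape, trt.Weights(weights)).get_output(0)

assert weights.dtype == np.float32
assert weights.flags["C_CONTIGUOUS"]
```

Note that forcing float32 is only safe if the constant is genuinely floating point; an integer-typed constant would instead need `np.int32`.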
Hi, I'm getting a similar error at this line:
divisor = torch.tensor(list(range(1, input_shape[2] + 1))*input_shape[1], device='cuda').view([1, input_shape[1], input_shape[2], 1, 1])
I can't export to ONNX because adaptive pooling is apparently not supported there. Does anyone know how to fix this? What does the error even mean?
TypeError: add_constant(): incompatible function arguments. The following argument types are supported:
1. (self: tensorrt.tensorrt.INetworkDefinition, shape: tensorrt.tensorrt.Dims, weights: tensorrt.tensorrt.Weights) -> tensorrt.tensorrt.IConstantLayer
Invoked with: <tensorrt.tensorrt.INetworkDefinition object at 0x7f39cf3653b0>, (384,), array([ 1, 2, 3, ..., 14, 15, 16])
(a 384-element int64 array: the pattern 1..16 repeated 24 times)
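As for what the error means: `add_constant()` was handed a plain numpy array of dtype int64 (the default for `torch.tensor(list(range(...)))`), and the TRT 7 bindings have no implicit conversion from int64 arrays to `trt.Weights`. One likely workaround is to build the constant with a supported dtype before tracing. A sketch of rebuilding the failing `divisor` as float32; the shape values are guessed from the 384-element dump (24 × 16), so treat them as illustrative:

```python
import numpy as np

# Hypothetical shape matching the pasted error: 24 * 16 = 384 elements.
input_shape = (1, 24, 16)

# Original (fails): torch.tensor(list(range(1, input_shape[2] + 1)) * input_shape[1])
# defaults to int64. Forcing float32 gives a dtype trt.Weights can accept:
divisor = np.array(
    list(range(1, input_shape[2] + 1)) * input_shape[1],
    dtype=np.float32,
).reshape(1, input_shape[1], input_shape[2], 1, 1)

assert divisor.dtype == np.float32
assert divisor.shape == (1, 24, 16, 1, 1)
```

In the PyTorch model itself the equivalent change would be passing `dtype=torch.float32` to `torch.tensor(...)`, assuming the downstream math tolerates a float divisor.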
To @ghazalehtrb and others: did you solve this issue? I ran into the same problem as @ghazalehtrb but couldn't find a solution anywhere!
@herry123435 Unfortunately no! I still have this issue, I ended up changing the whole network for now.
Omg, I still have this issue with the swinv2_tiny_window8_256 model.