penglu
I'm curious about it too. Maybe we can create a `Yolov4` model instance for inference, instead of a Darknet instance, in demo.py? I've already tried it; unfortunately, I failed to...
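A rough sketch of that idea, assuming pytorch-YOLOv4's `models.Yolov4` class and a PyTorch `.pth` checkpoint (the constructor arguments and output format here are assumptions, not verified against demo.py):

```python
# Sketch: build a Yolov4 model instance directly instead of the Darknet
# wrapper. Assumes pytorch-YOLOv4's models.Yolov4(n_classes=..., inference=...)
# signature; adjust to your copy of the repo.
import torch
from models import Yolov4

model = Yolov4(n_classes=80, inference=True)  # inference=True enables the decode head
model.load_state_dict(torch.load("yolov4.pth", map_location="cpu"))
model.eval()

dummy = torch.randn(1, 3, 608, 608)  # NCHW, 608x608 as in the YOLOv4 paper
with torch.no_grad():
    out = model(dummy)  # output format depends on the repo version
```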
> One question here: how do you compute the memory size?
>
> In the TRT verbose log there will be memory info about the engine, e.g.
>
> ```...
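For reference, one way to surface that verbose memory info from the Python API (a minimal sketch assuming TensorRT 8.x and an ONNX model path):

```python
# Sketch: build with a VERBOSE logger so TensorRT prints engine/activation
# memory usage during the build. Assumes TensorRT 8.x Python bindings.
import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)  # memory info is emitted at VERBOSE level
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    parser.parse(f.read())

config = builder.create_builder_config()
engine = builder.build_serialized_network(network, config)  # watch the log for memory lines
```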
How can I make sure that the other ops are fp16? How can I get the data type of each layer in the engine?
Is there any way to get the data type (fp16 or fp32) of the layers in a mixed-precision engine during inference?
> I have checked the verbose information: (1)'s engine size is 2334 MiB (~2.28 GiB) and (3)'s engine size is 2323 MiB (~2.27 GiB). Why are they so close? Have (3)'s layers all turned to fp32?
> > Is there any way to get the data type (fp16 or fp32) of the layers in a mixed-precision engine during inference?
>
> No, it's only logged in the build phase....
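One possible workaround on newer TensorRT versions (8.2+ adds `IEngineInspector`): dump the per-layer information of a built engine as JSON. Whether precision shows up there depends on the version, so treat this as a sketch rather than a confirmed answer:

```python
# Sketch: dump per-layer info from a built engine via IEngineInspector
# (TensorRT >= 8.2). DETAILED profiling verbosity must be set at build
# time for per-layer detail to be recorded in the engine.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)                        # allow mixed precision
config.profiling_verbosity = trt.ProfilingVerbosity.DETAILED

serialized = builder.build_serialized_network(network, config)
engine = trt.Runtime(logger).deserialize_cuda_engine(serialized)

inspector = engine.create_engine_inspector()
print(inspector.get_engine_information(trt.LayerInformationFormat.JSON))
```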
I ran into the same error.
> Can you try increasing the workspace size? e.g. `--pool-limit workspace:1G`

I set `--pool-limit workspace:2G` and even `--pool-limit workspace:20G`, but it did not work.
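If the CLI flag has no effect, the equivalent setting when building directly with the TensorRT Python API looks like this (a sketch for TensorRT 8.4+; older versions use `config.max_workspace_size` instead):

```python
# Sketch: raise the workspace memory pool limit via the builder config
# (TensorRT >= 8.4).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 2 << 30)  # 2 GiB
```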
> Can you try increasing the workspace size? e.g. `--pool-limit workspace:1G`

My network is a transformer. When I run `polygraphy run ./transformer.onnx --trt`, it's OK, but when I run `polygraphy run ./transformer.onnx --trt-outputs mark all...`
> Can you try increasing the workspace size? e.g. `--pool-limit workspace:1G`

When I extracted the ONNX model so that it only contains the "input_mask -> unsqueeze" ops, the same error arose. Maybe polygraphy...
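That isolation step can be reproduced in plain Python with `onnx.utils.extract_model`; the tensor names below are hypothetical guesses at the commenter's subgraph, not taken from the actual model:

```python
# Sketch: carve out the suspect "input_mask -> Unsqueeze" subgraph, then
# test just that piece under polygraphy. Tensor names are hypothetical;
# list the real ones with Netron or `polygraphy inspect model`.
import onnx.utils

onnx.utils.extract_model(
    "transformer.onnx",
    "subgraph.onnx",
    input_names=["input_mask"],      # graph input feeding the Unsqueeze
    output_names=["unsqueeze_out"],  # output tensor of the Unsqueeze node
)
```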