
✨[Feature] Adding support for collections as part of many inputs/outputs in TRTorch programs

Open narendasan opened this issue 4 years ago • 7 comments

Is your feature request related to a problem? Please describe.

Currently TRTorch can only produce compiled programs whose interface can be represented as

f(Tensor[]) -> Tensor[]

This covers cases like returning a single Tensor, or a single tuple or list of results.

However, we do not currently support cases like

f(Tensor[]) -> (Tensor, Tensor, (Tensor, Tensor)) 

or

f(Tensor, Tensor, (Tensor, Tensor)) -> (Tensor, (Tensor, Tensor)) 

since these require the graph synthesis portion of the compiler to be more intelligent about the structure of the inputs and outputs.
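For concreteness, a minimal TorchScript module with that second signature might look like the following (the module itself is hypothetical, only meant to illustrate the nested I/O shape that could not be compiled at the time):

```python
from typing import Tuple

import torch
from torch import Tensor, nn

# Hypothetical module with the signature
# f(Tensor, Tensor, (Tensor, Tensor)) -> (Tensor, (Tensor, Tensor))
class NestedIO(nn.Module):
    def forward(
        self, x: Tensor, y: Tensor, pair: Tuple[Tensor, Tensor]
    ) -> Tuple[Tensor, Tuple[Tensor, Tensor]]:
        a, b = pair
        s = x + y
        return s, (s * a, s * b)

# Scripting works fine; the limitation described above is in TRTorch's graph
# synthesis, which expects a flat Tensor[] -> Tensor[] interface.
scripted = torch.jit.script(NestedIO())
```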

Describe the solution you'd like

We should add the ability to store the input and output formats of the graph and hook up the output of the generic tensorrt::execute_engine op to the right output format. This would require storing and packaging information about how Tensors in a flat list correspond to values in the source graph, so that the original collection structure can be reconstructed.
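Roughly, that bookkeeping amounts to flattening nested collections into a flat Tensor[] plus a structure "spec", then repacking the engine's flat outputs. The toy helpers below are illustrative only, not TRTorch internals:

```python
from typing import Any, List, Tuple

import torch

# Illustrative only: flatten a nested tuple of Tensors into a flat list plus a
# spec recording the structure, then rebuild the nesting from a flat Tensor[].
def flatten(value: Any) -> Tuple[List[torch.Tensor], Any]:
    if isinstance(value, torch.Tensor):
        return [value], "T"
    flat: List[torch.Tensor] = []
    spec = []
    for v in value:
        f, s = flatten(v)
        flat += f
        spec.append(s)
    return flat, tuple(spec)

def unflatten(flat: List[torch.Tensor], spec: Any, idx: int = 0) -> Tuple[Any, int]:
    if spec == "T":
        return flat[idx], idx + 1
    out = []
    for s in spec:
        v, idx = unflatten(flat, s, idx)
        out.append(v)
    return tuple(out), idx

# (Tensor, (Tensor, Tensor)) <-> flat list of 3 Tensors + spec ("T", ("T", "T"))
nested = (torch.rand(2), (torch.rand(2), torch.rand(2)))
flat, spec = flatten(nested)
rebuilt, _ = unflatten(flat, spec)
```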

Describe alternatives you've considered

Internally, we should be able to handle any form of I/O that can be reduced to Tensor[] -> Tensor[].

Additional context

See PRs: #283, #419

narendasan avatar Apr 05 '21 22:04 narendasan

This would have implications for the compile spec and would place requirements on the user to provide information about collection size and ordering. Probably one of the easier ways to do this, in terms of usability, is to introduce an example-input argument like torch.jit.trace uses, from which we could extract information such as datatype, ordering, tensor shape, and collection size.
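A hypothetical sketch of what extracting a spec from an example input could look like (the `extract_spec` helper and `example_inputs` below are illustrative only and not part of the TRTorch API):

```python
import torch

# Hypothetical helper: derive dtype, shape, and collection structure from an
# example input, analogous to how torch.jit.trace uses example inputs.
def extract_spec(value):
    if isinstance(value, torch.Tensor):
        return {"shape": tuple(value.shape), "dtype": value.dtype}
    return tuple(extract_spec(v) for v in value)

example_inputs = (torch.rand(1, 3, 224, 224), (torch.rand(1, 10), torch.rand(1, 10)))
spec = extract_spec(example_inputs)
# spec captures both the per-Tensor metadata and the nesting, e.g.
# ({'shape': (1, 3, 224, 224), 'dtype': torch.float32},
#  ({'shape': (1, 10), 'dtype': torch.float32}, {'shape': (1, 10), 'dtype': torch.float32}))
```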

narendasan avatar May 13 '21 21:05 narendasan

This should also be able to solve the case where an input or output is dropped because it is unused in the TRT network.

narendasan avatar May 13 '21 21:05 narendasan

Comment added by Nick Comly in Aha!

May slip to v0.5.0; if so, push to master so the customer can get it earlier.

ncomly-nvidia avatar Jul 21 '21 22:07 ncomly-nvidia

Huge +1 on this. It would make most torchvision models available out of the box.

ntakouris avatar Jun 23 '22 12:06 ntakouris

bump

ntakouris avatar Sep 22 '22 07:09 ntakouris

Hi @ntakouris, have you checked out the v1.2 release? It includes a first pass on collections (list & tuple I/O).
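For reference, a rough sketch of how the v1.2 collections support is meant to be used with the TorchScript front end, describing nested inputs via `input_signature` (the exact argument names and nesting format should be checked against the v1.2 release notes and docs):

```python
from typing import Tuple

import torch
from torch import Tensor, nn
import torch_tensorrt

# Small example module taking a tuple of Tensors and returning a tuple.
class TupleIO(nn.Module):
    def forward(self, pair: Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tensor]:
        a, b = pair
        return a + b, a - b

scripted = torch.jit.script(TupleIO())

# Sketch: describe the nested input with input_signature rather than a flat
# inputs list; verify the exact format against the v1.2 documentation.
trt_module = torch_tensorrt.ts.compile(
    scripted,
    input_signature=((torch_tensorrt.Input(shape=[1, 8]), torch_tensorrt.Input(shape=[1, 8])),),
    enabled_precisions={torch.float},
)
```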

@narendasan is this issue OK to close?

ncomly-nvidia avatar Sep 22 '22 17:09 ncomly-nvidia

Yeah this is good to close

narendasan avatar Sep 22 '22 18:09 narendasan