Anton Lokhmotov
@Mamtesh11 That's an excellent question; I [was wondering](https://github.com/mlperf/inference_results_v0.5/issues/7) the same. The answer from NVIDIA was [maybe](https://github.com/mlperf/inference_results_v0.5/issues/7#issuecomment-559206390), but some modifications and experimentation are needed. We ([dividiti](http://dividiti.com/)) are looking into this right...
We generated TensorRT plans for the Xavier configuration on a machine with a GTX 1080 (compute capability 6.1). Unfortunately, we then failed to deploy them on both TX1 (compute capability 5.3)...
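For anyone else hitting this: TensorRT plans are tied to the GPU they were built on, so a quick sanity check is to query the target device's compute capability before trying to deserialize a plan built elsewhere. A minimal sketch, assuming `pycuda` is installed (device index 0 is illustrative):

```python
# Minimal sketch: query the GPU's compute capability with pycuda.
# A plan serialized on a GTX 1080 (6.1) will not deserialize on a
# device with a different compute capability, e.g. TX1 (5.3).
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context on device 0

major, minor = cuda.Device(0).compute_capability()
print(f"GPU 0 compute capability: {major}.{minor}")
```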
@nvpohanh Yes, but we have maxed out our 128 GB SD card on TX1, which had at least 70 GB of free space when we started :). I've ordered a...
Further to my previous question: is it still necessary to download COCO and the object detection models if all I want is to generate TensorRT plans for ResNet?
@nvpohanh How about calibration? Don't you need real data for that?
@nvpohanh We have suspected the same about sharing: calibration done on GTX 1080 seems to be quite similar to that done on TX1. On TX1 and TX2, we get around...
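For context, this is roughly the shape of an INT8 calibrator in the TensorRT Python API; a minimal sketch, not NVIDIA's actual harness code. The class name, batch source, and cache file name are hypothetical; the calibration cache written at the end is the artifact that might be shareable across devices:

```python
import os
import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit
import tensorrt as trt

class ImageCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds real preprocessed batches to TensorRT during INT8 calibration."""

    def __init__(self, batches, cache_file="calibration.cache"):
        super().__init__()
        self.batches = iter(batches)   # iterable of NumPy arrays, one batch each
        self.cache_file = cache_file
        self.device_input = None

    def get_batch_size(self):
        return 1                       # assumption: calibration batch size 1

    def get_batch(self, names):
        try:
            batch = next(self.batches)
        except StopIteration:
            return None                # no more data: calibration is done
        if self.device_input is None:
            self.device_input = cuda.mem_alloc(batch.nbytes)
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def read_calibration_cache(self):
        # Reusing a cache (e.g. one generated on a GTX 1080) skips calibration.
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```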
Thanks! But I'm now wondering about @Mamtesh11's original question. Will the way NVIDIA constructed optimized TensorRT plans for Xavier work for the older devices with FP32/FP16 support only? (As I...
@nvpohanh I get that I need to generate and run plans on the same platform. But, IIRC, you construct one graph (SSD Large?) layer by layer. Do you specify...
(That's what I meant by "**the way** NVIDIA constructed optimized TensorRT plans for Xavier".)
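For reference, per-layer precision can be specified while constructing a network layer by layer with the TensorRT Python API. A hedged sketch against the TensorRT 6/7-era API, with a single placeholder convolution standing in for the real graph; this is not necessarily how NVIDIA's harness does it:

```python
import numpy as np
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
config = builder.create_builder_config()

# Allow reduced-precision kernels globally.
config.set_flag(trt.BuilderFlag.FP16)

# Build the graph layer by layer (a single conv here, for illustration).
data = network.add_input("data", trt.float32, (1, 3, 224, 224))
weights = np.zeros((64, 3, 7, 7), dtype=np.float32)
conv = network.add_convolution(data, 64, (7, 7), trt.Weights(weights))
network.mark_output(conv.get_output(0))

# Pin this layer's precision; STRICT_TYPES makes the builder obey it.
conv.precision = trt.DataType.HALF
config.set_flag(trt.BuilderFlag.STRICT_TYPES)
```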
Thanks @nvpohanh, will do!