intruder1998
Hi @glenn-jocher , What is happening behind the caching (--cache ram) flag, and what is its direct purpose in YOLOv5 training? Could you please brief it out so that I...
Hi @glenn-jocher , Thanks for your valuable suggestions but what's the reason behind using **Train on cached data: python train.py --cache (RAM caching) or --cache disk (disk caching)** as you...
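For context, the --cache flag in YOLOv5's train.py pre-decodes the dataset once and keeps the images either in RAM or as .npy files on disk, so later epochs skip JPEG decoding. A minimal usage sketch, assuming the standard YOLOv5 repo layout and the bundled coco128.yaml dataset config:

```shell
# Cache decoded images in RAM: fastest option, but the whole
# dataset must fit in available memory.
python train.py --data coco128.yaml --weights yolov5s.pt --img 640 --cache ram

# Cache decoded images as .npy files on disk: slower than RAM
# caching, but still avoids re-decoding JPEGs every epoch.
python train.py --data coco128.yaml --weights yolov5s.pt --img 640 --cache disk
```

The trade-off is simply dataloading speed versus memory/disk footprint; with no --cache flag, images are re-read and re-decoded from the original files each epoch.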
Hi @glenn-jocher , I was able to do sequential inference using the detect.py script and batch inference using the val.py script. One strange thing I observed is that in batch inference the time...
Hi @glenn-jocher , Actually I have one question: if we convert PyTorch models to TensorRT models with a specific batch size, say 4, then we get a TensorRT model with batch size...
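On the batch-size question: a TensorRT engine built with a static batch dimension of 4 will only accept inputs of exactly that batch size. To run variable batch sizes, the usual approach is to export the ONNX model with a dynamic batch axis and build the engine with an optimization profile. A hedged sketch, assuming YOLOv5's export.py and NVIDIA's trtexec tool (the input tensor name "images" is what YOLOv5's ONNX export typically produces, but verify it on your model):

```shell
# Export ONNX with a dynamic batch dimension.
python export.py --weights yolov5s.pt --include onnx --dynamic

# Build a TensorRT engine whose optimization profile covers batch 1..8,
# tuned for batch 4.
trtexec --onnx=yolov5s.onnx --saveEngine=yolov5s.engine \
        --minShapes=images:1x3x640x640 \
        --optShapes=images:4x3x640x640 \
        --maxShapes=images:8x3x640x640
```

At inference time you then set the actual input shape per execution context, anywhere within the min/max range of the profile.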
> Hi @quic-hitameht - Could you please give some examples of how we got symbolic tracing to work for YOLOv5? Just a couple of examples might be sufficient. Yes, that...
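As background on symbolic tracing: torch.fx.symbolic_trace runs a module with proxy tensors and records the operations into a GraphModule. A minimal sketch with a hypothetical stand-in module (the real YOLOv5 Detect head contains data-dependent control flow, which plain symbolic tracing cannot capture without workarounds such as concrete_args or refactoring the branches out):

```python
import torch
import torch.nn as nn
from torch.fx import symbolic_trace

class TinyHead(nn.Module):
    """Hypothetical stand-in for a traceable YOLOv5 submodule."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=1)

    def forward(self, x):
        # Straight-line tensor ops: safe for symbolic tracing.
        return self.conv(x).sigmoid()

m = TinyHead()
traced = symbolic_trace(m)          # records ops into a GraphModule
x = torch.randn(1, 3, 16, 16)
# The traced module is functionally identical to the original.
assert torch.allclose(m(x), traced(x))
print(traced.graph)                 # inspect the recorded op graph
```

The traced GraphModule can then be inspected or transformed node by node, which is what quantization tools typically rely on.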
> Ok, maybe I misunderstood something. From my understanding, in training there is: > > 1. resizing (so it's not a crop, and it affects image quality) from the initial...
> We select a vision transformer as our feature extractor, which means the input images should be resized to a fixed image size (224x224). Hi @Stephen0808 In training I could see resizing...
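For reference, resizing a batch to a ViT's fixed 224x224 input is a one-liner in PyTorch; a minimal sketch (bilinear interpolation is a common choice here, though the exact mode depends on the training recipe):

```python
import torch
import torch.nn.functional as F

# A batch of images at an arbitrary source resolution.
img = torch.randn(2, 3, 640, 480)

# Resize (not crop) to the fixed ViT input size of 224x224.
resized = F.interpolate(img, size=(224, 224), mode="bilinear",
                        align_corners=False)
assert resized.shape == (2, 3, 224, 224)
```

Because this is interpolation rather than cropping, the full field of view is preserved but the aspect ratio is distorted unless the source is padded to a square first.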
> I'm training the segmentation EfficientViT B1 on Cityscapes, and achieving ~0.6 mIoU, however the reported results are around 0.8 mIoU. > > Would you be able to offer some...
> I'm trying to run FP16 inference using TensorRT 8.5.2.2 on a Xavier NX device, and getting NaN or garbage values. Has anyone encountered a similar issue? > > *...
> I'm using a proprietary script, but you can look at NVIDIA's examples. Running TRT models is usually the same regardless of model architecture, as long as the inputs and...