Glenn Jocher

Results: 1984 comments by Glenn Jocher

@sck-star if this is due to multiple PyTorch Hub models being loaded, then this has been resolved in master and your code is out of date. Otherwise, please submit the exact commands...
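
For reference, a minimal sketch of loading two Hub models side by side in one process on a current master checkout (the model names and image URL here are just examples):

```python
import torch

# Load two YOLOv5 models from PyTorch Hub in the same process
model_s = torch.hub.load('ultralytics/yolov5', 'yolov5s')
model_m = torch.hub.load('ultralytics/yolov5', 'yolov5m')

# Each model runs inference independently on the same image
img = 'https://ultralytics.com/images/zidane.jpg'
print(model_s(img).pandas().xyxy[0])
print(model_m(img).pandas().xyxy[0])
```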

@Yolobeginner 👋 Hello! Thanks for asking about **training speed issues**. YOLOv5 🚀 can be trained on CPU (slowest), [single-GPU](https://docs.ultralytics.com/yolov5/tutorials/train_custom_data), or [multi-GPU](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training) (fastest). If you would like to increase your training...
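
A minimal sketch of how the device choice maps onto training, assuming a cloned ultralytics/yolov5 repo run from its root (train.run() mirrors the train.py CLI arguments):

```python
import train  # train.py from the yolov5 repo root

# CPU (slowest): device='cpu'; single GPU: device='0'
train.run(data='coco128.yaml', weights='yolov5s.pt', imgsz=640, epochs=100,
          batch_size=16, device='0')

# Multi-GPU DDP (fastest) is launched from the shell instead, e.g.
# python -m torch.distributed.run --nproc_per_node 2 train.py --batch 64 --device 0,1
```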

@Yolobeginner yes, YOLOv5 supports training on an RTX 2060.

@Imtaiyaz-S 👋 Hello! Thanks for asking about **Export Formats**. YOLOv5 🚀 offers export to most popular formats used today. See our [TFLite, ONNX, CoreML, TensorRT Export Tutorial](https://docs.ultralytics.com/yolov5/tutorials/model_export) for details. ##...
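
A minimal sketch of exporting from Python, assuming a cloned ultralytics/yolov5 repo run from its root (export.run() mirrors the export.py CLI, and include selects the output formats):

```python
import export  # export.py from the yolov5 repo root

# Export yolov5s.pt to ONNX and TFLite in one call
export.run(weights='yolov5s.pt', include=('onnx', 'tflite'), imgsz=(640, 640))
```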

@moahaimen LSTM models must be trained on video datasets, so as I tell everyone, the first step is to have labelled data of the type you want to be able...

@Zengyf-CVer thanks for the bug report! I think this is related to https://github.com/ultralytics/yolov5/issues/6962 and https://github.com/pytorch/pytorch/issues/74016, and should be resolved in torch>1.12.0. I see you have torch 1.11.0 installed. Can you...
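
A quick way to confirm which torch build an environment is actually using:

```python
import torch

# Per the comment above, the fix should arrive in torch>1.12.0
print(torch.__version__)
```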

@Zengyf-CVer awesome, thanks for testing!!

@fatejzz I'm sorry, we don't have the resources to review custom code, but we have a few YOLOv5 C++ inference examples on ONNX and OpenVINO exported models here: ## C++ Inference...

@fatejzz yes, most export formats require fixed input sizes. I think only PyTorch and ONNX --dynamic exports support dynamic input sizes.
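
A minimal sketch of what --dynamic buys you on the ONNX side, assuming onnxruntime is installed and yolov5s.onnx was exported with the --dynamic flag:

```python
import numpy as np
import onnxruntime as ort

# A --dynamic export accepts varying batch/height/width, as long as
# spatial dims stay multiples of the model's max stride (32)
sess = ort.InferenceSession('yolov5s.onnx')
inp = sess.get_inputs()[0].name

for h, w in [(640, 640), (480, 640)]:
    x = np.zeros((1, 3, h, w), dtype=np.float32)
    pred = sess.run(None, {inp: x})[0]
    print(f'{h}x{w} -> {pred.shape}')
```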

@git-hamza I'd recommend using the same TF version for export and inference.
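
e.g. a quick sanity check to run in both the export and inference environments:

```python
import tensorflow as tf

# These should match between the environment that exported the model
# and the environment that runs inference on it
print(tf.__version__)
```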