yolo_v1_v2_tensorflow
=====================
Simple implementation of yolo v1 and yolo v2 in TensorFlow
Introduction
==============
Paper yolo v1: You Only Look Once: Unified, Real-Time Object Detection
Paper yolo v2: YOLO9000: Better, Faster, Stronger
In the yolo v2 code, we use 9 anchors computed by k-means clustering on the COCO dataset.
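The anchor computation can be sketched as follows. This is a minimal NumPy example, not the repo's actual script: it clusters (width, height) pairs with the `d = 1 - IoU` distance proposed in the YOLO9000 paper, assuming boxes share a common center. The random boxes stand in for real COCO box dimensions.

```python
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between (w, h) pairs, assuming all boxes share the same center."""
    w = np.minimum(boxes[:, None, 0], centroids[None, :, 0])
    h = np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
          + (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """Cluster (w, h) pairs with distance d = 1 - IoU (highest IoU = nearest)."""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else centroids[i] for i in range(k)])
        if np.allclose(new, centroids):  # converged
            break
        centroids = new
    return centroids

# Toy usage: random normalized widths/heights stand in for COCO labels.
boxes = np.random.default_rng(1).uniform(0.05, 1.0, size=(500, 2))
anchors = kmeans_anchors(boxes, k=9)  # shape (9, 2)
```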
| data augmentation | pretrained vgg16 | pretrained darknet |
|---|---|---|
| :x: | :heavy_check_mark: | :x: |
What is the normalized offset? In yolo v1, a box's center (x, y) is encoded as its offset from the top-left corner of the grid cell that contains it, normalized by the cell size so it lies in [0, 1]; the box width and height are normalized by the image width and height.
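A minimal sketch of this encoding, assuming a 7x7 grid as in the yolo v1 paper (the function name and signature are illustrative, not from this repo):

```python
def encode_box(cx, cy, w, h, img_w, img_h, S=7):
    """Encode an absolute-pixel box as yolo v1 normalized offsets.

    (cx, cy): box center in pixels; (w, h): box size in pixels;
    S: grid size (7 in the paper).
    Returns (col, row, x, y, w_n, h_n) with x, y, w_n, h_n in [0, 1].
    """
    cell_w, cell_h = img_w / S, img_h / S
    col, row = int(cx // cell_w), int(cy // cell_h)  # responsible grid cell
    x = cx / cell_w - col   # center offset within the cell
    y = cy / cell_h - row
    return col, row, x, y, w / img_w, h / img_h

# A box centered at (224, 224) in a 448x448 image falls in cell (3, 3)
# with offsets (0.5, 0.5), i.e. exactly at the cell's center.
print(encode_box(224, 224, 100, 50, 448, 448))
```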

Requirements
==============
- python 3.5
- tensorflow 1.4.0
- pillow
- numpy
- scipy
- Pretrained VGG16 (Google Drive): https://drive.google.com/open?id=1LTptCY96ABAUlJHUJq6MhqNrDQN7JfQP
- Dataset (Pascal VOC 2007): https://pjreddie.com/media/files/VOCtrainval_06-Nov-2007.tar
Results
==============
Reference
==============
[1] Redmon, Joseph, et al. "You Only Look Once: Unified, Real-Time Object Detection." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[2] Redmon, Joseph, and Ali Farhadi. "YOLO9000: Better, Faster, Stronger." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7263-7271.







