Fully-convolutional-neural-network-FCN-for-semantic-segmentation-Tensorflow-implementation

Augment=True fails

Open JeffOnGIT opened this issue 1 year ago • 0 comments

Hi, and thanks for the excellent work you have done on YOLOv8.

I'm trying to understand the difference between training, fine-tuning and transfer learning, and I want to use YOLOv8 as the vehicle for sharing this journey with others.

I can successfully train my model from scratch with YOLOv8 (both n and x), using both the CLI and Python (preferred):

    model = YOLO("yolov8n.pt")
    model.train(data='Loki4.yaml', augment=False, epochs=20, imgsz=640, name='yolov8n_Loki4_20')

I can start fine-tuning by loading pretrained weights prior to training:

    model = YOLO("yolov8n.yaml").load("yolov8n.pt")
    model.train(data='Loki4.yaml', augment=False, epochs=20, imgsz=640, name='yolov8n_Loki4_20')

However, I cannot find a working syntax for transfer learning proper (loading the weights and redefining the classes does work). Normally I would refactor the output layer(s) and freeze the rest of the model to preserve the learned features before training. I found some YOLOv5 discussion on how to freeze layers, and a more recent callback approach, but I am not sure where the freezing should stop:

    model.add_callback("on_train_start", freeze_layer)  # suggested 10 layers
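In case it helps, this is roughly the freeze callback I have in mind (just a sketch; the `freeze_layer` name and the choice of 10 top-level modules are my assumptions, adapted from the linked discussions):

```python
from ultralytics import YOLO

def freeze_layer(trainer):
    # Freeze the first num_freeze top-level modules (model.0 ... model.9),
    # i.e. the backbone in the model summary further down.
    num_freeze = 10  # assumption: how far into the network to freeze
    freeze = [f"model.{x}." for x in range(num_freeze)]
    for name, param in trainer.model.named_parameters():
        param.requires_grad = True  # train everything by default
        if any(prefix in name for prefix in freeze):
            print(f"freezing {name}")
            param.requires_grad = False
    print(f"{num_freeze} modules frozen.")

model = YOLO("yolov8n.pt")
model.add_callback("on_train_start", freeze_layer)
model.train(data="Loki4.yaml", augment=False, epochs=20, imgsz=640, name="yolov8n_Loki4_20")
```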

Looking at the GitHub issue (https://github.com/ultralytics/ultralytics/issues/189), you would freeze the backbone (P1 and P2) and modify the head. I also found information that there are 168 layers in yolov8n and 268 in yolov8x. You suggested freezing only 3-5 layers; can you expand a little to cover both ends of the architecture?
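To make the question concrete, I have been inspecting the top-level modules to work out where the backbone ends and the head begins (a sketch; reading the model summary below, my assumption is that modules 0-9 are the backbone, 10-21 the neck, and 22 the Detect head):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# Walk the top-level nn.Sequential so the indices line up with the
# 'from  n  params  module  arguments' summary printed at train time.
for i, m in enumerate(model.model.model):
    print(i, m.__class__.__name__)
```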

Finally, I ran into an issue when trying to enable augmentation. This is likely my syntax, so I reverted to the coco128 dataset, but the error persists.
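The relevant part of train_coco128TL.py looks roughly like this (reconstructed from the trainer arguments in the log below; only augment=True differs from the runs that train cleanly):

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model.train(
    data="./coco128/coco128TL.yaml",
    epochs=3,
    batch=8,
    imgsz=640,
    augment=True,   # with augment=False the same call completes without error
    name="yolov8n_coco128",
)
```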

(fiftyone) PS D:\datasets> python .\train_coco128TL.py
New https://pypi.org/project/ultralytics/8.0.118 available  Update with 'pip install -U ultralytics'
Ultralytics YOLOv8.0.114  Python-3.9.16 torch-2.0.1+cu117 CUDA:0 (Quadro RTX 5000 with Max-Q Design, 16384MiB)
yolo\engine\trainer: task=detect, mode=train, model=yolov8n.pt, data=./coco128/coco128TL.yaml, epochs=3, patience=50, batch=8, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=yolov8n_coco128, exist_ok=False, pretrained=False, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=0, resume=False, amp=True, fraction=1.0, profile=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, show=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, vid_stride=1, line_width=None, visualize=False, augment=True, agnostic_nms=False, classes=None, retina_masks=False, boxes=True, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0, cfg=None, v5loader=False, tracker=botsort.yaml, save_dir=runs\detect\yolov8n_coco1285

                   from  n    params  module                                       arguments
  0                  -1  1       464  ultralytics.nn.modules.conv.Conv             [3, 16, 3, 2]
  1                  -1  1      4672  ultralytics.nn.modules.conv.Conv             [16, 32, 3, 2]
  2                  -1  1      7360  ultralytics.nn.modules.block.C2f             [32, 32, 1, True]
  3                  -1  1     18560  ultralytics.nn.modules.conv.Conv             [32, 64, 3, 2]
  4                  -1  2     49664  ultralytics.nn.modules.block.C2f             [64, 64, 2, True]
  5                  -1  1     73984  ultralytics.nn.modules.conv.Conv             [64, 128, 3, 2]
  6                  -1  2    197632  ultralytics.nn.modules.block.C2f             [128, 128, 2, True]
  7                  -1  1    295424  ultralytics.nn.modules.conv.Conv             [128, 256, 3, 2]
  8                  -1  1    460288  ultralytics.nn.modules.block.C2f             [256, 256, 1, True]
  9                  -1  1    164608  ultralytics.nn.modules.block.SPPF            [256, 256, 5]
 10                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 11             [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 12                  -1  1    148224  ultralytics.nn.modules.block.C2f             [384, 128, 1]
 13                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 14             [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 15                  -1  1     37248  ultralytics.nn.modules.block.C2f             [192, 64, 1]
 16                  -1  1     36992  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2]
 17            [-1, 12]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 18                  -1  1    123648  ultralytics.nn.modules.block.C2f             [192, 128, 1]
 19                  -1  1    147712  ultralytics.nn.modules.conv.Conv             [128, 128, 3, 2]
 20             [-1, 9]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 21                  -1  1    493056  ultralytics.nn.modules.block.C2f             [384, 256, 1]
 22        [15, 18, 21]  1    897664  ultralytics.nn.modules.head.Detect           [80, [64, 128, 256]]
Model summary: 225 layers, 3157200 parameters, 3157184 gradients

Transferred 355/355 items from pretrained weights
AMP: running Automatic Mixed Precision (AMP) checks with YOLOv8n...
AMP: checks passed
train: Scanning D:\datasets\coco128\labels\train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 [00:00<?, ?it/s]
val: Scanning D:\datasets\coco128\labels\train2017.cache... 126 images, 2 backgrounds, 0 corrupt: 100%|██████████| 128/128 [00:00<?, ?it/s]
Plotting labels to runs\detect\yolov8n_coco1285\labels.jpg...
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 57 weight(decay=0.0), 64 weight(decay=0.0005), 63 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs\detect\yolov8n_coco1285
Starting training for 3 epochs...

  Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
    1/3      1.46G      1.206      1.793       1.27        127        640: 100%|██████████| 16/16 [00:02<00:00,  7.85it/s]
             Class     Images  Instances      Box(P          R      mAP50  mAP50-95):   0%|          | 0/8 [00:00<?, ?it/s]

Traceback (most recent call last):
  File "D:\datasets\train_coco128TL.py", line 11, in <module>
    results = model.train(
  File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\ultralytics\yolo\engine\model.py", line 371, in train
    self.trainer.train()
  File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\ultralytics\yolo\engine\trainer.py", line 192, in train
    self._do_train(world_size)
  File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\ultralytics\yolo\engine\trainer.py", line 370, in _do_train
    self.metrics, self.fitness = self.validate()
  File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\ultralytics\yolo\engine\trainer.py", line 476, in validate
    metrics = self.validator(self)
  File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\ultralytics\yolo\engine\validator.py", line 165, in __call__
    self.loss += model.loss(batch, preds)[1]
  File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\ultralytics\nn\tasks.py", line 213, in loss
    return self.criterion(self.predict(batch['img']) if preds is None else preds, batch)
  File "C:\ProgramData\Anaconda3\envs\fiftyone\lib\site-packages\ultralytics\yolo\utils\loss.py", line 135, in __call__
    pred_distri, pred_scores = torch.cat([xi.view(feats[0].shape[0], self.no, -1) for xi in feats], 2).split(
TypeError: 'NoneType' object is not iterable

I appended the same parameters from the end of this Loki4.yaml file to the coco128.yaml file :-)

# COCO128 dataset https://www.kaggle.com/ultralytics/coco128 (first 128 images from COCO train2017) by Ultralytics
# Example usage: python train.py --data coco128.yaml
# parent
# ├── yolov5
# └── datasets
#     └── coco128  ← downloads here (7 MB)
#
# Set data root in C:\Users\RICT\AppData\Roaming\Ultralytics\settings.yaml
#
#
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: Loki4           # dataset root dir
train: train/images  # train images (relative to 'path') 128 images
val: test/images     # val images (relative to 'path') 128 images
#test:               # test images (optional)

nc: 59

# Classes
names: ['test','person','person-running','person-sitting','person-lying','head','hat','shirt','pants','sunglasses','ambulance','police','metro-fire', 'country-fire','SES','bouy','ship','fishing-boat','jetty','stingray','bullshark', 'pontoon','dolphin','whale','seal','gallantry','helicopter','crowd','road-signs','truck','rubbish-truck','passenger-vehicle','sea-rescue','jet-boat','inflatable-toy','breakwater','yacht','jet-ski','irb','sled','bus','train','taxis','ferry','kayak','beach-sign','beach-flag','surfbaord','background','uav','mobile-crane','commercial-crane','slsc-vehicles','speed-boat','sail-boat','peir','tv','cone','atv']

# Hyperparameters ------------------------------------------------------------------------------------------------------
# lr0: 0.01  # initial learning rate (i.e. SGD=1E-2, Adam=1E-3)
# lrf: 0.01  # final learning rate (lr0 * lrf)
# momentum: 0.937  # SGD momentum/Adam beta1
# weight_decay: 0.0005  # optimizer weight decay 5e-4
# warmup_epochs: 3.0  # warmup epochs (fractions ok)
# warmup_momentum: 0.8  # warmup initial momentum
# warmup_bias_lr: 0.1  # warmup initial bias lr
# box: 7.5  # box loss gain
# cls: 0.5  # cls loss gain (scale with pixels)
# dfl: 1.5  # dfl loss gain
# pose: 12.0  # pose loss gain
# kobj: 1.0  # keypoint obj loss gain
# label_smoothing: 0.0  # label smoothing (fraction)
# nbs: 64  # nominal batch size
hsv_h: 0.015  # image HSV-Hue augmentation (fraction)
hsv_s: 0.7  # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4  # image HSV-Value augmentation (fraction)
degrees: 5.0  # image rotation (+/- deg)
translate: 0.1  # image translation (+/- fraction)
scale: 0.2  # image scale (+/- gain)
shear: 0.05  # image shear (+/- deg) from -0.5 to 0.5
perspective: 0.1  # image perspective (+/- fraction), range 0-0.001
flipud: 0.3  # image flip up-down (probability)
fliplr: 0.5  # image flip left-right (probability)
mosaic: 0.3  # image mosaic (probability)
mixup: 0.1  # image mixup (probability)
# copy_paste: 0.0  # segment copy-paste (probability)

JeffOnGIT · Jun 19 '23 10:06