
Custom Image Preprocessing

Open soobin508 opened this issue 1 year ago • 36 comments

Hi, I need help implementing my own image preprocessing techniques before training and inference. For training, I only found that "get_train_aug" consists of Albumentations transforms, and it is passed directly to "create_train_dataset". However, my techniques will be in cv2 format.

Please give me advice on where I can modify the code. Thank you!

My sample techniques will look like this:

    img = cv2.imread(os.path.join(img_dir, image))
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l_channel, a, b = cv2.split(lab)

soobin508 avatar Jan 30 '24 13:01 soobin508

Hi. The best place to make these changes would be in the datasets.py file. Also, none of the augmentations will be applied by default unless you pass the --use-train-aug argument to the train.py file.

sovit-123 avatar Jan 31 '24 10:01 sovit-123

Okay, thank you! I have another question. If I want to achieve both high accuracy and high FPS for traffic sign recognition, which pretrained backbone is best? Based on my observation, ResNet50 achieves high accuracy, but the FPS is not as expected.

soobin508 avatar Jan 31 '24 16:01 soobin508

You can try Faster RCNN MobileNet V3 model.

sovit-123 avatar Feb 01 '24 01:02 sovit-123

Okay, thanks! By any chance, do you know what's wrong with this? I checked online; it seems to be something related to the gradient/learning rate and the optimizer.

    Epoch: [6] [2700/7446] eta: 0:08:01 lr: 0.001000 loss: 0.2247 (0.2497) loss_classifier: 0.0683 (0.0744) loss_box_reg: 0.1534 (0.1739) loss_objectness: 0.0001 (0.0002) loss_rpn_box_reg: 0.0011 (0.0013) time: 0.0961 data: 0.0086 max mem: 1620
    Epoch: [6] [2800/7446] eta: 0:07:50 lr: 0.001000 loss: 0.2471 (0.2496) loss_classifier: 0.0691 (0.0745) loss_box_reg: 0.1500 (0.1736) loss_objectness: 0.0001 (0.0002) loss_rpn_box_reg: 0.0012 (0.0013) time: 0.0953 data: 0.0031 max mem: 1620
    Loss is nan, stopping training
    {'loss_classifier': tensor(nan, device='cuda:0', grad_fn=<NllLossBackward0>), 'loss_box_reg': tensor(nan, device='cuda:0', grad_fn=<DivBackward0>), 'loss_objectness': tensor(nan, device='cuda:0', grad_fn=<BinaryCrossEntropyWithLogitsBackward0>), 'loss_rpn_box_reg': tensor(nan, device='cuda:0', grad_fn=<DivBackward0>)}

soobin508 avatar Feb 01 '24 07:02 soobin508

Try lowering the initial learning rate to 0.00001.

sovit-123 avatar Feb 01 '24 09:02 sovit-123
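Besides lowering the learning rate, gradient clipping is a commonly used guard against exploding gradients that produce NaN losses. This is a generic training-step sketch with a stand-in model, not code from this repository:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for the detection model
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
criterion = nn.MSELoss()

inputs = torch.randn(4, 10)
targets = torch.randn(4, 2)

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
# Clip the global gradient norm before the optimizer step so a single
# bad batch cannot blow up the weights.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
optimizer.step()
```

In the repository's training loop, the clip call would go between loss.backward() and optimizer.step(); the max_norm value here is just an example.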

I tried with the new lr; it shows the same error.

    Epoch: [6] [2800/7446] eta: 0:07:49 lr: 0.000010 loss: 0.3495 (0.4442) loss_classifier: 0.1362 (0.2124) loss_box_reg: 0.2011 (0.2273) loss_objectness: 0.0000 (0.0003) loss_rpn_box_reg: 0.0039 (0.0043) time: 0.0961 data: 0.0044 max mem: 1622
    Loss is nan, stopping training
    {'loss_classifier': tensor(nan, device='cuda:0', grad_fn=<NllLossBackward0>), 'loss_box_reg': tensor(nan, device='cuda:0', grad_fn=<DivBackward0>), 'loss_objectness': tensor(nan, device='cuda:0', grad_fn=<BinaryCrossEntropyWithLogitsBackward0>), 'loss_rpn_box_reg': tensor(nan, device='cuda:0', grad_fn=<DivBackward0>)}

soobin508 avatar Feb 01 '24 12:02 soobin508

Hi, this generally does not happen. This is one of the first times I have seen this issue. Were you able to solve the issue?

sovit-123 avatar Feb 02 '24 00:02 sovit-123

Unfortunately no, but this error only happens with MobileNet. I tried other backbones with the same parameters, and they all work fine.

soobin508 avatar Feb 02 '24 02:02 soobin508

Ok. May I know your PyTorch version? I will try to debug deeper this weekend. Also, if possible, is there a link to your dataset that I can use for debugging?

sovit-123 avatar Feb 02 '24 03:02 sovit-123

My PyTorch version is 2.1.0 and torchvision is 0.16.0. The dataset used is GTSRB. I created the dataset using the XML creation approach you used in your previous example.

soobin508 avatar Feb 02 '24 07:02 soobin508

Hi, may I ask what kind of hardware you used to run the inference? I have tried an NVIDIA Jetson Xavier to run the ONNX inference code, and the FPS only reaches a max of 7. The pretrained backbone I used is mini DarkNet with a nano head. Also, I used cv2.VideoCapture() for live detection; is that a factor affecting the FPS?

soobin508 avatar Feb 07 '24 03:02 soobin508

I compute the FPS on the forward pass only. So, the cv2 functions won't affect it. Faster RCNN, although accurate, struggles a bit to give high FPS on edge devices. I am trying to optimize the pipeline even more.

sovit-123 avatar Feb 07 '24 06:02 sovit-123
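A forward-pass-only FPS measurement, as described above, can be sketched like this. The model here is a stand-in; in the real pipeline the model and input would come from the repository's inference script:

```python
import time
import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, kernel_size=3, padding=1)  # stand-in for the detector
model.eval()
image = torch.randn(1, 3, 224, 224)

# Warm-up pass so one-time setup cost is not counted.
with torch.no_grad():
    model(image)

n_runs = 10
start = time.perf_counter()
with torch.no_grad():
    for _ in range(n_runs):
        model(image)
        if torch.cuda.is_available():
            # CUDA kernels are asynchronous; wait for them before timing.
            torch.cuda.synchronize()
elapsed = time.perf_counter() - start
fps = n_runs / elapsed
```

Because only the forward pass sits inside the timed region, cv2.VideoCapture() reads, drawing, and display do not affect the reported FPS, though they do affect the end-to-end frame rate the user actually sees.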

Alright, thank you! Can I ask how to improve the training accuracy? Currently I tested on GTSDB with MobileNet, but the accuracy is only 23 after 250 epochs...

soobin508 avatar Feb 07 '24 14:02 soobin508

I think you can use the Faster RCNN ResNet50 FPN V2 model. It works very well for small objects.

sovit-123 avatar Feb 07 '24 17:02 sovit-123

Hi, I tried with ResNet50 FPN V2; the best mAP only reaches 51% @0.5:0.95 and 65% @0.5. May I know what else I can do to increase the accuracy? So far I have tried applying the image preprocessing techniques, but the improvement is not significant.

soobin508 avatar Feb 19 '24 16:02 soobin508

May I know the dataset?

sovit-123 avatar Feb 19 '24 16:02 sovit-123

GTSDB.

soobin508 avatar Feb 19 '24 23:02 soobin508

Have you tried with the default code in the repository without additional changes to the image processing techniques? If so, can you please let me know the result?

Also, for GTSDB, training with higher resolution images will help a lot.

sovit-123 avatar Feb 20 '24 00:02 sovit-123

Yes, I tried with the default code without any additional changes; the best result is 51% @0.5:0.95 and 65% @0.5. My aim is to reach an accuracy of 90%+; is that possible?

Does it mean that I have to modify the resolution of the GTSDB images using online tools? I just downloaded the dataset from the website as-is.

soobin508 avatar Feb 20 '24 07:02 soobin508

You can use the --imgsz command line argument to control the image size.

sovit-123 avatar Feb 20 '24 09:02 sovit-123

Besides --imgsz, are there other factors affecting the accuracy? ><

soobin508 avatar Feb 20 '24 09:02 soobin508

There are a lot, actually. I recommend going through all the command line arguments in the train.py file, especially --mosaic and --use-train-aug.

sovit-123 avatar Feb 20 '24 11:02 sovit-123

Hi Sovit. I would like to implement F1-score and recall as the metrics. May I ask where I can make the modification?

soobin508 avatar Feb 26 '24 10:02 soobin508

I think creating a separate metrics.py inside the utils directory will be the best approach.

sovit-123 avatar Feb 26 '24 11:02 sovit-123
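A minimal sketch of what such a utils/metrics.py could contain — a hypothetical helper (not from the repository) that computes precision, recall, and F1 at a fixed IoU threshold by greedily matching predicted boxes to ground-truth boxes:

```python
import torch

def box_iou(boxes1: torch.Tensor, boxes2: torch.Tensor) -> torch.Tensor:
    """Pairwise IoU for boxes in (x1, y1, x2, y2) format."""
    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])
    lt = torch.max(boxes1[:, None, :2], boxes2[None, :, :2])  # intersection top-left
    rb = torch.min(boxes1[:, None, 2:], boxes2[None, :, 2:])  # intersection bottom-right
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area1[:, None] + area2[None, :] - inter)

def detection_prf1(pred_boxes, gt_boxes, iou_thresh=0.5):
    """Precision/recall/F1 via greedy one-to-one IoU matching."""
    if len(pred_boxes) == 0 or len(gt_boxes) == 0:
        return 0.0, 0.0, 0.0
    iou = box_iou(pred_boxes, gt_boxes)
    matched_gt = set()
    tp = 0
    for p in range(iou.shape[0]):
        best_iou, best_gt = iou[p].max(dim=0)
        # Count a true positive only for an unmatched GT box above the threshold.
        if best_iou >= iou_thresh and best_gt.item() not in matched_gt:
            matched_gt.add(best_gt.item())
            tp += 1
    precision = tp / len(pred_boxes)
    recall = tp / len(gt_boxes)
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1
```

This version ignores class labels and confidence-score sorting for brevity; a per-class variant would filter pred_boxes and gt_boxes by label before matching.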

Does it mean that metrics.py has to replace the evaluate function in train.py?

soobin508 avatar Feb 26 '24 13:02 soobin508

Oh, if you wish to add that only to the evaluation part after training, then you can directly modify the eval.py script in the root directory.

sovit-123 avatar Feb 26 '24 15:02 sovit-123

I tried to add torchmetrics Precision inside eval.py. However, there are many errors associated with it.

  1. I added prec = Precision(preds, target). It said AttributeError: 'list' object has no attribute 'replace'.
  2. I made a list for "labels" and then converted it back to a tensor, but it said AttributeError: 'Tensor' object has no attribute 'replace'.

May I know how to solve this issue?

soobin508 avatar Feb 27 '24 14:02 soobin508

It is basically saying that the list and tensor data types do not have a replace attribute, i.e. a string method is being called on your data somewhere. String types do have replace, but a string is not an appropriate data type for solving this. One likely cause is passing preds and target straight into the Precision constructor, which expects configuration arguments, instead of calling the constructed metric object on the data.

sovit-123 avatar Feb 28 '24 00:02 sovit-123

What would be the best way to include the metrics? I'm using the same target and preds you have been using.

soobin508 avatar Feb 28 '24 01:02 soobin508

I need to take a look at how to structure the additional changes.

sovit-123 avatar Feb 28 '24 10:02 sovit-123