GreenHandForCoding

Results 10 comments of GreenHandForCoding

Thanks! Here is another question: if I put some pictures of products without flaws and the same quantity of empty labels in the training data, will it improve the performance of...

> > Thanks! Here is another question: if I put some pictures of products without flaws and the same quantity of empty labels in the training data, will it improve the...

![image](https://user-images.githubusercontent.com/106043395/200733600-341e7fbc-6c15-4f29-9ab7-48d15439f5d4.png) ![image](https://user-images.githubusercontent.com/106043395/200733645-10478ef2-028f-4418-8893-a98a680e76ee.png) ![image](https://user-images.githubusercontent.com/106043395/200734003-fafbdd31-cae0-44ce-be94-b14d5f934f6e.png) Here are some graphs of the training process. I think they indicate overfitting. I trained on about 600 pictures. Maybe there is too little training data, or the characteristics...
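As a side note, the "overfitting" reading of such curves is usually: training loss keeps falling while validation loss starts rising. A minimal stand-alone sketch of that check (the function name, window size, and sample values below are illustrative, not anything from YOLOv5):

```python
def looks_overfit(train_loss, val_loss, window=3):
    """Flag likely overfitting: train loss falling while val loss rises.

    Compares the sum of the last `window` values against the first
    `window` values of each curve -- a crude but readable heuristic.
    """
    if len(train_loss) < 2 * window or len(val_loss) < 2 * window:
        return False  # not enough epochs to judge
    train_falling = sum(train_loss[-window:]) < sum(train_loss[:window])
    val_rising = sum(val_loss[-window:]) > sum(val_loss[:window])
    return train_falling and val_rising

# Illustrative curves: train keeps improving, val turns around -> overfit.
train = [1.0, 0.8, 0.6, 0.5, 0.4, 0.3]
val = [1.0, 0.9, 0.85, 0.9, 1.0, 1.1]
print(looks_overfit(train, val))  # True
```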

Hey! Thank you, and here are the two annotated images. @ExtReMLapin ![train_batch0](https://user-images.githubusercontent.com/106043395/203249497-28d4845b-a00d-4364-8517-d524f95da7c0.jpg) ![train_batch1](https://user-images.githubusercontent.com/106043395/203249513-74117c65-79e8-4d81-8632-73c33d0f176d.jpg)

Training settings:

```python
def parse_opt(known=False):
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default=ROOT / 'yolov5l.pt', help='initial weights path')
    parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
    parser.add_argument('--data', type=str, default=ROOT / 'data/flaw.yaml', help='dataset.yaml path')
    parser.add_argument('--hyp', type=str, default=ROOT...
```
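For context, a self-contained sketch of the same `parse_opt` argparse pattern (the `ROOT`, defaults, and argument subset here are illustrative stand-ins, not the exact training config):

```python
import argparse
from pathlib import Path

ROOT = Path('.')  # hypothetical project root

def parse_opt(known=False):
    parser = argparse.ArgumentParser()
    parser.add_argument('--weights', type=str, default=str(ROOT / 'yolov5l.pt'), help='initial weights path')
    parser.add_argument('--data', type=str, default=str(ROOT / 'data/flaw.yaml'), help='dataset.yaml path')
    parser.add_argument('--epochs', type=int, default=600, help='total training epochs')
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    # known=True ignores unrecognized CLI args instead of erroring out
    return parser.parse_known_args()[0] if known else parser.parse_args()

opt = parse_opt(known=True)
print(opt.epochs)  # 600 unless --epochs is passed on the command line
```

The `known=False` switch is the usual reason this helper exists: it lets other scripts import and call the parser without tripping over their own command-line flags.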

@ExtReMLapin What? The code you have seen is the one I downloaded from the official website, and it's Python. I'll try the P2 model. I'd appreciate your suggestion.

@ExtReMLapin Here is information from the console, and it's training. By the way, do you have any advice for my later training? Such as: setting rect = True...
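On `rect = True`: in YOLOv5 this enables rectangular training, where images in a batch share a rectangle close to their aspect ratio instead of being padded to a square, cutting wasted padding. A simplified stand-alone sketch of the idea (names and values are illustrative, not YOLOv5's exact code):

```python
import math

def batch_shape(aspect_ratios, img_size=640, stride=32):
    """Pick one H x W for a batch from its min/max aspect ratio (h/w).

    Wide batches get a reduced height, tall batches a reduced width;
    both are rounded up to a multiple of the model stride.
    """
    ar_min, ar_max = min(aspect_ratios), max(aspect_ratios)
    shape = [1, 1]
    if ar_max < 1:            # all images wide: shrink height
        shape = [ar_max, 1]
    elif ar_min > 1:          # all images tall: shrink width
        shape = [1, 1 / ar_min]
    h = math.ceil(shape[0] * img_size / stride) * stride
    w = math.ceil(shape[1] * img_size / stride) * stride
    return h, w

print(batch_shape([0.5, 0.6]))  # wide batch -> (384, 640) instead of (640, 640)
```

Note that rectangular batches require images of similar shape to be grouped together, which is why YOLOv5 disables dataloader shuffling when `rect` is on.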

results:
![results](https://user-images.githubusercontent.com/106043395/205185889-1ff044a8-c822-4ecd-b818-a803843a0609.png)
P_curve:
![P_curve](https://user-images.githubusercontent.com/106043395/205185939-59ff551d-7025-4fd8-8a50-e59f96249b52.png)
PR_curve:
![PR_curve](https://user-images.githubusercontent.com/106043395/205185944-af31ab37-9be4-4879-aa8a-916cd0f9cd42.png)
R_curve:
![R_curve](https://user-images.githubusercontent.com/106043395/205185957-a1eed5bc-8103-4887-a206-bc2837fa9ca1.png)
train_batch0:
![train_batch0](https://user-images.githubusercontent.com/106043395/205186002-cbf53490-e6cc-4422-a47f-e491489e7431.jpg)
val_batch0:
![val_batch0_labels](https://user-images.githubusercontent.com/106043395/205186015-c774a48f-93c9-4b9c-ac06-fd225b94aceb.jpg)
@ExtReMLapin I've shown you some images. Tell me if you need more. Thank you.

@ExtReMLapin OK, understood. I will set 600 epochs and see what happens. But the precision and recall dropping sharply from 0.8+ to below 0.6 at the beginning makes me anxious.
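When committing to a long run like 600 epochs, it is common to pair it with early stopping; recent YOLOv5 versions expose a `--patience` flag for this. Below is a stand-alone illustration of the patience idea, not YOLOv5's actual implementation (the function and sample fitness values are hypothetical):

```python
def should_stop(fitness_history, patience=100):
    """Stop when the best epoch is more than `patience` epochs in the past.

    `fitness_history` holds one validation-fitness value per epoch,
    higher being better.
    """
    if not fitness_history:
        return False
    best_epoch = fitness_history.index(max(fitness_history))
    return (len(fitness_history) - 1) - best_epoch >= patience

# Illustrative run: fitness peaked at epoch 0, no recovery for 3 epochs.
history = [0.82, 0.60, 0.55, 0.58]
print(should_stop(history, patience=2))  # True
```

This kind of guard makes a large epoch budget safe: training ends when validation fitness stops improving, rather than running all 600 epochs into overfitting.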

@ExtReMLapin ![results](https://user-images.githubusercontent.com/106043395/205774247-cfe49ef4-f806-48ae-959c-bdb4a49efd5f.png) The result is not good...