s18的故事

Results: 11 comments of s18的故事

> Sure, that's completely fine, I can add support for it in the demo.

In image_demo I found `sv.DetectionDataset(classes=texts, images=images_dict, annotations=annotations_dict).as_yolo(annotations_directory_path=ANNOTATIONS_DIRECTORY, min_image_area_percentage=MIN_IMAGE_AREA_PERCENTAGE, max_image_area_percentage=MAX_IMAGE_AREA_PERCENTAGE, approximation_percentage=APPROXIMATION_PERCENTAGE)`. I specified the directory and no error is raised, but it only creates the folder and never saves the annotations.
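For reference, here is a minimal sketch of that export, assuming the older dict-based supervision API that the snippet above uses (roughly supervision < 0.20); `image.jpg`, `./yolo_labels`, and the sample box are placeholders, not anything confirmed in the thread.

```python
# Minimal sketch, assuming the dict-based supervision API; all names are placeholders.
import os
import numpy as np
import supervision as sv

ANNOTATIONS_DIRECTORY = "./yolo_labels"

texts = ["person", "dog"]
image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in image array
detections = sv.Detections(
    xyxy=np.array([[10.0, 20.0, 100.0, 200.0]], dtype=float),
    class_id=np.array([0]),
)

# Worth checking first: annotations_dict should be keyed by the same image names
# as images_dict and contain non-empty sv.Detections, otherwise there is nothing
# for the exporter to write into the new folder.
images_dict = {"image.jpg": image}
annotations_dict = {"image.jpg": detections}

sv.DetectionDataset(
    classes=texts,
    images=images_dict,
    annotations=annotations_dict,
).as_yolo(
    annotations_directory_path=ANNOTATIONS_DIRECTORY,
    min_image_area_percentage=0.0,
    max_image_area_percentage=1.0,
    approximation_percentage=0.0,
)

print(os.listdir(ANNOTATIONS_DIRECTORY))   # expect one .txt label file per image
```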

> 03/13 00:47:06 - mmengine - ERROR - /usr/local/lib/python3.8/dist-packages/mmdet/evaluation/metrics/coco_metric.py - compute_metrics - 465 - The testing results of the whole dataset is empty.

May I ask where the `data_prefix=dict(img='val2017/')` in your config comes from? Does it refer to the `data/coco/val2017` folder? When I use my own dataset path instead, I get an IsADirectoryError.
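For context, a hedged sketch of the MMDetection 3.x dataset block this question is about, using the standard COCO layout; your own dataset paths would replace `data/coco/` and the two entries below, and the pipeline/sampler fields are omitted here for brevity.

```python
# Hedged, abbreviated config sketch (MMDetection 3.x style); paths are the
# standard COCO layout and are placeholders for a custom dataset.
data_root = 'data/coco/'

val_dataloader = dict(
    batch_size=1,
    num_workers=2,
    dataset=dict(
        type='CocoDataset',
        data_root=data_root,
        # ann_file must point at the annotation JSON *file*, relative to data_root;
        # pointing it at a folder is one way to hit IsADirectoryError.
        ann_file='annotations/instances_val2017.json',
        # data_prefix.img is the image *directory* relative to data_root,
        # so img='val2017/' resolves to data/coco/val2017/.
        data_prefix=dict(img='val2017/'),
        test_mode=True,
    ))
```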

> The problem is still not solved, so I reopened this to describe it in detail. The dataset is in COCO format with five classes, and these two losses are 0 from the very start: loss_bbox: 0.0000, loss_dfl: 0.0000 (the two attached screenshot links have expired).

![IMG_1997](https://github.com/AILab-CVC/YOLO-World/assets/90667614/9a76f80c-91b1-4941-8c7e-0ab08eb5e804)

Bro, try modifying metainfo, as in my screenshot above.
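A hedged sketch of what "modify metainfo" usually looks like in an MMDetection/MMYOLO-style config; the five class names, the paths, and the dataset type are placeholders and assumptions, not taken from the thread. If metainfo stays at the default 80 COCO classes while the annotations use 5 custom categories, the ground-truth boxes can be filtered out, which is one way loss_bbox and loss_dfl end up stuck at 0.

```python
# Hedged sketch; class names, paths, and dataset type are placeholders.
metainfo = dict(classes=('class_a', 'class_b', 'class_c', 'class_d', 'class_e'))

train_dataloader = dict(
    dataset=dict(
        type='YOLOv5CocoDataset',            # assumption: MMYOLO-style dataset type
        metainfo=metainfo,                   # without this, the default COCO classes are kept
        data_root='data/custom/',            # and custom-class GT boxes may be filtered out,
        ann_file='annotations/train.json',   # leaving loss_bbox / loss_dfl at 0.0
        data_prefix=dict(img='train/'),
    ))
val_dataloader = dict(dataset=dict(metainfo=metainfo))
```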

> @KDgggg Bro, I changed it and it still doesn't work. Could you share an email so we can go into the details? ![IMG_2001](https://github.com/AILab-CVC/YOLO-World/assets/90667614/52a90259-da85-45fc-b84e-4c0ddd0dae14) ![IMG_2002](https://github.com/AILab-CVC/YOLO-World/assets/90667614/d9160a3a-f63e-49a8-8144-63b95feb21be) ![IMG_2003](https://github.com/AILab-CVC/YOLO-World/assets/90667614/431e6813-7599-4ff8-928e-f70c624684cd) ![IMG_2004](https://github.com/AILab-CVC/YOLO-World/assets/90667614/41a4a074-b6e9-4ab1-98c5-97f792e3c747) ![IMG_2006](https://github.com/AILab-CVC/YOLO-World/assets/90667614/d2888a9e-4024-4feb-94bd-ee39eeb7ccea) This is my config.

I'd suggest checking whether the model structure matches the checkpoint you are loading, and whether the order of the annotated categories is consistent with the order of the texts.
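To make the second suggestion concrete, here is a hedged sketch of a quick order check between the COCO categories and the class texts; both file paths and the nested-list text format are assumptions about a typical YOLO-World fine-tuning setup, not something confirmed in the thread.

```python
# Hedged consistency check; both paths are placeholders.
import json

with open('annotations/train.json') as f:
    coco = json.load(f)
with open('data/texts/custom_class_texts.json') as f:
    texts = json.load(f)   # assumed format: [["class_a"], ["class_b"], ...]

# COCO categories sorted by id should line up, index by index, with the texts.
coco_names = [c['name'] for c in sorted(coco['categories'], key=lambda c: c['id'])]
text_names = [t[0] for t in texts]

for i, (a, b) in enumerate(zip(coco_names, text_names)):
    status = 'OK' if a == b else 'MISMATCH'
    print(f'{i}: coco={a!r}  text={b!r}  {status}')
if len(coco_names) != len(text_names):
    print(f'length mismatch: {len(coco_names)} categories vs {len(text_names)} texts')
```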

> Hello, during training I found that MM-Grounding DINO's GPU memory keeps increasing. I understand that RandomChoiceResize may be the cause, but why would a 3090's 24 GB overflow with batch_size=2 and memory=12866? I see the GPU memory keep growing over the course of training; what could be the reason?

Did you ever solve this? I'm on 8 Tesla V100 SXM2 32 GB cards and also get out-of-memory errors.
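If the multi-scale resize really is the culprit, one hedged workaround is to cap the largest scales in the training pipeline. The pipeline below is illustrative only (it omits the grounding-specific text transforms), and the scale list is an assumption rather than the MM-Grounding DINO default; because a different scale is sampled every iteration, the true peak memory only shows up once the largest scales are drawn, which can look like steadily growing usage before the OOM.

```python
# Hedged sketch: capping the multi-scale resize in a DINO-style train_pipeline.
# The transforms and scale list are illustrative, not the repo's exact defaults.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='RandomFlip', prob=0.5),
    dict(
        type='RandomChoiceResize',
        # keep only the smaller scales to bound peak activation memory
        scales=[(480, 1333), (512, 1333), (544, 1333), (576, 1333), (608, 1333)],
        keep_ratio=True),
    dict(type='PackDetInputs'),
]
```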

> Hello, > > thank you for your interest in our work. Our work is a training-free approach which utilizes pre-trained foundational models to complement Mask-RCNN. > > To use...

> I ran into the same problem. Did you manage to solve it? (the attached screenshot link has expired)

I used the swin-l pretrained-all checkpoint, the one with the highest accuracy, and the config file has to be modified accordingly; my mistake was exactly that I hadn't changed the model section of the config.
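A hedged sketch of what "the model section of the config has to match the checkpoint" can mean here: a Swin-L checkpoint needs a Swin-L backbone definition. The checkpoint path is a placeholder, the settings below are only the obvious size-related ones, and in practice the simplest fix is to use the Swin-L config that ships with the repo.

```python
# Hedged, partial config sketch; the checkpoint name is a placeholder.
load_from = 'checkpoints/grounding_dino_swin-l_pretrain_all.pth'

model = dict(
    backbone=dict(
        type='SwinTransformer',
        # Swin-L dimensions; loading a Swin-L checkpoint into a Swin-T backbone
        # (embed_dims=96, depths=[2, 2, 6, 2]) leaves most weights unloadable
        # because the tensor shapes no longer match.
        embed_dims=192,
        depths=[2, 2, 18, 2],
        num_heads=[6, 12, 24, 48],
    ))
# Note: the neck's in_channels must also follow the backbone's output widths.
```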

> Hi, I'm training on my custom semantic segmentation dataset. I have 17 classes [background, obj1, ..., obj16], and I want the background to count toward the loss. Should args.cls_num be set to 17 or 16? In...

> Also note that since the output will go through argmax during the evaluation, i.e., `dsc_batch = dice_coeff_multi_class(pred.argmax(dim=1).cpu(), torch.squeeze(msks.long(), 1).cpu().long(), args.num_cls)`, the output will be of size [B, H, W] where 0=background, 1=obj1, ..., ...
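For readers without the repo at hand, here is an illustrative stand-in for dice_coeff_multi_class matching the call quoted above; it is not the project's actual implementation, and skipping the background class at index 0 is a choice made for this sketch only.

```python
# Hypothetical stand-in for dice_coeff_multi_class, matching the call shapes above:
# pred_labels is the [B, H, W] argmax output, target is [B, H, W] integer labels,
# and num_cls counts the background class at index 0.
import torch


def dice_coeff_multi_class(pred_labels: torch.Tensor,
                           target: torch.Tensor,
                           num_cls: int,
                           eps: float = 1e-6) -> float:
    """Mean Dice over classes 1..num_cls-1 (background at index 0 is skipped here)."""
    dices = []
    for c in range(1, num_cls):
        pred_c = (pred_labels == c).float()
        target_c = (target == c).float()
        inter = (pred_c * target_c).sum()
        denom = pred_c.sum() + target_c.sum()
        dices.append(((2.0 * inter + eps) / (denom + eps)).item())
    return sum(dices) / max(len(dices), 1)


# Shapes matching the quoted call: logits [B, C, H, W] -> argmax -> [B, H, W]
logits = torch.randn(2, 17, 64, 64)            # 17 classes incl. background
msks = torch.randint(0, 17, (2, 1, 64, 64))    # masks stored as [B, 1, H, W]
dsc_batch = dice_coeff_multi_class(
    logits.argmax(dim=1).cpu(), torch.squeeze(msks.long(), 1).cpu().long(), 17)
print(dsc_batch)
```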