LittleMeow
This seems to validate my idea. After running 10 epochs, the model can only detect 'car', which appears in the pre-training datasets; the other new categories cannot be detected...
> @taofuyu Do you know the difference between all_fine_tuning and prompt tuning? I'm not clear about the config file of all_fine_tuning. You can compare these two files, with VSCode or...
> But the parameters of the backbone, head, and neck are all frozen, and the only updated parameters, the 'embeddings', are not saved to disk (during inference, the pre-computed embedding file is...
> @taofuyu I met the same problem. But when prompt tuning on my custom dataset (10 classes), I find that if I set the number of prompt texts to fewer than 10, it...
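The class/prompt-count mismatch described above can be caught early with a quick sanity check. This is a minimal sketch, not YOLO-World API; `check_prompt_coverage` and its argument names are hypothetical:

```python
def check_prompt_coverage(prompt_texts, dataset_classes):
    """Verify every dataset class has a corresponding prompt text.

    If fewer prompts than classes are supplied, the uncovered classes
    can never be predicted, which matches the behavior reported above.
    """
    missing = [c for c in dataset_classes if c not in prompt_texts]
    if missing:
        raise ValueError(
            f"{len(missing)} classes have no prompt text: {missing}"
        )
    return True
```

Running this before training makes the silent "fewer prompts than classes" configuration fail loudly instead.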
To find a way out of this issue, I am going to learn more about OVD algorithms. In MM-Grounding-DINO, it [mentions](https://github.com/open-mmlab/mmdetection/blob/main/configs/mm_grounding_dino/dataset_prepare.md#1-coco-2017-1) that closed-set fine-tuning will lose OVD generality. Maybe this...
Furthermore, it mentions that to `mix COCO data with some of the pre-trained data` will `improve performance on the COCO dataset as much as possible without compromising generalization`. My experiments demonstrate...
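The mixing strategy quoted above can be sketched as an MMDetection-style `ConcatDataset` config fragment. This is only a sketch under assumptions: the dataset types, paths, and annotation files below are placeholders, not values taken from the YOLO-World or MM-Grounding-DINO repos:

```python
# Hypothetical config fragment: concatenate the downstream COCO data
# with a slice of the pre-training (grounding) data, so fine-tuning
# improves COCO performance without discarding OVD generality.
coco_dataset = dict(
    type='YOLOv5CocoDataset',            # placeholder dataset type
    data_root='data/coco/',
    ann_file='annotations/instances_train2017.json',
    data_prefix=dict(img='train2017/'))

pretrain_subset = dict(
    type='YOLOv5MixedGroundingDataset',  # placeholder: GoldG-style grounding data
    data_root='data/goldg/',
    ann_file='annotations/final_mixed_train.json',
    data_prefix=dict(img='images/'))

train_dataloader = dict(
    dataset=dict(
        type='ConcatDataset',
        datasets=[coco_dataset, pretrain_subset]))
```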
> Hi @taofuyu, it seems that the [configs](https://github.com/AILab-CVC/YOLO-World/blob/3264b61a03b073852b1559fa896cb12c6ff1aa41/configs/prompt_tuning_coco/yolo_world_v2_l_vlpan_bn_2e-4_80e_8gpus_mask-refine_all_fine_tuning_coco.py#L14) in `configs/prompt_tuning_coco` wrongly use `base_lr=2e-3`. It's a mistake I've made. For fine-tuning all modules, the `base_lr` should be set to `2e-4`. As...
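Per the maintainer's correction, the fix to the linked config is a one-line learning-rate change. Shown as a minimal config fragment; the `optim_wrapper` structure is an assumption based on common MMEngine-style configs, not copied from the repo:

```python
# configs/prompt_tuning_coco/yolo_world_v2_l_..._all_fine_tuning_coco.py
base_lr = 2e-4  # was mistakenly 2e-3; 2e-4 is correct when fine-tuning all modules
optim_wrapper = dict(optimizer=dict(lr=base_lr))
```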
@mio410 No.

@xiyangyang99 Same question.

@wondervictor Hello, any updates on this question?
> Can separate inference solve the problem? It occurs to me that some interference between the prompts may cause the problem.

@taofuyu Sorry, could you please explain this in detail?
I think just tuning on custom data mixed with GoldG is fine. The model can detect the custom categories and retain its OVD ability at the same time.