D-FINE
D-FINE: Redefine Regression Task of DETRs as Fine-grained Distribution Refinement [ICLR 2025 Spotlight]
Trying to run the model on an Intel iGPU, I'm trying to reduce the model to 320x320 resolution for real-time processing, but 640x640 is hardcoded in the export_onnx script and if...
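In RT-DETR-style configs like the ones this repo uses, the evaluation/export resolution is typically set by `eval_spatial_size`, with a matching `Resize` op in the dataloader transforms. The key names below are assumptions based on that config layout — verify them against your local config files, and note that any size hardcoded in the export script's dummy input would need to be changed to the same value:

```yaml
# Hypothetical override for a D-FINE YAML config (key names assume the
# RT-DETR-style layout; check against your local configs before use).
eval_spatial_size: [320, 320]   # H, W used at eval time and for ONNX export

train_dataloader:
  dataset:
    transforms:
      ops:
        - {type: Resize, size: [320, 320]}   # keep training resize in sync
```

If the export script builds its dummy input with a fixed 640, that constant (or ideally a new CLI flag) has to match the resolution above, otherwise the exported graph will bake in the wrong shape.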
**Describe the bug** mAP is almost 0 when training on a 5-class dataset, fine-tuned from a COCO or obj2coco pretrained model, e.g.: CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/dfine/custom/objects365/dfine_hgnetv2_m_obj2custom-building-defeat.yml --use-amp...
Question: Reducing False Positives with Empty Images Dear friends and code authors, Thank you for sharing this codebase and ideas! I'm seeing a high rate of false positives with my...
Suppose I have to process an image without ground-truth boxes (the annotation file is empty). How can I correctly calculate the loss and initialize the ground-truth values?
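In DETR-style losses, an image with no annotations is usually handled by passing empty targets: every query then gets assigned the "no object" class, the box-regression term contributes nothing, and only the classification term remains. A minimal pure-Python sketch of that idea (the 1:1 assignment and function names here are illustrative; the repo's actual criterion uses Hungarian matching):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classification_loss(pred_logits, gt_labels, no_object_idx):
    """Mean cross-entropy per query.

    With an empty annotation file, gt_labels is empty: every query is
    supervised toward the 'no object' class, so the loss is well-defined
    without any ground-truth boxes. (Illustrative 1:1 assignment only;
    the real code matches queries to targets with a Hungarian matcher.)
    """
    if not gt_labels:
        targets = [no_object_idx] * len(pred_logits)
    else:
        targets = gt_labels
    total = 0.0
    for logits, t in zip(pred_logits, targets):
        probs = softmax(logits)
        total += -math.log(probs[t])
    return total / len(pred_logits)

# two queries, three classes where index 2 plays the 'no object' role
preds = [[0.1, 0.2, 3.0], [0.0, 0.0, 2.5]]
loss_empty = classification_loss(preds, [], no_object_idx=2)
```

The practical takeaway: initialize the targets as empty tensors (shape `(0, 4)` for boxes, `(0,)` for labels) rather than skipping the image, so the matcher and loss see a valid, empty target set.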
Hello, Thanks for your work. I was reading the D-FINE paper, and it is mentioned that D-FINE-L was trained for 72 epochs, but in the code, I found this https://github.com/Peterande/D-FINE/blob/4a1f73a8bcfac736a88abde9596d87f116d780a7/configs/dfine/dfine_hgnetv2_l_coco.yml#L34-L44...
Hi! I was able to train and fine-tune on custom datasets, and the performance is far better than other SOTA models (lmk if anyone needs help, more than happy...
Any idea how to automatically stop training if your validation metric (AP) doesn't improve for a given number of epochs (patience), saving compute resources and preventing overfitting?
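As far as I can tell the repo has no built-in early stopping, but a small tracker wrapped around the validation loop can provide it. The class name and thresholds below are made up for illustration:

```python
class EarlyStopping:
    """Stop training when validation AP has not improved for `patience` epochs."""

    def __init__(self, patience=10, min_delta=0.0):
        self.patience = patience        # epochs to wait without improvement
        self.min_delta = min_delta      # minimum AP gain that counts as improvement
        self.best_ap = float("-inf")
        self.bad_epochs = 0

    def step(self, val_ap):
        """Record this epoch's AP; return True when training should stop."""
        if val_ap > self.best_ap + self.min_delta:
            self.best_ap = val_ap       # improvement: reset the counter
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# example: AP plateaus after epoch 1, stop fires on the 3rd flat epoch
stopper = EarlyStopping(patience=3)
history = [0.30, 0.35, 0.35, 0.35, 0.35]
stopped_at = next(i for i, ap in enumerate(history) if stopper.step(ap))
```

Calling `stopper.step(ap)` right after each epoch's evaluation pass (wherever the training script computes AP) and breaking out of the epoch loop when it returns `True` is enough; saving the checkpoint whenever `best_ap` updates gives you the best model for free.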
Hi there, The D-FINE model is officially integrated in the Hugging Face Transformers library 🤗 It enables easy inference as well as fine-tuning on custom data. ## Resources * Models...
Does this nano model work with 640x640 image size? Can I give it a 1280x736 image size? If yes, where is the parameter in the config where I can change the img...
Hi all, I couldn't convert the model from .pth -> ONNX -> TRT INT8. Has anyone run into the same situation? Converting the model to FP16 or FP8 succeeds....