D-FINE is now available in 🤗 Transformers
Hi there,
The D-FINE model is officially integrated into the Hugging Face Transformers library 🤗
It enables easy inference as well as fine-tuning on custom data.
Resources
- Models (as well as demo): https://huggingface.co/collections/ustc-community/d-fine-68109b427cbe6ee36b4e7352
- Docs: https://huggingface.co/docs/transformers/main/en/model_doc/d_fine
- Inference notebook: https://github.com/qubvel/transformers-notebooks/blob/main/notebooks/DFine_inference.ipynb
- Fine-tuning notebook: https://github.com/qubvel/transformers-notebooks/blob/main/notebooks/DFine_finetune_on_a_custom_dataset.ipynb
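Under the hood, inference follows the usual Transformers object-detection flow: the model returns per-query class logits and boxes normalized to [0, 1] in (cx, cy, w, h) form, and the image processor's `post_process_object_detection` converts those into absolute corner boxes and applies a score threshold. A rough, self-contained numpy sketch of that last step (the dummy logits and boxes below are made up, not real model output):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def post_process(logits, boxes_cxcywh, image_size, threshold=0.5):
    """Rough sketch of post_process_object_detection for a single image.

    logits:        (num_queries, num_classes) raw class logits
    boxes_cxcywh:  (num_queries, 4) boxes normalized to [0, 1]
    image_size:    (height, width) of the original image
    """
    h, w = image_size
    scores_all = sigmoid(logits)      # DETR-family detectors score with sigmoid
    labels = scores_all.argmax(axis=-1)
    scores = scores_all.max(axis=-1)

    # (cx, cy, w, h) normalized -> (x0, y0, x1, y1) in absolute pixels
    cx, cy, bw, bh = boxes_cxcywh.T
    x0 = (cx - bw / 2) * w
    y0 = (cy - bh / 2) * h
    x1 = (cx + bw / 2) * w
    y1 = (cy + bh / 2) * h
    boxes_xyxy = np.stack([x0, y0, x1, y1], axis=-1)

    keep = scores > threshold
    return scores[keep], labels[keep], boxes_xyxy[keep]

# Two dummy queries: one confident detection, one low-score query that gets filtered
logits = np.array([[4.0, -2.0], [-3.0, -3.0]])
boxes = np.array([[0.5, 0.5, 0.2, 0.4], [0.1, 0.1, 0.05, 0.05]])
scores, labels, boxes_xyxy = post_process(logits, boxes, image_size=(480, 640))
print(scores, labels, boxes_xyxy)
```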
Happy fine-tuning!
Deployment
The model can be easily exported to ONNX using 🤗 Optimum (relevant for https://github.com/Peterande/D-FINE/issues/204, https://github.com/Peterande/D-FINE/issues/258 and https://github.com/Peterande/D-FINE/issues/268). See here for a guide: https://huggingface.co/blog/convert-transformers-to-onnx.
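As a sketch, the Optimum CLI export looks like this (replace the placeholder with a checkpoint id from the collection above; the output directory name is arbitrary):

```shell
# Export a D-FINE checkpoint from the Hub to ONNX with 🤗 Optimum.
pip install "optimum[exporters]"
optimum-cli export onnx --model <checkpoint-id> dfine_onnx/
```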
Other related issues
- #8
- https://github.com/Peterande/D-FINE/issues/214
- #125
Hi, is quantization supported using Hugging Face quantization libraries like Optimum, Quanto, or others?
Hi,
Yes, quantization should work out of the box. cc @qubvel
@NielsRogge I had several issues with this model: I did 20-30 training runs but was never able to get good results. I'm not sure why, but the notebook doesn't run as-is (albumentations complains a lot; not a great library, imho).
Kind of wondering if someone else has trained it on their own dataset (maybe one less similar to COCO). Also, the notebook mentions COCO format, but there is some confusion in the comments; can we confirm 100% that it uses COCO format?
Thanks a lot folks <3
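(For reference, the standard COCO detection format is a single JSON file with `images`, `annotations`, and `categories`, where `bbox` is `[x, y, width, height]` in absolute pixels, e.g.:

```json
{
  "images": [{"id": 1, "file_name": "img_001.jpg", "width": 640, "height": 480}],
  "annotations": [
    {"id": 1, "image_id": 1, "category_id": 1,
     "bbox": [100.0, 120.0, 80.0, 60.0],
     "area": 4800.0, "iscrowd": 0}
  ],
  "categories": [{"id": 1, "name": "cat"}]
}
```

Whether the notebook expects exactly this layout is the open question above.)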
Thanks, will ping the team on this
Hey Niels, this guy made an amazing repo and it works like a charm; it also trains really fast -> https://github.com/ArgoHA/custom_d_fine
Hope it helps!
@FrancescoSaverioZuppichini looks like this might be the cause: https://github.com/huggingface/transformers/issues/40253