Add Idefics2/3 and SmolVLM Fast image processors + improvements for fast image processors
What does this PR do?
This PR adds several things:
- Idefics2/3 + SmolVLM fast image processors. Cc @andimarafioti :)
- Improvements to the base fast image processors to better handle nested images
- group_images_by_shape and reorder_images can now handle nested images, flattening them for processing and then rebuilding the original nesting (see the sketch right after this list)
- Improvements/uniformization of the fast image processor tests (use torch.testing.assert_close)
- Disable grouping by default when processing on CPU and enable it on GPU for all processors. As the benchmarks below suggest, grouping images is almost always slower when processing on CPU but almost always faster on GPU, and this seems to be the case for other image processors as well (a small sketch of such a device-dependent default appears after the acknowledgements).
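To make the nested-image handling concrete, here is a minimal, self-contained sketch of the flatten → group-by-shape → process → rebuild pattern described above. It is illustrative only: the helper names and return values are simplified stand-ins, not the actual `group_images_by_shape`/`reorder_images` signatures in `transformers`.

```python
import torch


def group_nested_images_by_shape(nested_images):
    """Flatten a list of lists of image tensors and group the flat list by
    (height, width) so that same-shape images can be processed as one batch."""
    flat, origin = [], []
    for sample_idx, sample in enumerate(nested_images):
        for image in sample:
            origin.append(sample_idx)
            flat.append(image)

    grouped, positions = {}, []
    for image in flat:
        shape = tuple(image.shape[-2:])
        grouped.setdefault(shape, []).append(image)
        positions.append((shape, len(grouped[shape]) - 1))
    grouped = {shape: torch.stack(images) for shape, images in grouped.items()}
    return grouped, positions, origin


def reorder_nested_images(processed_groups, positions, origin):
    """Undo the grouping and rebuild the original per-sample nesting."""
    flat = [processed_groups[shape][idx] for shape, idx in positions]
    nested = [[] for _ in range(max(origin) + 1)]
    for sample_idx, image in zip(origin, flat):
        nested[sample_idx].append(image)
    return nested


# Example: two samples with images of mixed sizes
batch = [
    [torch.rand(3, 224, 224), torch.rand(3, 336, 336)],
    [torch.rand(3, 224, 224)],
]
groups, positions, origin = group_nested_images_by_shape(batch)
# Process each same-shape stack at once, e.g. a batched resize
processed = {
    shape: torch.nn.functional.interpolate(stack, size=(112, 112))
    for shape, stack in groups.items()
}
rebuilt = reorder_nested_images(processed, positions, origin)
assert [len(sample) for sample in rebuilt] == [2, 1]
```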
Thanks a lot to @sushmanthreddy and @rootonchair for their PRs on idefics2/3 image processors (here and here)
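And here is a small sketch of how a device-dependent grouping default can be resolved. `resolve_disable_grouping` is a hypothetical helper rather than the PR's actual code, and it assumes the inputs are torch tensors whose device indicates whether processing runs on CPU or GPU.

```python
import torch


def resolve_disable_grouping(disable_grouping, images):
    """Hypothetical helper: if the caller did not set disable_grouping
    explicitly, derive the default from the device of the first image."""
    if disable_grouping is not None:
        return disable_grouping
    first = images[0][0] if isinstance(images[0], (list, tuple)) else images[0]
    # Grouping pays off on GPU (fewer, larger batched kernels), but the extra
    # stacking/copying usually makes it slower on CPU, so group only on GPU.
    return first.device.type == "cpu"
```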
Here are the results for idefics2 and idefics3/smolvlm:
Idefics2 time per image:
With different image sizes:
Idefics2 speedups:
With different image sizes:
Idefics3/SmolVLM time per image:
With different image sizes:
Idefics3/SmolVLM speedups:
With different image sizes: