
Vision/Multimodal

Open bhack opened this issue 2 years ago • 22 comments

With all the growing activity and focus on multimodal models, is this library restricted to tuning text-only LLMs? Are there plans to support fine-tuning of vision or, more generally, multimodal models?

bhack avatar Apr 18 '24 12:04 bhack

Hi @bhack thanks for the question! We haven't added any multimodal models yet as we are working to get good coverage of text-only methods first, but it's definitely something we are considering for the future. Out of curiosity, are there any multimodal models or techniques you'd be interested in seeing specifically?

ebsmothers avatar Apr 18 '24 13:04 ebsmothers

There was a recent and interesting survey at: https://github.com/UbiquitousLearning/Efficient_Foundation_Model_Survey

bhack avatar Apr 18 '24 14:04 bhack

As for my personal preference, I would like to be able to effectively fine-tune, with a support library like this one, models like (or similar to): https://github.com/FoundationVision/GLEE

bhack avatar Apr 18 '24 14:04 bhack

Moondream2 would be a good one to begin with! It uses Phi-2 and a forked SigLIP as the vision encoder, with a projector.

matbeedotcom avatar Apr 23 '24 06:04 matbeedotcom
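The "projector" mentioned above can be illustrated with a minimal, hypothetical PyTorch sketch: a small MLP that maps vision-encoder patch embeddings into the LLM's embedding space. The dimensions below are chosen to resemble a SigLIP-style encoder (1152-dim) feeding Phi-2 (2560-dim); this is an illustration of the general VLM-projector idea, not Moondream2's actual code.

```python
import torch
import torch.nn as nn


class VisionProjector(nn.Module):
    """Hypothetical sketch: project vision patch embeddings into an
    LLM's token-embedding space, as VLM projectors commonly do."""

    def __init__(self, vision_dim: int, llm_dim: int, hidden_dim: int = 2048):
        super().__init__()
        # A simple two-layer MLP; several open VLMs use a similar design.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, llm_dim),
        )

    def forward(self, patch_embeds: torch.Tensor) -> torch.Tensor:
        # patch_embeds: (batch, num_patches, vision_dim)
        # returns:      (batch, num_patches, llm_dim), ready to be
        # concatenated with the LLM's text token embeddings.
        return self.proj(patch_embeds)


# Example with SigLIP-like patch width projected to a Phi-2-like width.
projector = VisionProjector(vision_dim=1152, llm_dim=2560)
tokens = projector(torch.randn(2, 729, 1152))
print(tokens.shape)  # torch.Size([2, 729, 2560])
```

Fine-tuning such a model often starts by training only this projector while keeping the encoder and LLM frozen, which keeps memory costs low.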

Thanks @bhack and @matbee-eth for the suggestions! At this exact moment we do not have the bandwidth to take these on but we will keep them both in mind for the near future (re Moondream2 we are currently working on adding Phi-3). In the meantime, please let us know if either of you would be willing to contribute on this front.

ebsmothers avatar May 01 '24 03:05 ebsmothers

> Thanks @bhack and @matbee-eth for the suggestions! At this exact moment we do not have the bandwidth to take these on but we will keep them both in mind for the near future (re Moondream2 we are currently working on adding Phi-3). In the meantime, please let us know if either of you would be willing to contribute on this front.

Do you know of any PRs that cover end-to-end implementation details for doing such a thing? Just to assess whether it requires novel work or just conforming to some sort of protocol/design.

matbeedotcom avatar May 01 '24 18:05 matbeedotcom

Fine-tuning other Meta foundation models would also be nice, like the recent https://github.com/facebookresearch/segment-anything-2

bhack avatar Jul 30 '24 17:07 bhack

> Fine-tuning other Meta foundation models would also be nice, like the recent facebookresearch/segment-anything-2

Thanks for the input! We're still a small team, so we're working hard to provide great memory savings and performance for LLMs first, but this is 100% on our radar.

Just out of curiosity - what kind of finetuning would you want to do with SAM2? Do you have any hard data or HW constraints?

joecummings avatar Jul 30 '24 18:07 joecummings

Yes, generally it could be hard-example mining, high-res inputs, HW constraints, etc. So I think that in Vision we really have the same kinds of fine-tuning needs. I really hope we could share some common infra/components between LLM, Vision, and Multimodal rather than building three different frameworks, but that will depend on how well torchtune can abstract some concepts.

Also, given that you are prioritizing LLMs, I think a Multimodal/Vision model could be useful as an early canary test to lower the risk of a heavier refactoring at a later stage. You could also ask some Vision/Multimodal teams internally to collaborate, to create more critical mass around the project.

bhack avatar Jul 30 '24 18:07 bhack
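One concrete pattern behind the HW-constraint concerns discussed here is freezing a heavy vision backbone and training only a small task head, so optimizer state and gradients exist only for a tiny fraction of the parameters. A minimal PyTorch sketch with toy stand-in modules (nothing here is torchtune or SAM2 API; the encoder/head are hypothetical placeholders):

```python
import torch
import torch.nn as nn

# Toy stand-ins for a large vision backbone and a small task head;
# a real model (e.g. a SAM-style architecture) would replace these.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 64),
)
head = nn.Linear(64, 2)

# Freeze the backbone: only the head receives gradients, cutting
# gradient and optimizer-state memory on constrained hardware.
for p in encoder.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-4)

images = torch.randn(4, 3, 8, 8)
labels = torch.randint(0, 2, (4,))

with torch.no_grad():  # no autograd graph kept for the frozen backbone
    feats = encoder(images)
loss = nn.functional.cross_entropy(head(feats), labels)
loss.backward()
optimizer.step()

trainable = sum(p.numel() for p in head.parameters())
print(f"trainable params: {trainable}")  # 130
```

The same skeleton extends to high-res or hard-example-mining setups by swapping the data pipeline, which is why shared infra across LLM/Vision/Multimodal is plausible.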

E.g. see how many comments we had on the original SAM just related to fine-tuning: https://github.com/facebookresearch/segment-anything/issues/5

bhack avatar Jul 30 '24 19:07 bhack

Also, just to give another example: your WIP RLHF-with-PPO work (https://github.com/pytorch/torchtune/pull/1005), or other approaches like it, could still be useful in Vision/Multimodal: https://encord.com/blog/guide-to-rlhf/

So I think this is why it is important to have some canary tests in other domains to better validate the design.

bhack avatar Jul 30 '24 19:07 bhack