TinyLLaVA_Factory
Did you train the whole LLM in the pretraining stage of share recipe?
No, vision tower was not trained.
Oh, what I meant is whether the language model was trained during the pretraining stage of the share recipe; it's already clear that the vision encoder was trained in that stage.
Then, yes, the language model was trained in the pretraining stage of the share recipe.
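To summarize the thread, the recipes differ in which modules are unfrozen at each stage. Below is a minimal, hypothetical sketch (not TinyLLaVA_Factory's actual API; module and recipe names are illustrative, and the flags follow this thread's final reading) of how per-stage trainability might be expressed:

```python
# Hypothetical sketch of per-stage trainable modules in a two-recipe VLM setup.
# These flags are illustrative, based on this thread's discussion, not an
# authoritative description of TinyLLaVA_Factory's configuration.

RECIPES = {
    # base-style recipe: pretraining tunes only the connector
    ("base", "pretrain"): {"vision_tower": False, "connector": True, "llm": False},
    ("base", "finetune"): {"vision_tower": False, "connector": True, "llm": True},
    # share recipe: per the thread, pretraining also trains the vision
    # encoder and the language model
    ("share", "pretrain"): {"vision_tower": True, "connector": True, "llm": True},
    ("share", "finetune"): {"vision_tower": True, "connector": True, "llm": True},
}

def trainable_modules(recipe: str, stage: str) -> set:
    """Return the names of modules whose parameters would be updated."""
    flags = RECIPES[(recipe, stage)]
    return {name for name, trainable in flags.items() if trainable}
```

In a real training script, these flags would typically be applied by setting `requires_grad` on each module's parameters before building the optimizer.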