Prince Canuma
Yes @sihayas It seems to still be the case https://github.com/canopyai/Orpheus-TTS/issues/134#issuecomment-2792282824
New release coming tomorrow :) @Charmaineem please add the information in this issue to the upcoming docs you are working on 👌🏽
Hey, this is not an issue yet; HF is changing their processor setup in the upcoming v4.47. All you need to do is patch `patch_size` and `vision_feature_select_strategy` on the...
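For reference, a minimal sketch of that patch. The attribute names come from the transformers deprecation warning; the default values and the stand-in processor object are illustrative assumptions, so read the real values from your model's vision config:

```python
def patch_processor(processor, patch_size=14, strategy="default"):
    """Set the attributes newer transformers versions expect on the processor.

    The default values here are assumptions for illustration; in practice,
    take them from the model's vision config.
    """
    processor.patch_size = patch_size
    processor.vision_feature_select_strategy = strategy
    return processor


class _StubProcessor:
    """Stand-in object so the sketch runs without downloading a model."""


proc = patch_processor(_StubProcessor())
print(proc.patch_size, proc.vision_feature_select_strategy)
```

With a real `AutoProcessor.from_pretrained(...)` object, the same two assignments apply unchanged.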
Here you go: https://huggingface.co/docs/transformers/en/model_doc/llava#usage-tips

> What is the strategy for determining patch_size?

It's fixed and defined during model pretraining. If you change it, the model might perform poorly or not...
I noticed it today as well I will take a look
Got it, thanks, this is really helpful! I will fix it over the weekend.
Yes, it was introduced here: http://github.com/Blaizzy/mlx-vlm/pull/321/files#diff-83ac80a02189338eeaf681e559f8111ce51a227f730a8b2d690be5911cf1febcR1330-R1339 I'm thinking it's probably best to return a dataclass instead of a tuple, where users access the output via `.text`....
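Something along these lines, as a sketch. The class name and extra fields are hypothetical; only the `.text` attribute is from the discussion above:

```python
from dataclasses import dataclass


@dataclass
class GenerationOutput:
    """Hypothetical result type replacing the tuple return value."""

    text: str
    # Additional fields (token counts, timings, ...) could be added here
    # without breaking callers, which is the main advantage over a tuple.


out = GenerationOutput(text="hello")
print(out.text)
```

Unlike a tuple, adding a new field later doesn't break existing `out.text` accesses.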
Fixed it to return an object.
Thanks! I have fixed the model card in the utils and will manually update all model cards.
I can't replicate this. Could you provide the full traceback?