Fares Abawi

Results 26 comments of Fares Abawi

I was able to run 7B on two 1080 Ti (inference only). Next, I'll try 13B and 33B. It still needs refining, but it works! I forked LLaMA here: https://github.com/modular-ml/wrapyfi-examples_llama...

@KUANWB One workaround is to run two separate instances and let them share their weights over middleware. It seems like overkill for single-machine distribution, but you could also distribute your...
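The two-instance idea can be sketched with plain Python queues standing in for the middleware transport. This is a toy illustration, not Wrapyfi's actual API: the worker functions and the arithmetic inside them are made-up stand-ins for the two halves of a model.

```python
# Toy sketch (not Wrapyfi's API): two worker processes each "own" half
# of a model; the first sends its intermediate results to the second
# over a queue, standing in for the middleware transport.
from multiprocessing import Process, Queue

def first_half(q_out, tokens):
    # stand-in for the first stack of layers
    hidden = [t * 2 for t in tokens]
    q_out.put(hidden)  # "publish" activations to the other instance

def second_half(q_in, q_result):
    hidden = q_in.get()  # "subscribe" to the first worker's output
    # stand-in for the remaining layers plus an output head
    q_result.put(sum(hidden))

if __name__ == "__main__":
    q_mid, q_res = Queue(), Queue()
    p1 = Process(target=first_half, args=(q_mid, [1, 2, 3]))
    p2 = Process(target=second_half, args=(q_mid, q_res))
    p1.start(); p2.start()
    print(q_res.get())  # 12
    p1.join(); p2.join()
```

In the real setup each instance is a full process holding part of the model on its own GPU, and the middleware (e.g. ZeroMQ) carries the tensors between them.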

> it's great!!
>
> we want to know how to work on the larger models 13B or 65B? thanks!

Thanks @yokie121! Check out the example in...

Had the same problem, but the author solved it shortly after (just update index.js).

You can train the model without LoRA using [ImageBind-LoRA](https://github.com/fabawi/ImageBind-LoRA). Simply remove the `--lora` argument when calling `train.py` and set `--full_model_checkpointing`. I don't have the resources to fine-tune it but it...
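For example, an invocation might look like the following. This is a sketch based only on the two flags mentioned above; any remaining arguments (data paths, batch size, etc.) are repository-specific, so check the ImageBind-LoRA README.

```shell
# No --lora flag; enable full-model checkpointing instead
python train.py --full_model_checkpointing
```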

> ^^^ upvoting WilTay1's question. also wondering if you know how to train ImageBind without using LoRA?

@ChloeL19 You can train the model without LoRA using [ImageBind-LoRA](https://github.com/fabawi/ImageBind-LoRA). Simply remove the...

I created a simple ImageBind fine-tuning example using LoRA: https://github.com/fabawi/ImageBind-LoRA. Make sure you clone it recursively to include the example dataset: `git clone --recurse-submodules -j8 git@github.com:fabawi/ImageBind-LoRA.git`. Install the requirements following...
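The clone step as a copy-pasteable command (SSH form as given above; an HTTPS URL would work too):

```shell
# --recurse-submodules pulls in the example-dataset submodule;
# -j8 fetches up to 8 submodules in parallel
git clone --recurse-submodules -j8 git@github.com:fabawi/ImageBind-LoRA.git
```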

Thanks for discovering this issue. Could you provide a minimal example for me to reproduce the problem?