Lara Wehbe
@stweil Thank you for the quick response! I noticed a slight difference that might also be causing the problem; I tried to run one of them from cmd and the...
> @yangjianxin1, based on the code snippet you provided, it seems that you are loading the model using `AutoModelForCausalLM` with 4-bit quantization enabled. However, when attempting to merge the LoRA...
> You can see a workaround here: https://github.com/substratusai/model-falcon-7b-instruct/blob/430cf5dfda02c0359122d4ef7f9b6d0c01bb3b39/src/train.ipynb
>
> Effectively, I reload the base model in 16-bit to work around the issue. It works fine for my use...
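A minimal sketch of that workaround, assuming the adapter was saved to a hypothetical `./lora-checkpoint` directory (the model id and paths below are illustrative, not taken from the thread): reload the base model in 16-bit, attach the LoRA adapter via PEFT, and merge it into the base weights.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Reload the base model in 16-bit (not 4-bit) so the LoRA weights
# can actually be merged into it.
base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b-instruct",  # illustrative base model id
    torch_dtype=torch.float16,
)

# Attach the trained adapter, then fold it into the base weights.
model = PeftModel.from_pretrained(base_model, "./lora-checkpoint")
merged = model.merge_and_unload()

merged.save_pretrained("./merged-model")
```

The result in `./merged-model` can then be loaded like any ordinary Transformers checkpoint, with no PEFT dependency at inference time.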
How can I reduce the spatial resolution of an entire video? I am using FFmpeg on Ubuntu.
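One standard way is FFmpeg's `scale` filter, which downsamples every frame; the filenames and target width below are placeholders. Using `-2` for the height preserves the aspect ratio while keeping the height divisible by two, which most encoders require.

```sh
# Downscale the whole video to 640 px wide, keeping the aspect ratio
# and copying the audio stream unchanged.
ffmpeg -i input.mp4 -vf scale=640:-2 -c:a copy output.mp4
```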
> Is there a way to run this model with LangChain?

Did you find a solution?
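One approach I'm aware of (not confirmed anywhere in this thread) is to wrap a local Transformers `pipeline` in LangChain's `HuggingFacePipeline`; the model id below is a placeholder.

```python
from transformers import pipeline
from langchain.llms import HuggingFacePipeline

# Build a local text-generation pipeline and hand it to LangChain.
pipe = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b-instruct",  # placeholder model id
    max_new_tokens=128,
)
llm = HuggingFacePipeline(pipeline=pipe)

print(llm.invoke("What does LoRA fine-tuning do?"))
```

Note that in newer LangChain releases this class has moved to `langchain_community.llms`.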
> Heya, I've figured it out! I took [Alpaca-LoRA's export_state_dict_checkpoint.py](https://github.com/tloen/alpaca-lora/blob/5f6614e6fc8f46933b098bc47c2c18a23047f616/export_state_dict_checkpoint.py) and adapted it a bit to fit our use case! Here's a link to my tweaked version: https://gist.github.com/botatooo/7ab9aa95eab61d1b64edc0263453230a
>
> ...
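The gist is the authoritative version; as a rough sketch of the core idea only (the full script also remaps parameter names to the original LLaMA checkpoint layout, which is omitted here, and the model id and paths are illustrative), it merges the LoRA weights into the base model and saves a plain PyTorch state dict:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Merge the LoRA adapter into the base model, then dump a raw
# state dict in the original single-file checkpoint style.
base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",  # illustrative base model id
    torch_dtype=torch.float16,
)
merged = PeftModel.from_pretrained(base, "./lora-adapter").merge_and_unload()

torch.save(merged.state_dict(), "consolidated.00.pth")
```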
> Load the tokenizer from the base model (llama), not from your checkpoint.

So I load the tokenizer from the base model and the model weights from my checkpoint?
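In code, that advice looks roughly like this (the base model id and checkpoint path are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The tokenizer comes from the original base model...
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

# ...while the weights come from your fine-tuned checkpoint.
model = AutoModelForCausalLM.from_pretrained("./my-checkpoint")
```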