Carlos A. Wong
* [vicuna-13b-GPTQ-4bit-128g](https://huggingface.co/anon8231489123/vicuna-13b-GPTQ-4bit-128g) will only work with https://github.com/qwopqwop200/GPTQ-for-LLaMa
* [Alpaca Native 4bit](https://huggingface.co/Sosaka/Alpaca-native-4bit-ggml/tree/main) needs either https://github.com/antimatter15/alpaca.cpp or https://github.com/ggerganov/llama.cpp
* This repo is mainly for running the gpt4all model.
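For the ggml-format checkpoints, a minimal sketch of loading one via llama.cpp's Python bindings (`llama-cpp-python`) is shown below; the bindings and the model filename are assumptions for illustration, not something the comment above prescribes:

```python
# Sketch: load a ggml-format model (e.g. the Alpaca Native 4bit checkpoint)
# through llama.cpp's Python bindings. The model path is a placeholder --
# point it at whichever .bin file you downloaded from the HF repo.
from llama_cpp import Llama

llm = Llama(model_path="./ggml-alpaca-7b-q4.bin")  # hypothetical local path
out = llm("### Instruction:\nList three fruits.\n\n### Response:\n", max_tokens=64)
print(out["choices"][0]["text"])
```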
Is there a way to save the base model and the adapter merged into a single checkpoint? I think I found my answer: https://github.com/tloen/alpaca-lora/blob/main/export_state_dict_checkpoint.py
> @clxyder, with the latest main branch, you can simply do `model = model.merge_and_unload()` to get the base model with lora weights merged into it. Thank you for the answer!...
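For reference, a minimal sketch of that merge-and-save flow with `peft`; the base model name, adapter repo, and output directory below are illustrative placeholders, not values from the thread:

```python
# Sketch: fold LoRA adapter weights into the base model and save one
# standalone checkpoint. Model names and paths are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
model = PeftModel.from_pretrained(base, "tloen/alpaca-lora-7b")

model = model.merge_and_unload()          # merge LoRA weights into the base layers
model.save_pretrained("./alpaca-merged")  # single self-contained checkpoint

tok = AutoTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
tok.save_pretrained("./alpaca-merged")
```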
Hey @Dj1312, were you able to find a fix for this issue?
Hey @ptillet, I'm trying to debug this issue on my Pascal card. I've outlined my particular case in this issue: https://github.com/qwopqwop200/GPTQ-for-LLaMa/issues/142. I've swapped the following lines, note this is...
I can confirm this works as expected. Are there any plans to merge this? @refarer
Can you provide steps to reproduce the issue?
What error did you see?
Hey @PigCharid, this repo is quite outdated, so there may be some discrepancies with newer libraries.
Hey @shawnmitchell, hmm, great question. I just sort of hacked something up 😄 so I'm not really sure. (Please excuse my delay.)