alpaca-lora
Trained a LoRA on the WizardLM dataset
I trained a LoRA on the WizardLM (https://github.com/nlpxucan/WizardLM) dataset.
You can check it out here: https://huggingface.co/winddude/wizardLM-LlaMA-LoRA-7B
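For anyone who wants to try it, loading should look roughly like this with `peft`. A minimal sketch, assuming the `decapoda-research/llama-7b-hf` base weights (my guess at the base model, it's not stated here):

```python
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base = "decapoda-research/llama-7b-hf"  # assumed base model, adjust if needed

tokenizer = LlamaTokenizer.from_pretrained(base)
model = LlamaForCausalLM.from_pretrained(
    base,
    load_in_8bit=True,   # requires bitsandbytes
    device_map="auto",
)

# Apply the WizardLM LoRA weights on top of the base model
model = PeftModel.from_pretrained(model, "winddude/wizardLM-LlaMA-LoRA-7B")
model.eval()
```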
Wasn't sure if I should do a pull request and add it to the README, since it's not Alpaca. Let me know...
Can you do a 13B? The full 7B is already released ;o
I'll take a look, but I need to get a better handle on the hyperparams first. Plus I'm also looking at merging and cleaning a number of the instruct datasets before another run.
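For the merge, a rough sketch of the kind of thing I mean with HF `datasets`. It assumes each set has (or can be padded to) alpaca-style `instruction`/`input`/`output` columns; the dataset names here are hypothetical placeholders, not a final list:

```python
from datasets import load_dataset, concatenate_datasets

names = ["some/instruct-dataset-a", "some/instruct-dataset-b"]  # placeholders
parts = [load_dataset(n, split="train") for n in names]

# Pad a missing "input" column so the schemas line up
parts = [
    p if "input" in p.column_names else p.add_column("input", [""] * len(p))
    for p in parts
]

# Drop anything outside the shared alpaca-style columns, then merge
keep = {"instruction", "input", "output"}
parts = [p.remove_columns([c for c in p.column_names if c not in keep]) for p in parts]

merged = concatenate_datasets(parts).shuffle(seed=42)
merged.to_json("merged_instruct.json")
```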
Right now I'm working on using gpt-3.5-turbo to clean the "As a large language model" stuff out of this dataset. I'll let you know and share it if I can get it done.
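Roughly, the idea is to flag records containing refusal boilerplate and have gpt-3.5-turbo rewrite just those. A sketch using the old-style `openai` client (pre-1.0); the marker strings and prompt are illustrative, not my final ones:

```python
import openai  # openai<1.0 client; needs openai.api_key set

REFUSAL_MARKERS = ("As a large language model", "As an AI language model")

dataset = [  # stand-in for the real WizardLM records
    {"instruction": "...", "input": "", "output": "As a large language model, I cannot..."},
]

def rewrite(output: str) -> str:
    """Ask gpt-3.5-turbo to restate an answer without the disclaimer boilerplate."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system", "content": (
                "Rewrite the answer, removing any 'as an AI language model' "
                "style disclaimers. Keep all of the substance."
            )},
            {"role": "user", "content": output},
        ],
    )
    return resp["choices"][0]["message"]["content"]

# Only rewrite entries that actually contain the boilerplate
cleaned = []
for entry in dataset:
    if any(m in entry["output"] for m in REFUSAL_MARKERS):
        entry = {**entry, "output": rewrite(entry["output"])}
    cleaned.append(entry)
```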