Jon Cortelyou

5 comments by Jon Cortelyou

> Y'all: This shouldn't be difficult. I finetuned the 30B 8-bit Llama with Alpaca Lora in about 26 hours on a couple of 3090's with good results. The 65B model...

`magnet:?xt=urn:btih:F2BAEE27E31280C630093DAF0A7F4EC16EFCC126` for 30B. I don't know of a 65B weights file existing for Alpaca, but I could be wrong.

Yeah, I'm seeing a repeatable crash similar to this after about 10 prompts complete. I'm using the 30B model with default parameters. This is on a PC with 128GB of...

> This should be resolved by [ggerganov#626](https://github.com/ggerganov/llama.cpp/pull/626). https://github.com/ggerganov/llama.cpp/commit/c0bb1d3ce21005ab21d686626ba87261a6e3a660 Here is his fix in the llama.cpp code. Looks like an easy enough fix for alpaca.cpp. I believe the code bases...

I just tried merging these specific changes, and it triggered an assert in another part of the code when the model loaded.
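For reference, the usual way to pull a single upstream commit like ggerganov's c0bb1d3 from llama.cpp into a fork such as alpaca.cpp is `git fetch` plus `git cherry-pick`. The sketch below demonstrates that workflow on throwaway local repos (the repo names, file name, and commit message here are illustrative, not the actual fix); against the real repos you would add llama.cpp as a remote and cherry-pick the commit hash above instead.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# "Upstream" repo containing the fix commit (stands in for llama.cpp).
git init -q upstream
cd upstream
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "base"
echo "fixed" > util.c
git add util.c
git -c user.email=a@b -c user.name=a commit -q -m "fix: handle longer prompts"
fix_commit=$(git rev-parse HEAD)
cd ..

# "Fork" repo that shares the base but lacks the fix (stands in for alpaca.cpp).
git clone -q upstream fork
cd fork
git reset -q --hard HEAD~1   # roll back so the fork is missing the fix

# Port just the one commit across. Against the real repos this would be:
#   git remote add llama https://github.com/ggerganov/llama.cpp
#   git fetch llama && git cherry-pick c0bb1d3ce21005ab21d686626ba87261a6e3a660
git remote add up ../upstream
git fetch -q up
git -c user.email=a@b -c user.name=a cherry-pick "$fix_commit"
cat util.c   # → fixed
```

If the code bases have diverged (as the assert failure above suggests), the cherry-pick may apply cleanly yet still break neighboring code, since it only transplants the textual diff, not any of the commit's surrounding assumptions.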