lukemarsden

Results 157 comments of lukemarsden

Technically this should be `mistral:7b-instruct-v0.1-fp16` I think, because in Helix (with axolotl) the LoRA is finetuned on Mistral-7B v0.1, not v0.2.

If we could just proxy straight through to the Ollama API, we'd automagically gain support for such goodies
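A minimal sketch of what "proxy straight through" could look like: requests arriving under some local prefix are forwarded untouched to the Ollama HTTP API, so any new Ollama endpoint works without further code changes. The `/v1/ollama` prefix here is a hypothetical route name, not anything Helix actually uses; `/api/generate` and `/api/tags` are real Ollama endpoints, and port 11434 is Ollama's default.

```python
# Sketch of a transparent pass-through to the Ollama API.
# Assumptions: Ollama on localhost:11434 (its default), and a made-up
# local prefix /v1/ollama under which we expose it.
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434"
PROXY_PREFIX = "/v1/ollama"  # hypothetical route, for illustration only

def upstream_url(incoming_path: str, base: str = OLLAMA_BASE) -> str:
    """Map a proxied path like /v1/ollama/api/generate to the Ollama URL."""
    if not incoming_path.startswith(PROXY_PREFIX):
        raise ValueError(f"not a proxied path: {incoming_path}")
    return base + incoming_path[len(PROXY_PREFIX):]

def forward(path: str, payload: dict) -> bytes:
    """POST the untouched JSON body straight through to Ollama."""
    req = urllib.request.Request(
        upstream_url(path),
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Because nothing in the body is inspected or rewritten, new Ollama features (new endpoints, new request fields) come for free.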

In particular, two-column layouts, which are common in academic papers, cause absolute mayhem, and I'm surprised the model can make sense of them at all

Yeah, also if we don't get any text, then don't start the finetuning process; just ask the user to paste the text in instead: "sorry we couldn't extract any...

I tried to train on a published Notion page and it extracted no data at all; Mixtral then hallucinated loads of QA pairs about photosynthesis and deep learning and other random...

As soon as it's on ollama, we will 😄

this is a tremendous idea and we should do it!

Would NATS streaming solve this with stream-level persistence? I like that more, because with the 3-5 second option, matching up the end of the database text with the start...
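The "matching up" problem alluded to here can be made concrete: without stream persistence, the reconnecting client has to stitch the tail of the text already saved in the database onto the live stream, which means finding (and dropping) any overlapping run of characters. A sketch of that stitching logic, under the assumption that the stream tail may repeat some suffix of the persisted text:

```python
def merge_overlap(db_text: str, stream_tail: str) -> str:
    """Append stream_tail to db_text, dropping the longest prefix of
    stream_tail that duplicates a suffix of db_text (the overlap a
    reconnecting client would otherwise show twice)."""
    max_k = min(len(db_text), len(stream_tail))
    # Try the longest possible overlap first, shrinking until a match.
    for k in range(max_k, 0, -1):
        if db_text.endswith(stream_tail[:k]):
            return db_text + stream_tail[k:]
    # No overlap at all: plain concatenation.
    return db_text + stream_tail
```

Stream-level persistence (as in NATS JetStream) sidesteps this entirely: the consumer just resumes from its last acknowledged sequence number, so there is no fuzzy text matching to get wrong.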

Cool, let's plug it in. On Sun, 4 Feb 2024, Karolis Rusenas wrote: "yeah I think it would"

Maybe there's also a quicker workaround: get the frontend to subscribe to the websocket sooner when a new session is created?
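The race being worked around is that the backend can start publishing session events before the frontend's websocket subscription is in place, so the first messages are lost. One illustrative fix (a sketch, not Helix's actual implementation) is to buffer per-session messages server-side until the first subscriber attaches, then replay them in order:

```python
from collections import defaultdict

class BufferedHub:
    """Toy pub/sub hub: messages published to a session before any
    subscriber attaches are held in a backlog and replayed on subscribe,
    so a slow frontend never misses the start of a new session."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # session_id -> callbacks
        self.backlog = defaultdict(list)      # session_id -> queued msgs

    def publish(self, session_id: str, msg: str) -> None:
        subs = self.subscribers[session_id]
        if subs:
            for cb in subs:
                cb(msg)
        else:
            self.backlog[session_id].append(msg)

    def subscribe(self, session_id: str, cb) -> None:
        # Replay anything published before this subscriber arrived.
        for msg in self.backlog.pop(session_id, []):
            cb(msg)
        self.subscribers[session_id].append(cb)
```

This trades a little server memory for correctness; the alternative in the comment (subscribing earlier on the frontend) shrinks the race window but doesn't eliminate it.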