ʙᴀʙʏᴄᴏᴍᴍᴀɴᴅᴏ (JP)
oh no not again
@vicgalle
I miss the old UI :/
> How did you convert the model? All I get is "llama_model_load: invalid model file './gpt4all-lora-unfiltered-quantized.bin' (too old, regenerate your model files or convert them with convert-unversioned-ggml-to-ggml.py!)", but trying to...
try running the model on CPU, it will work really fast. Also try changing the thread count from 8 to 4.
you mean this? https://github.com/nomic-ai/gpt4all-ts Then you can integrate LangChain however you wish
there are a thousand ways of doing this, but you should be using [pyllamacpp](https://github.com/nomic-ai/pyllamacpp). You can pass the parameter interactive=True, but there are better ways to do it, like...
Hey Daniel, sorry for the delay! I did a deep dive into finetuning multimodal models, and it turns out the LLaVA repo already provides most of the things we...
can't wait for multimodal support!
this is very important.