Progress, but... :)
Hiya, so Big Dot is working, but Dot doesn't seem to connect to Phi 3 properly. Where is it downloaded to, so I can connect it? At the moment I get an 'error processing your...' when I try to chat with the small Dot.
That is very interesting. I have noticed that the model installation can sometimes be corrupted if there is any sort of network content filter, but I will investigate further to find the actual source of the problem!
The default model itself is downloaded from here: https://huggingface.co/bartowski/Phi-3.5-mini-instruct-GGUF and, on both Mac and Windows, it is saved inside a folder called Dot-Data in the Documents directory.
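If you want to check whether the download itself is the problem, a quick sanity check is to look inside Dot-Data and verify that the model file is a valid GGUF file (every GGUF file starts with the 4-byte magic `GGUF`). This is just a sketch assuming the default Dot-Data location described above; the exact `.gguf` filename Dot downloads may differ, so it scans for any `.gguf` file:

```python
# Hypothetical integrity check for models in the Dot-Data folder.
# A truncated or filtered download will usually fail the magic-byte test.
from pathlib import Path

GGUF_MAGIC = b"GGUF"  # every valid GGUF file starts with these 4 bytes

def is_valid_gguf(path: Path) -> bool:
    """Return True if the file starts with the GGUF magic bytes."""
    try:
        with open(path, "rb") as f:
            return f.read(4) == GGUF_MAGIC
    except OSError:
        return False

def check_dot_data(documents=None) -> None:
    """Scan Documents/Dot-Data for .gguf files and report their status."""
    base = (documents or Path.home() / "Documents") / "Dot-Data"
    if not base.is_dir():
        print(f"Dot-Data folder not found at {base}")
        return
    models = sorted(base.rglob("*.gguf"))
    if not models:
        print("No .gguf model files found - the download may have failed.")
    for m in models:
        status = "OK" if is_valid_gguf(m) else "CORRUPTED (bad header)"
        size_mb = m.stat().st_size / 1e6
        print(f"{m.name}: {size_mb:.0f} MB - {status}")

if __name__ == "__main__":
    check_dot_data()
```

If a file shows up as corrupted (or is only a few MB when the Phi 3.5 mini quants are several GB), deleting it and letting Dot re-download, ideally on a network without a content filter, is worth a try.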
If Big Dot works and Doc Dot does not, it might be because no files have been loaded for Doc Dot to interact with. Otherwise, some potential fixes are resetting to default settings in the settings panel or trying a different LLM (I have found Llama 3 8B to work quite nicely for RAG).
Please let me know if this helped or if you are still facing the same issue!
Hi Alexpinel/Dot,
Thanks for your response. I tried pointing at the Phi 3 model, but I still get the same error. Everything seems to work fine (i.e. it saves etc.), but even after a restart the error happens immediately. It's like it's not seeing the model at all. I've also tried other Llama models with no luck. V frustrating. :)