Hoang Ha
You can retry with another model. However, in my test case, I used OpenChat with a larger file (2.43 MB) and it still worked. OpenChat on the TinyLlama paper: OpenHermes on...
The team is working on refactoring RAG to improve the experience. Please stay tuned.
All bugs fixed. Ready to go
Currently, Phi-3 is not well supported by llama.cpp. Please stay tuned: https://github.com/ggerganov/llama.cpp/pull/7225
Thanks @Propheticus for the answer. This is related to the maximum amount of text the AI can handle. We will add a clearer notification that points out the maximum cap. cc @SmokeShine...
Hi @Janaifan, thank you for using Jan. Can I ask for the full context of the problem?
- What is the t/s?
- Are you using the same thread...
Hi @Janaifan, with the latest version of Jan, do you still have this problem?
Follow this PR: https://github.com/janhq/jan/pull/2670
Got the point. I think we should remove the "recommended" tag for models larger than 33B.
@imtuyethan This issue also addresses the category on the Hub. Related task: https://github.com/janhq/jan/issues/839