
Custom LLM

Open carlcc opened this issue 1 year ago • 4 comments

I have already experienced Jarvis and found it very interesting.

It looks like you are researching a new self-developed LLM engine. When can we experience it? Will it be open source?

carlcc avatar Jun 06 '23 07:06 carlcc

We tested a set of open-source LLMs but found that their intelligence level is currently insufficient to support stable task execution. We will continue to monitor and test emerging LLMs. Given the pace of development in the open-source model community, we believe that intelligent models with sufficient capabilities will soon become available.

fiatrete avatar Jun 06 '23 08:06 fiatrete

Hi. When I play around with Wizard Vicuna and similar models, they seem to produce pretty good output, so those might be worth looking into. Also, llama.cpp has integrated a new ability to load part of a model onto the GPU and part onto the CPU, so I've been able to run 30B+ models fairly fast. Hopefully these could run the tasks efficiently locally. I'm sure you've seen this guy, but he has lots of tricks out there. Good luck! https://youtube.com/@Aitrepreneur Also, have you integrated LangChain? I feel like that could help its long-term efficiency. Thanks
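For reference, the partial GPU/CPU split mentioned above is llama.cpp's layer-offload feature, controlled by the `--n-gpu-layers` (`-ngl`) flag. A minimal sketch follows; the model filename and the layer count of 35 are assumptions for illustration, and you would tune `-ngl` to whatever fits your VRAM:

```shell
# Build llama.cpp with the cuBLAS (CUDA) backend so GPU offload is available
make clean && make LLAMA_CUBLAS=1

# Run a 30B model with the first 35 transformer layers offloaded to the GPU;
# the remaining layers run on the CPU. Raise or lower -ngl to fit your VRAM.
./main -m ./models/wizard-vicuna-30b.ggmlv3.q4_0.bin \
  -ngl 35 \
  -p "Summarize the benefits of partial GPU offloading." \
  -n 128
```

Even offloading only part of a large model this way can substantially speed up generation compared to running entirely on the CPU, since the most expensive layers execute on the GPU.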

Renegadesoffun avatar Jun 06 '23 22:06 Renegadesoffun

> We tested a set of open-source LLMs but found that their intelligence level is currently insufficient to support stable task execution. We will continue to monitor and test emerging LLMs. Given the pace of development in the open-source model community, we believe that intelligent models with sufficient capabilities will soon become available.

Really looking forward to the open-source model.

carlcc avatar Jun 07 '23 01:06 carlcc

> Hi. When I play around with Wizard Vicuna and similar models, they seem to produce pretty good output, so those might be worth looking into. Also, llama.cpp has integrated a new ability to load part of a model onto the GPU and part onto the CPU, so I've been able to run 30B+ models fairly fast. Hopefully these could run the tasks efficiently locally. I'm sure you've seen this guy, but he has lots of tricks out there. Good luck! https://youtube.com/@Aitrepreneur Also, have you integrated LangChain? I feel like that could help its long-term efficiency. Thanks

Good news, I'm glad to hear that. A powerful open-source AI agent is likely just around the corner.

carlcc avatar Jun 07 '23 01:06 carlcc