
Use the LLaMA model to bypass API restriction issues

Open Moth-6 opened this issue 2 years ago • 5 comments

The LLaMA model has been leaked, and many people are running it locally on their machines. Maybe we can collectively host it and create an endpoint to use when the ChatGPT endpoint gets restricted or saturated. Here are two projects to look at, from @cocktailpeanut and @ggerganov:

- https://github.com/cocktailpeanut/dalai
- https://github.com/ggerganov/llama.cpp

Moth-6 commented on Mar 19, 2023

https://github.com/antimatter15/alpaca.cpp for chat

acheong08 commented on Mar 20, 2023

But it's very computationally expensive. Even more so than gpt-3.5-turbo.

acheong08 commented on Mar 20, 2023

> But it's very computationally expensive. Even more so than gpt-3.5-turbo.

I've seen some people running it on their Raspberry Pi 🤔 I haven't looked much further into it.

Moth-6 commented on Mar 20, 2023

> But it's very computationally expensive. Even more so than gpt-3.5-turbo.

True. I tried running it, then got a BSoD because of a lack of resources.

> I've seen some people running it on their Raspberry Pi 🤔 I haven't looked much further into it.

Same thing.

LyubomirT commented on Mar 20, 2023

How good is LLaMA compared with ChatGPT?

Besides, one consideration is that if more people share a centralized LLM API service, the per-user cost will be lower than if each individual hosts their own LLM on their own server.
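As a rough illustration of the amortization argument (all figures below are made-up assumptions, not measurements of any real service), a shared server spreads its fixed cost across its users, while self-hosting leaves each user paying the full cost alone:

```python
# Toy cost comparison (hypothetical numbers, for illustration only):
# a shared server amortizes its fixed monthly cost over many users,
# whereas self-hosting means each user bears the full cost themselves.
server_cost_per_month = 100.0   # assumed monthly cost of one GPU server
users_sharing = 50              # assumed number of users on a shared endpoint

shared_cost_per_user = server_cost_per_month / users_sharing
self_hosted_cost_per_user = server_cost_per_month  # each user runs their own server

print(f"shared: ${shared_cost_per_user:.2f}/user, "
      f"self-hosted: ${self_hosted_cost_per_user:.2f}/user")
# → shared: $2.00/user, self-hosted: $100.00/user
```

The gap only widens as more users share the endpoint, which is the economy of scale behind the concern above.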

ayaka14732 commented on Mar 21, 2023