chat-ui
Public repository todo
Prepare the repository to be made public:
- [x] scan the repo for secrets? (I used gitleaks and it seems ok)
- [ ] add "contribute" section to README
- [x] add banner/logo to README
- [x] add a license
- [x] add a description in the Github "About" section
Although it's already explained, having a dedicated "Run with your model" section would be very helpful.
Do we need a CONTRIBUTING file and maybe a CODE_OF_CONDUCT.md?
Also, it seems like the `.env` file currently provided might not be up to date enough to run the chat locally as-is?
What's the problem? Is following the instructions from the README - adding `MONGODB_URL` and `HF_ACCESS_TOKEN` - not enough?
First I set up my `.env.local` file with `MONGODB_URL` and `HF_ACCESS_TOKEN` (taken from my account settings page).
But then with this I get a `{"error":"Model OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5 is currently loading","estimated_time":20.0}` error when trying to run the model.
- @gary149 gave me a `.env.local` file that seems to work and that's what I've been using, but it overrides the `MODELS` var in the `.env.local` too, and it adds an `endpoint` key to the model
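For reference, a hypothetical sketch of what such a `.env.local` might look like. Everything beyond `MONGODB_URL` and `HF_ACCESS_TOKEN` (which the README mentions) is guessed from this thread, and the `MODELS` JSON shape is illustrative only; check the repo's `.env` for the real schema:

```env
# Hypothetical sketch — check the repo's .env for the actual key names/format.
MONGODB_URL=mongodb://localhost:27017/
HF_ACCESS_TOKEN=<token from https://huggingface.co/settings/tokens>

# The working file reportedly also overrides MODELS and adds an "endpoint"
# key per model; the exact JSON shape here is not verified.
MODELS=`[{"name": "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5", "endpoint": {"url": "<inference endpoint url>"}}]`
```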
> But then with this I get a `{"error":"Model OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5 is currently loading","estimated_time":20.0}` error when trying to run the model.
It's a problem with the Inference API: it needs to warm up the model, which can take a few minutes. But it should work otherwise.
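Since the "currently loading" response includes an `estimated_time` field, a client can wait that long and retry instead of failing. A minimal sketch — the response shape is copied from the error in this thread, and the helper name is made up:

```python
import json


def parse_retry_delay(body: str, default: float = 20.0) -> float:
    """Return how many seconds to wait before retrying an Inference API call.

    Returns 0.0 when the response is not a "model is loading" error,
    and `default` when the body isn't valid JSON.
    """
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return default
    # Warm-up responses look like:
    # {"error": "Model ... is currently loading", "estimated_time": 20.0}
    if isinstance(payload, dict) and "loading" in payload.get("error", ""):
        return float(payload.get("estimated_time", default))
    return 0.0
```

A caller could loop on the request, sleeping for the returned delay until it reaches 0.0 (the warm-up described above reportedly takes a few minutes).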
Mmh, maybe I'm doing something wrong because it doesn't seem to warm up even after 15 minutes... Do I need a specific setup on my HF account to make it work?
You can ask/report the issue in https://huggingface.slack.com/archives/C016D661PAN
~~Seems like it's all working now by following the readme instructions... Some of my issues from yesterday were probably due to things being down during the day :sweat_smile: Thanks for the help!~~
Actually it still doesn't work, I'm gonna investigate and ask on Slack :/
Yes, @OlivierDehaene has pinned that model again on the Inference API, so I think it should work.
Closing this, since the repository has been public for a while :smile: