Error with Flowise and LocalAI when using Local QnA with embeddings
LocalAI version:
localai-api-1 [quay.io/go-skynet/local-ai:latest] c171b1419d1b
Environment, CPU architecture, OS, and Version:
Darwin MBP-de-Me 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:52:43 PDT 2023; root:xnu-8796.121.2~5/RELEASE_ARM64_T8112 arm64
A MacBook Pro M2 with 8 GB RAM.
Describe the bug
When I use a simple LLM Chain everything works, but when I use embeddings the container crashes. I am using the provided LocalAI Local QnA example in Flowise with only a simple txt file.
To Reproduce
Send a simple prompt through the example flow with my simple txt file loaded; a minimal sketch of the underlying API calls follows.
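For anyone who wants to reproduce this without Flowise, here is a minimal Python sketch of the two API calls visible in the Logs section below. The port (localhost:8080, the docker-compose default) and the exact payloads are my assumptions for illustration, not copied from the Flowise example:

```python
# Rough repro sketch outside Flowise.
# Assumptions (not from the original report): LocalAI listens on
# http://localhost:8080 (the docker-compose default) and ggml-gpt4all-j
# serves both embeddings and completions, as in the logs.
import requests

BASE = "http://localhost:8080"

# 1. The embeddings call Flowise makes while indexing the txt file;
#    in my logs this returns 200.
emb = requests.post(
    f"{BASE}/v1/embeddings",
    json={"model": "ggml-gpt4all-j", "input": "a chunk of my txt file"},
)
print("embeddings:", emb.status_code)

# 2. The QnA chat completion built from the retrieved context;
#    the container crashes around this call in my setup.
chat = requests.post(
    f"{BASE}/v1/chat/completions",
    json={
        "model": "ggml-gpt4all-j",
        "messages": [{
            "role": "user",
            "content": "Use the following pieces of context to answer "
                       "the question at the end. ...\n\nQuestion: ...",
        }],
    },
)
print("chat:", chat.status_code, chat.text[:200])
```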
Expected behavior
I expect the chatbot to return an answer.
Logs
2023-06-15 18:47:11 [172.18.0.1]:47224 200 - POST /v1/embeddings
2023-06-15 18:47:11 2:47PM DBG Request received: {"model":"ggml-gpt4all-j","file":"","language":"","response_format":"","size":"","prompt":null,"instruction":"","input":null,"stop":null,"messages":[{"role":"user","content":"Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n
.....(Here the rest of my txt file) and then.....
\n\nQuestion: What's is my weeding date?\nHelpful Answer:"}],"stream":false,"echo":false,"top_p":1,"top_k":0,"temperature":0,"max_tokens":0,"n":1,"batch":0,"f16":false,"ignore_eos":false,"repeat_penalty":0,"n_keep":0,"mirostat_eta":0,"mirostat_tau":0,"mirostat":0,"seed":0,"mode":0,"step":0}
2023-06-15 18:47:11 2:47PM DBG Parameter Config: &{OpenAIRequest:{Model:ggml-gpt4all-j File: Language: ResponseFormat: Size: Prompt:
Additional context