
Error with Flowise and LocalAI when using Local QnA with embeddings

Oneil974 opened this issue on Jun 15, 2023 · 0 comments

LocalAI version:

localai-api-1 [quay.io/go-skynet/local-ai:latest] c171b1419d1b

Environment, CPU architecture, OS, and Version:

Darwin MBP-de-Me 22.5.0 Darwin Kernel Version 22.5.0: Mon Apr 24 20:52:43 PDT 2023; root:xnu-8796.121.2~5/RELEASE_ARM64_T8112 arm64. A MacBook Pro M2 with 8 GB RAM.

Describe the bug

When I use a simple LLM Chain everything works, but when I use embeddings the container crashes. I am using the provided Flowise example for LocalAI Local QnA with only a simple .txt file.

To Reproduce

Send a simple prompt through the example flow with my simple text file loaded; a sketch of the equivalent direct API calls follows.
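
For reference, the failing path can also be exercised outside Flowise by calling LocalAI's OpenAI-compatible API directly. This is a minimal sketch, not the exact requests Flowise issues: it assumes LocalAI is listening on localhost:8080 and reuses the ggml-gpt4all-j model name that appears in the logs below.

```python
# Minimal sketch reproducing the Local QnA flow outside Flowise.
# Assumes LocalAI listens on localhost:8080; adjust BASE_URL to your setup.
import requests

BASE_URL = "http://localhost:8080/v1"

# Embed the document text (the /v1/embeddings call seen in the logs).
emb = requests.post(
    f"{BASE_URL}/embeddings",
    json={"model": "ggml-gpt4all-j", "input": "contents of my simple txt file"},
    timeout=300,
)
print("embeddings:", emb.status_code)

# Ask the question over the retrieved context, as the QnA chain does.
chat = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": "ggml-gpt4all-j",
        "messages": [{"role": "user", "content": "What is my wedding date?"}],
    },
    timeout=300,
)
print("chat:", chat.status_code, chat.json())
```

With the crash described above, the container is expected to die during one of these calls rather than return a response.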

Expected behavior

I expect the chatbot to return a response.

Logs

2023-06-15 18:47:11 [172.18.0.1]:47224 200 - POST /v1/embeddings
2023-06-15 18:47:11 2:47PM DBG Request received: {"model":"ggml-gpt4all-j","file":"","language":"","response_format":"","size":"","prompt":null,"instruction":"","input":null,"stop":null,"messages":[{"role":"user","content":"Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n .....(Here the rest of my txt file) and then..... \n\nQuestion: What's is my weeding date?\nHelpful Answer:"}],"stream":false,"echo":false,"top_p":1,"top_k":0,"temperature":0,"max_tokens":0,"n":1,"batch":0,"f16":false,"ignore_eos":false,"repeat_penalty":0,"n_keep":0,"mirostat_eta":0,"mirostat_tau":0,"mirostat":0,"seed":0,"mode":0,"step":0}
2023-06-15 18:47:11 2:47PM DBG Parameter Config: &{OpenAIRequest:{Model:ggml-gpt4all-j File: Language: ResponseFormat: Size: Prompt: Instruction: Input: Stop: Messages:[] Stream:false Echo:false TopP:1 TopK:80 Temperature:0.9 Maxtokens:512 N:0 Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 Seed:0 Mode:0 Step:0} Name: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:512 F16:false Threads:4 Debug:true Roles:map[] Embeddings:false Backend: TemplateConfig:{Completion: Chat: Edit:} MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 ImageGenerationAssets: PromptCachePath: PromptCacheAll:false PromptStrings:[] InputStrings:[] InputToken:[]}
2023-06-15 18:47:11 2:47PM DBG Loading model 'ggml-gpt4all-j' greedly
2023-06-15 18:47:11 2:47PM DBG [llama] Attempting to load
2023-06-15 18:47:11 2:47PM DBG Loading model llama from ggml-gpt4all-j
2023-06-15 18:47:11 2:47PM DBG Loading model in memory from file: /models/ggml-gpt4all-j
2023-06-15 18:47:11 llama.cpp: loading model from /models/ggml-gpt4all-j
2023-06-15 18:47:29 /build/entrypoint.sh: line 11: 194 Killed ./local-ai "$@"
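
The last line is the shell in /build/entrypoint.sh reporting that the local-ai process received SIGKILL ("Killed"), which usually means the kernel OOM killer terminated it; the model being loaded by the llama.cpp backend on a machine with 8 GB RAM (under Docker Desktop's VM memory cap on macOS) would be consistent with that. A minimal sketch to check this while re-sending the failing request, assuming the Docker SDK for Python (pip install docker) and the container name localai-api-1 from above:

```python
# Watch the LocalAI container's memory usage while re-sending the failing
# request in another terminal. A spike up to the limit right before the
# process dies points at the OOM killer.
# Assumes `pip install docker` and a container named "localai-api-1".
import docker

client = docker.from_env()
container = client.containers.get("localai-api-1")

for stats in container.stats(stream=True, decode=True):
    usage = stats["memory_stats"].get("usage", 0)
    limit = stats["memory_stats"].get("limit", 1)
    print(f"memory: {usage / 2**20:.0f} MiB / {limit / 2**20:.0f} MiB")
```

If the reported usage climbs to the limit just before the container dies, raising Docker Desktop's memory allocation or using a smaller model would be the usual next step.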

Additional context
