
Text "<|endoftext" in every Falcon model response

Open · netandreus opened this issue 2 years ago · 1 comment


LocalAI version:

commit 8034ed3473fb1c8c6f5e3864933c442b377be52e (HEAD -> master, origin/master, origin/HEAD)
Author: Jesús Espino <[email protected]>
Date:   Sun Oct 15 09:17:41 2023 +0200

Environment, CPU architecture, OS, and Version:

  • MacOS Ventura 13.5.2 (22G91)
  • Apple Silicon M2

Describe the bug: at the end of every model response there is an "<|endoftext|>" string.

To Reproduce

  • Download model https://huggingface.co/hadongz/falcon-7b-instruct-gguf/blob/main/falcon-7b-instruct-q4_0.gguf
  • Save it to ./models/gpt-3.5-turbo (just as an example, because I use the MacMind client)
  • Add file ./gpt-3.5-turbo.tmpl with this content:
You are an intelligent chatbot. Help the following question with brilliant answers.
Question: {{.Input}}
Answer:
  • Add file gpt-3.5-turbo.yaml with this content:
context_size: 2000
f16: true
gpu_layers: 1
name: gpt-3.5-turbo
parameters:
  model: gpt-3.5-turbo
  temperature: 0.9
  top_k: 40
  top_p: 0.65
  • Build using the official LocalAI docs for Apple Silicon
  • Start LocalAI with this command:
./local-ai --debug
  • Run request with curl:
(base) andrey@m2 current % curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "What is Abu-Dhabi?"}],
     "temperature": 0.9
   }'
{"created":1697527790,"object":"chat.completion","id":"9587206d-0939-4b40-8f5c-1a0695db9a5c","model":"gpt-3.5-turbo","choices":[{"index":0,"finish_reason":"stop","message":{"role":"assistant","content":" As an intelligent chatbot, I don't have a physical location, but Abu Dhabi is a city in the United Arab Emirates known for its luxurious lifestyle, beautiful beaches, and modern architecture.\u003c|endoftext|\u003e"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}%
  • There is \u003c|endoftext|\u003e (the JSON-escaped form of <|endoftext|>) at the end of the content string.
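A possible server-side workaround (an assumption on my part, based on the StopWords/Cutstrings fields visible in the debug configuration dump below, not a confirmed fix): declare the token as a stop word in the model YAML so the backend cuts generation before emitting it. Sketch of the same gpt-3.5-turbo.yaml with that one addition:

```yaml
# gpt-3.5-turbo.yaml — original config plus a stop-word entry (untested sketch)
context_size: 2000
f16: true
gpu_layers: 1
name: gpt-3.5-turbo
stopwords:
  - "<|endoftext|>"
parameters:
  model: gpt-3.5-turbo
  temperature: 0.9
  top_k: 40
  top_p: 0.65
```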

Expected behavior

  • Response content without <|endoftext|> appended at the end.
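Until this is fixed server-side, the token can be stripped on the client. A minimal sketch (the helper name and marker list are my own, not part of LocalAI):

```python
# Client-side workaround: strip a trailing end-of-text marker from the
# assistant message content returned by /v1/chat/completions.
END_MARKERS = ("<|endoftext|>", "<|endoftext")  # full and truncated forms

def strip_eot(content: str) -> str:
    """Remove a trailing end-of-text token, plus any whitespace before it."""
    for marker in END_MARKERS:
        if content.endswith(marker):
            return content[: -len(marker)].rstrip()
    return content

if __name__ == "__main__":
    raw = "Abu Dhabi is a city in the United Arab Emirates.<|endoftext|>"
    print(strip_eot(raw))  # prints the sentence without the token
```

This only hides the symptom for one client; the stop-word config above is the better place to fix it for all consumers.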

Logs

(base) andrey@m2 current % ./local-ai --debug
11:29AM DBG no galleries to load
11:29AM INF Starting LocalAI using 4 threads, with models path: /Users/andrey/sandbox/local_ai/current/models
11:29AM INF LocalAI version: v1.30.0-28-g8034ed3 (8034ed3473fb1c8c6f5e3864933c442b377be52e)
11:29AM DBG Model: gpt-3.5-turbo (config: {PredictionOptions:{Model:gpt-3.5-turbo Language: N:0 TopP:0.65 TopK:40 Temperature:0.9 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:gpt-3.5-turbo F16:true Threads:0 Debug:false Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:1 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:2000 NUMA:false LoraAdapter: LoraBase: NoMulMatQ:false DraftModel: NDraft:0 Quantization:} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}})
11:29AM DBG Extracting backend assets files to /tmp/localai/backend_data

 ┌───────────────────────────────────────────────────┐
 │                   Fiber v2.49.2                   │
 │               http://127.0.0.1:8080               │
 │       (bound on host 0.0.0.0 and port 8080)       │
 │                                                   │
 │ Handlers ............ 71  Processes ........... 1 │
 │ Prefork ....... Disabled  PID .............. 2698 │
 └───────────────────────────────────────────────────┘

11:30AM DBG Request received:
11:30AM DBG Configuration read: &{PredictionOptions:{Model:gpt-3.5-turbo Language: N:0 TopP:0.65 TopK:40 Temperature:0.9 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:gpt-3.5-turbo F16:true Threads:4 Debug:true Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:1 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:2000 NUMA:false LoraAdapter: LoraBase: NoMulMatQ:false DraftModel: NDraft:0 Quantization:} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}}
11:30AM DBG Parameters: &{PredictionOptions:{Model:gpt-3.5-turbo Language: N:0 TopP:0.65 TopK:40 Temperature:0.9 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:gpt-3.5-turbo F16:true Threads:4 Debug:true Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:1 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:2000 NUMA:false LoraAdapter: LoraBase: NoMulMatQ:false DraftModel: NDraft:0 Quantization:} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}}
11:30AM DBG Prompt (before templating): What is your Abu-Dhabi?
11:30AM DBG Template found, input modified to: You are an intelligent chatbot "Esenia". Help the following question with brilliant answers.
Question: What is your Abu-Dhabi?
Answer:
11:30AM DBG Prompt (after templating): You are an intelligent chatbot "Esenia". Help the following question with brilliant answers.
Question: What is your Abu-Dhabi?
Answer:
11:30AM DBG Loading model 'gpt-3.5-turbo' greedly from all the available backends: llama, llama-stable, gpt4all, falcon, gptneox, bert-embeddings, falcon-ggml, gptj, gpt2, dolly, mpt, replit, starcoder, bloomz, rwkv, whisper, stablediffusion, piper
11:30AM DBG [llama] Attempting to load
11:30AM DBG Loading model llama from gpt-3.5-turbo
11:30AM DBG Loading model in memory from file: /Users/andrey/sandbox/local_ai/current/models/gpt-3.5-turbo
11:30AM DBG Loading GRPC Model llama: {backendString:llama model:gpt-3.5-turbo threads:4 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0x14000382d00 externalBackends:map[] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false}
11:30AM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama
11:30AM DBG GRPC Service for gpt-3.5-turbo will be running at: '127.0.0.1:49496'
11:30AM DBG GRPC Service state dir: /var/folders/f9/1b1jz83s4ysfn9zfncbsb8y40000gn/T/go-processmanager2511044327
11:30AM DBG GRPC Service Started
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:49496: connect: connection refused"
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr 2023/10/17 11:30:04 gRPC Server listening at 127.0.0.1:49496
11:30AM DBG GRPC Service Ready
11:30AM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:gpt-3.5-turbo ContextSize:2000 Seed:0 NBatch:512 F16Memory:true MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:1 MainGPU: TensorSplit: Threads:4 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/Users/andrey/sandbox/local_ai/current/models/gpt-3.5-turbo Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 Tokenizer: LoraBase: LoraAdapter: NoMulMatQ:false DraftModel: AudioPath: Quantization:}
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr create_gpt_params: loading model /Users/andrey/sandbox/local_ai/current/models/gpt-3.5-turbo
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: loaded meta data with 18 key-value pairs and 196 tensors from /Users/andrey/sandbox/local_ai/current/models/gpt-3.5-turbo (version GGUF V2 (latest))
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor    0:                token_embd.weight q4_0     [  4544, 65024,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor    1:           blk.0.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor    2:             blk.0.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor    3:            blk.0.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor    4:         blk.0.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor    5:              blk.0.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor    6:            blk.0.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor    7:           blk.1.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor    8:             blk.1.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor    9:            blk.1.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   10:         blk.1.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   11:              blk.1.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   12:            blk.1.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   13:           blk.2.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   14:             blk.2.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   15:            blk.2.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   16:         blk.2.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   17:              blk.2.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   18:            blk.2.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   19:           blk.3.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   20:             blk.3.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   21:            blk.3.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   22:         blk.3.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   23:              blk.3.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   24:            blk.3.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   25:           blk.4.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   26:             blk.4.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   27:            blk.4.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   28:         blk.4.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   29:              blk.4.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   30:            blk.4.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   31:           blk.5.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   32:             blk.5.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   33:            blk.5.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   34:         blk.5.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   35:              blk.5.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   36:            blk.5.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   37:           blk.6.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   38:             blk.6.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   39:            blk.6.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   40:         blk.6.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   41:              blk.6.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   42:            blk.6.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   43:           blk.7.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   44:             blk.7.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   45:            blk.7.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   46:         blk.7.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   47:              blk.7.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   48:            blk.7.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   49:           blk.8.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   50:             blk.8.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   51:            blk.8.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   52:         blk.8.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   53:              blk.8.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   54:            blk.8.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   55:           blk.9.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   56:             blk.9.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   57:            blk.9.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   58:         blk.9.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   59:              blk.9.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   60:            blk.9.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   61:          blk.10.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   62:            blk.10.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   63:           blk.10.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   64:        blk.10.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   65:             blk.10.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   66:           blk.10.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   67:          blk.11.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   68:            blk.11.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   69:           blk.11.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   70:        blk.11.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   71:             blk.11.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   72:           blk.11.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   73:          blk.12.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   74:            blk.12.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   75:           blk.12.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   76:        blk.12.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   77:             blk.12.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   78:           blk.12.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   79:          blk.13.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   80:            blk.13.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   81:           blk.13.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   82:        blk.13.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   83:             blk.13.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   84:           blk.13.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   85:          blk.14.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   86:            blk.14.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   87:           blk.14.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   88:        blk.14.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   89:             blk.14.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   90:           blk.14.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   91:          blk.15.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   92:            blk.15.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   93:           blk.15.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   94:        blk.15.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   95:             blk.15.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   96:           blk.15.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   97:          blk.16.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   98:            blk.16.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor   99:           blk.16.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  100:        blk.16.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  101:             blk.16.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  102:           blk.16.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  103:          blk.17.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  104:            blk.17.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  105:           blk.17.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  106:        blk.17.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  107:             blk.17.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  108:           blk.17.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  109:          blk.18.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  110:            blk.18.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  111:           blk.18.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  112:        blk.18.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  113:             blk.18.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  114:           blk.18.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  115:          blk.19.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  116:            blk.19.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  117:           blk.19.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  118:        blk.19.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  119:             blk.19.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  120:           blk.19.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  121:          blk.20.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  122:            blk.20.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  123:           blk.20.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  124:        blk.20.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  125:             blk.20.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  126:           blk.20.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  127:          blk.21.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  128:            blk.21.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  129:           blk.21.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  130:        blk.21.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  131:             blk.21.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  132:           blk.21.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  133:          blk.22.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  134:            blk.22.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  135:           blk.22.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  136:        blk.22.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  137:             blk.22.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  138:           blk.22.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  139:          blk.23.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  140:            blk.23.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  141:           blk.23.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  142:        blk.23.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  143:             blk.23.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  144:           blk.23.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  145:          blk.24.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  146:            blk.24.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  147:           blk.24.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  148:        blk.24.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  149:             blk.24.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  150:           blk.24.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  151:          blk.25.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  152:            blk.25.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  153:           blk.25.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  154:        blk.25.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  155:             blk.25.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  156:           blk.25.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  157:          blk.26.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  158:            blk.26.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  159:           blk.26.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  160:        blk.26.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  161:             blk.26.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  162:           blk.26.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  163:          blk.27.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  164:            blk.27.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  165:           blk.27.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  166:        blk.27.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  167:             blk.27.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  168:           blk.27.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  169:          blk.28.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  170:            blk.28.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  171:           blk.28.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  172:        blk.28.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  173:             blk.28.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  174:           blk.28.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  175:          blk.29.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  176:            blk.29.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  177:           blk.29.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  178:        blk.29.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  179:             blk.29.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  180:           blk.29.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  181:          blk.30.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  182:            blk.30.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  183:           blk.30.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  184:        blk.30.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  185:             blk.30.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  186:           blk.30.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  187:          blk.31.attn_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  188:            blk.31.attn_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  189:           blk.31.attn_qkv.weight q4_0     [  4544,  4672,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  190:        blk.31.attn_output.weight q4_0     [  4544,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  191:             blk.31.ffn_up.weight q4_0     [  4544, 18176,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  192:           blk.31.ffn_down.weight q4_0     [ 18176,  4544,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  193:               output_norm.weight f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  194:                 output_norm.bias f32      [  4544,     1,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - tensor  195:                    output.weight q8_0     [  4544, 65024,     1,     1 ]
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv   0:                       general.architecture str
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv   1:                               general.name str
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv   2:                      falcon.context_length u32
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv   3:                  falcon.tensor_data_layout str
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv   4:                    falcon.embedding_length u32
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv   5:                 falcon.feed_forward_length u32
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv   6:                         falcon.block_count u32
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv   7:                falcon.attention.head_count u32
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv   8:             falcon.attention.head_count_kv u32
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv   9:        falcon.attention.layer_norm_epsilon f32
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv  10:                          general.file_type u32
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv  11:                       tokenizer.ggml.model str
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv  15:                      tokenizer.ggml.merges arr
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - kv  17:               general.quantization_version u32
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - type  f32:   66 tensors
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - type q4_0:  129 tensors
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_model_loader: - type q8_0:    1 tensors
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: format         = GGUF V2 (latest)
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: arch           = falcon
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: vocab type     = BPE
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: n_vocab        = 65024
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: n_merges       = 64784
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: n_ctx_train    = 2048
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: n_ctx          = 2000
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: n_embd         = 4544
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: n_head         = 71
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: n_head_kv      = 1
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: n_layer        = 32
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: n_rot          = 64
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: n_gqa          = 71
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: f_norm_eps     = 1.0e-05
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: f_norm_rms_eps = 0.0e+00
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: n_ff           = 18176
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: freq_base      = 10000.0
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: freq_scale     = 1
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: model type     = 7B
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: model ftype    = mostly Q4_0
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: model params   = 7.22 B
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: model size     = 3.92 GiB (4.66 BPW)
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: general.name   = Falcon
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: BOS token = 11 '<|endoftext|>'
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: EOS token = 11 '<|endoftext|>'
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_print_meta: LF token  = 193 '
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr '
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_tensors: ggml ctx size = 4013.54 MB
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llm_load_tensors: mem required  = 4013.54 MB (+   15.62 MB per state)
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr ....................................................................................
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_new_context_with_model: kv self size  =   15.62 MB
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_new_context_with_model: compute buffer total size =  358.94 MB
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_new_context_with_model: max tensor size =   299.39 MB
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr ggml_metal_add_buffer: allocated 'data            ' buffer, size =  4013.55 MB, ( 4014.05 / 73728.00)
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr ggml_metal_add_buffer: allocated 'eval            ' buffer, size =     1.48 MB, ( 4015.53 / 73728.00)
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr ggml_metal_add_buffer: allocated 'kv              ' buffer, size =    17.62 MB, ( 4033.16 / 73728.00)
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr ggml_metal_add_buffer: allocated 'alloc           ' buffer, size =   357.48 MB, ( 4390.64 / 73728.00)
11:30AM DBG [llama] Loads OK
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_print_timings:        load time =   739.00 ms
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_print_timings:      sample time =    53.82 ms /    40 runs   (    1.35 ms per token,   743.27 tokens per second)
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_print_timings: prompt eval time =   125.70 ms /    34 tokens (    3.70 ms per token,   270.47 tokens per second)
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_print_timings:        eval time =   608.14 ms /    39 runs   (   15.59 ms per token,    64.13 tokens per second)
11:30AM DBG GRPC(gpt-3.5-turbo-127.0.0.1:49496): stderr llama_print_timings:       total time =   794.58 ms
11:30AM DBG Response: {"created":1697527790,"object":"chat.completion","id":"9587206d-0939-4b40-8f5c-1a0695db9a5c","model":"gpt-3.5-turbo","choices":[{"index":0,"finish_reason":"stop","message":{"role":"assistant","content":" As an intelligent chatbot, I don't have a physical location, but Abu Dhabi is a city in the United Arab Emirates known for its luxurious lifestyle, beautiful beaches, and modern architecture.\u003c|endoftext|\u003e"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
[127.0.0.1]:49495 200 - POST /v1/chat/completions
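
The log above shows that this Falcon GGUF maps both BOS and EOS to token 11 (`<|endoftext|>`), and the raw token text is being passed through into the completion instead of being treated as a stop marker. A possible workaround, assuming LocalAI's `stopwords` option in the model YAML applies to this backend, is to register the literal token as a stop word so it is trimmed from responses (sketch only, extending the config from the reproduction steps):

```yaml
# gpt-3.5-turbo.yaml — same config as above, with the EOS marker
# registered as a stop word so it is stripped from completions.
context_size: 2000
f16: true
gpu_layers: 1
name: gpt-3.5-turbo
stopwords:
  - "<|endoftext|>"
parameters:
  model: gpt-3.5-turbo
  temperature: 0.9
  top_k: 40
  top_p: 0.65
```

With this in place, generation should halt (and the marker be removed) as soon as the model emits the token, rather than the client having to strip the trailing `\u003c|endoftext|\u003e` text itself.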

netandreus commented on Oct 17, 2023

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days.

github-actions[bot] commented on Nov 14, 2025

This issue was closed because it has been stalled for 5 days with no activity.

github-actions[bot] commented on Nov 20, 2025