Ollama not working
I've been trying to set up a test environment using the in-memory database with Ollama, and it seems there may be an issue with the Ollama integration.
ERROR (33):
typedai-dev | severity: "ERROR"
typedai-dev | message: "Error handler: _generateMessage not implemented for ollama:llama3:7b"
typedai-dev | request: {}
typedai-dev | error: {}
typedai-dev | ERROR (33):
typedai-dev | severity: "ERROR"
typedai-dev | stack_trace: "Error: _generateMessage not implemented for ollama:llama3:7b\n at OllamaLLM._generateMessage (/home/typedai/src/llm/base-llm.ts:210:9)\n at OllamaLLM.generateMessage (/home/typedai/src/llm/base-llm.ts:206:15)\n at Object.<anonymous> (/home/typedai/src/routes/chat/chat-routes.ts:122:50)\n at preHandlerCallback (/home/typedai/node_modules/fastify/lib/handleRequest.js:139:37)\n at validationCompleted (/home/typedai/node_modules/fastify/lib/handleRequest.js:123:5)\n at preValidationCallback (/home/typedai/node_modules/fastify/lib/handleRequest.js:100:5)\n at handler (/home/typedai/node_modules/fastify/lib/handleRequest.js:77:7)\n at /home/typedai/node_modules/fastify/lib/contentTypeParser.js:199:9\n at AsyncResource.runInAsyncScope (node:async_hooks:206:9)\n at done (/home/typedai/node_modules/fastify/lib/contentTypeParser.js:192:14)"
typedai-dev | message: "_generateMessage not implemented for ollama:llama3:7b"
typedai-dev | err: {
typedai-dev | "type": "Error",
typedai-dev | "message": "_generateMessage not implemented for ollama:llama3:7b",
typedai-dev | "stack":
typedai-dev | Error: _generateMessage not implemented for ollama:llama3:7b
typedai-dev | at OllamaLLM._generateMessage (/home/typedai/src/llm/base-llm.ts:210:9)
typedai-dev | at OllamaLLM.generateMessage (/home/typedai/src/llm/base-llm.ts:206:15)
typedai-dev | at Object.<anonymous> (/home/typedai/src/routes/chat/chat-routes.ts:122:50)
typedai-dev | at preHandlerCallback (/home/typedai/node_modules/fastify/lib/handleRequest.js:139:37)
typedai-dev | at validationCompleted (/home/typedai/node_modules/fastify/lib/handleRequest.js:123:5)
typedai-dev | at preValidationCallback (/home/typedai/node_modules/fastify/lib/handleRequest.js:100:5)
typedai-dev | at handler (/home/typedai/node_modules/fastify/lib/handleRequest.js:77:7)
typedai-dev | at /home/typedai/node_modules/fastify/lib/contentTypeParser.js:199:9
typedai-dev | at AsyncResource.runInAsyncScope (node:async_hooks:206:9)
typedai-dev | at done (/home/typedai/node_modules/fastify/lib/contentTypeParser.js:192:14)
Looking at the source, it does seem that there is no _generateMessage implementation, only a _generateText one. Is the current Ollama implementation broken, or is there likely a config issue on my side?
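For what it's worth, here is a minimal sketch of the pattern the stack trace suggests in base-llm.ts: a base class whose default _generateMessage throws, so a subclass that only overrides _generateText fails when the chat route calls generateMessage. The class and method shapes below are my assumptions, not the actual source.

```typescript
// Hypothetical sketch of the base-class pattern implied by the stack
// trace; real signatures in src/llm/base-llm.ts may differ.
abstract class BaseLLM {
  constructor(readonly id: string) {}

  async generateMessage(prompt: string): Promise<string> {
    return this._generateMessage(prompt);
  }

  // Default implementations throw, so each service must override
  // the methods it actually supports.
  protected async _generateMessage(prompt: string): Promise<string> {
    throw new Error(`_generateMessage not implemented for ${this.id}`);
  }

  protected async _generateText(prompt: string): Promise<string> {
    throw new Error(`_generateText not implemented for ${this.id}`);
  }
}

// A service that only overrides _generateText (as the Ollama service
// appears to) hits the base-class error above on chat requests:
class TextOnlyLLM extends BaseLLM {
  protected override async _generateText(prompt: string): Promise<string> {
    return `echo: ${prompt}`;
  }
}

new TextOnlyLLM('ollama:llama3:7b')
  .generateMessage('hi')
  .catch((e) => console.log(e.message));
// logs: _generateMessage not implemented for ollama:llama3:7b
```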
My config for reference:
NODE_ENV=development
LOG_LEVEL=info
# Log non-json human-readable (https://github.com/pinojs/pino-pretty?tab=readme-ov-file#pino-pretty)
LOG_PRETTY=true
PORT=3000
DATABASE_TYPE=memory
AUTH=single_user
SINGLE_USER_EMAIL=
API_BASE_URL=http://localhost:3000/api/
UI_URL=http://localhost:4200/
GCLOUD_PROJECT=fake
DATABASE_NAME=fake
# The next line is a valid ollama url just not printing mine here
OLLAMA_API_URL=http://validurl:11434
# Default human in the loop settings for the autonomous agent
# $USD
HIL_BUDGET=1
# Iterations of the agent control loop
HIL_COUNT=5
GITHUB_TOKEN=
GITHUB_ORG=
GITHUB_USER=
GOOGLE_CUSTOM_SEARCH_ENGINE_ID=
GOOGLE_CUSTOM_SEARCH_KEY=
SERP_API_KEY=
SLACK_BOT_TOKEN=
SLACK_SIGNING_SECRET=
# Ensure that your bot is invited to the channel(s) you want to listen to. This is necessary for the bot to receive events from that channel.
SLACK_CHANNELS=
SLACK_APP_TOKEN=
Hi, I just pushed some changes to main with an update to the Ollama LLM implementation. I've given it a quick test locally to generate some text.
Thanks I'll take a look this afternoon.
This is progressing, but it still doesn't offer the actual feature. It seems that even when using only local models you must have GCLOUD_REGION set, because the chat title generation uses Vertex.
typedai-dev | Configuring vertex provider with fake
typedai-dev | ------------------------------------
typedai-dev | UNHANDLED PROMISE REJECTION!
typedai-dev | Reason: Error: The environment variable GCLOUD_REGION is required and was not found.
typedai-dev | at envVar (/home/typedai/src/utils/env-var.ts:14:9)
typedai-dev | at VertexLLM.provider (/home/typedai/src/llm/services/vertexai.ts:147:64)
typedai-dev | at VertexLLM.aiModel (/home/typedai/src/llm/services/ai-llm.ts:104:15)
typedai-dev | at /home/typedai/src/llm/services/ai-llm.ts:243:18
typedai-dev | at processTicksAndRejections (node:internal/process/task_queues:95:5)
typedai-dev | at async functionWithCallStack (/home/typedai/src/o11y/trace.ts:78:11)
typedai-dev | at async withActiveSpan (/home/typedai/src/o11y/trace.ts:84:22)
typedai-dev | at async VertexLLM._generateMessage (/home/typedai/src/llm/services/ai-llm.ts:166:10)
typedai-dev | Stack: Error: The environment variable GCLOUD_REGION is required and was not found.
typedai-dev | at envVar (/home/typedai/src/utils/env-var.ts:14:9)
typedai-dev | at VertexLLM.provider (/home/typedai/src/llm/services/vertexai.ts:147:64)
typedai-dev | at VertexLLM.aiModel (/home/typedai/src/llm/services/ai-llm.ts:104:15)
typedai-dev | at /home/typedai/src/llm/services/ai-llm.ts:243:18
typedai-dev | at processTicksAndRejections (node:internal/process/task_queues:95:5)
typedai-dev | at async functionWithCallStack (/home/typedai/src/o11y/trace.ts:78:11)
typedai-dev | at async withActiveSpan (/home/typedai/src/o11y/trace.ts:84:22)
typedai-dev | at async VertexLLM._generateMessage (/home/typedai/src/llm/services/ai-llm.ts:166:10)
typedai-dev | ------------------------------------
typedai-dev | (node:31) PromiseRejectionHandledWarning: Promise rejection was handled asynchronously (rejection id: 5)
typedai-dev | INFO (31):
typedai-dev | severity: "INFO"
typedai-dev | message: "LLM call Chat title using vertex:gemini-2.0-flash-lite"
typedai-dev | ERROR (31):
typedai-dev | severity: "ERROR"
typedai-dev | message: "Error handler: The environment variable GCLOUD_REGION is required and was not found."
typedai-dev | request: {}
typedai-dev | error: {}
typedai-dev | ERROR (31):
typedai-dev | severity: "ERROR"
typedai-dev | stack_trace: "Error: The environment variable GCLOUD_REGION is required and was not found.\n at envVar (/home/typedai/src/utils/env-var.ts:14:9)\n at VertexLLM.provider (/home/typedai/src/llm/services/vertexai.ts:147:64)\n at VertexLLM.aiModel (/home/typedai/src/llm/services/ai-llm.ts:104:15)\n at /home/typedai/src/llm/services/ai-llm.ts:243:18\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async functionWithCallStack (/home/typedai/src/o11y/trace.ts:78:11)\n at async withActiveSpan (/home/typedai/src/o11y/trace.ts:84:22)\n at async VertexLLM._generateMessage (/home/typedai/src/llm/services/ai-llm.ts:166:10)"
typedai-dev | message: "The environment variable GCLOUD_REGION is required and was not found."
typedai-dev | err: {
typedai-dev | "type": "Error",
typedai-dev | "message": "The environment variable GCLOUD_REGION is required and was not found.",
typedai-dev | "stack":
typedai-dev | Error: The environment variable GCLOUD_REGION is required and was not found.
typedai-dev | at envVar (/home/typedai/src/utils/env-var.ts:14:9)
typedai-dev | at VertexLLM.provider (/home/typedai/src/llm/services/vertexai.ts:147:64)
typedai-dev | at VertexLLM.aiModel (/home/typedai/src/llm/services/ai-llm.ts:104:15)
typedai-dev | at /home/typedai/src/llm/services/ai-llm.ts:243:18
typedai-dev | at processTicksAndRejections (node:internal/process/task_queues:95:5)
typedai-dev | at async functionWithCallStack (/home/typedai/src/o11y/trace.ts:78:11)
typedai-dev | at async withActiveSpan (/home/typedai/src/o11y/trace.ts:84:22)
typedai-dev | at async VertexLLM._generateMessage (/home/typedai/src/llm/services/ai-llm.ts:166:10)
typedai-dev | }
It would be desirable to be able to use only local models and not need to set any gcloud environment variables during setup. Similarly, setup requires a DATABASE_NAME even when using the in-memory database, which seems unnecessary.
You'll need to update src/llm/services/defaultLlms.ts to return the LLMs defined in src/llm/services/ollama.ts.
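Something along these lines. This is a rough sketch only: the actual exports from ollama.ts and the shape of the returned object will differ, so treat the names below as placeholders.

```typescript
// Hypothetical sketch of wiring Ollama models into defaultLlms.ts.
// LLM, AgentLLMs, and Ollama_Llama3 are stand-ins for whatever
// src/llm/services/ollama.ts actually exports.
interface LLM { id: string; }

const Ollama_Llama3 = (): LLM => ({ id: 'ollama:llama3:7b' });

interface AgentLLMs { easy: LLM; medium: LLM; hard: LLM; }

function defaultLLMs(): AgentLLMs {
  const llama3 = Ollama_Llama3();
  // Use the same local model for every tier so no cloud provider
  // (and none of the GCLOUD_* variables) is required.
  return { easy: llama3, medium: llama3, hard: llama3 };
}

console.log(defaultLLMs().easy.id);
// logs: ollama:llama3:7b
```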
I'm doing some work on the configure scripts, so I'll add some additional checks there.
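As a rough sketch of the kind of check I mean: only require the cloud variables when a cloud provider is actually in use, and only require DATABASE_NAME for non-memory databases. Variable names are taken from the config above; the logic is illustrative, not the actual script.

```typescript
// Hypothetical configure-time validation: flag env vars that will be
// needed at runtime, instead of failing later mid-request.
function missingVars(env: Record<string, string | undefined>): string[] {
  const missing: string[] = [];
  // GCLOUD_REGION should only be required if a Google provider is
  // really configured (treating the placeholder 'fake' as unset).
  const usesVertex = !!env.GCLOUD_PROJECT && env.GCLOUD_PROJECT !== 'fake';
  if (usesVertex && !env.GCLOUD_REGION) missing.push('GCLOUD_REGION');
  // DATABASE_NAME should only be required for non-memory databases.
  if (env.DATABASE_TYPE !== 'memory' && !env.DATABASE_NAME) {
    missing.push('DATABASE_NAME');
  }
  return missing;
}

// With the config from this issue, nothing should be flagged:
console.log(missingVars({ DATABASE_TYPE: 'memory', GCLOUD_PROJECT: 'fake' }));
// logs: []
```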