
Integration setup fails

Open testercell opened this issue 1 year ago • 19 comments

The problem

When setting up the integration, I get an error after entering the address. In the Docker container's logs I see a 200 OK response even though the integration fails.

I do have a web UI running in this Docker instance; could that be the cause, since there isn't a way for HA to then log in to the web UI?

Logs attached

What version of Home Assistant Core has the issue?

2024.8

What was the last working version of Home Assistant Core?

No response

What type of installation are you running?

Home Assistant OS

Integration causing the issue

Ollama

Link to integration documentation on our website

https://www.home-assistant.io/integrations/ollama/

Diagnostics information

No response

Example YAML snippet

No response

Anything in the logs that might be useful for us?

Unexpected exception
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/ollama/config_flow.py", line 92, in async_step_user
    response = await self.client.list()
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/ollama/_client.py", line 894, in list
    return response.json()
           ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/httpx/_models.py", line 764, in json
    return jsonlib.loads(self.content, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Additional information

No response

testercell avatar Aug 09 '24 20:08 testercell

Hey there @synesthesiam, mind taking a look at this issue as it has been labeled with an integration (ollama) you are listed as a code owner for? Thanks!

Code owner commands

Code owners of ollama can trigger bot actions by commenting:

  • @home-assistant close Closes the issue.
  • @home-assistant rename Awesome new title Renames the issue.
  • @home-assistant reopen Reopen the issue.
  • @home-assistant unassign ollama Removes the current integration label and assignees on the issue, add the integration domain after the command.
  • @home-assistant add-label needs-more-information Add a label (needs-more-information, problem in dependency, problem in custom component) to the issue.
  • @home-assistant remove-label needs-more-information Remove a label (needs-more-information, problem in dependency, problem in custom component) on the issue.

(message by CodeOwnersMention)


ollama documentation ollama source (message by IssueLinks)

home-assistant[bot] avatar Aug 09 '24 20:08 home-assistant[bot]

@testercell Have you tried using the Ollama port instead of the port for the web UI (i.e., port 11434 instead of 3000)? I received the same error when I tried setting up the integration with the web UI port.
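
A quick way to check which port is serving the Ollama API (a sketch; substitute your own host IP, and 3000 is just a common web UI default):

# Ollama's REST API answers with JSON on its own port (11434 by default)
curl http://<ollama-host-ip>:11434/api/tags

# A web UI port returns HTML instead of JSON, which is what produces the
# "Expecting value: line 1 column 1 (char 0)" error when the config flow parses the response
curl http://<ollama-host-ip>:3000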

dontpanic-13 avatar Aug 12 '24 23:08 dontpanic-13

I'm having the same/similar issue. I'm unable to configure the Ollama integration due to an "Unable to connect" error with URL http://192.168.1.100:11434

Logger: homeassistant.components.ollama.config_flow
Source: components/ollama/config_flow.py:99
integration: Ollama (documentation, issues)
First occurred: 13:19:52 (3 occurrences)
Last logged: 21:51:47

Unexpected exception
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/ollama/config_flow.py", line 99, in async_step_user
    response = await self.client.list()
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/ollama/_client.py", line 894, in list
    return response.json()
           ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/httpx/_models.py", line 764, in json
    return jsonlib.loads(self.content, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Home Assistant

Core 2024.9.1
Supervisor 2024.08.0
Operating System 13.1
Frontend 20240906.0

dead-pixelz avatar Sep 12 '24 02:09 dead-pixelz

Also cannot connect to Ollama running in a docker container.

rlust avatar Nov 19 '24 18:11 rlust

The same problem exists here.

Unexpected exception
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/ollama/config_flow.py", line 99, in async_step_user
    response = await self.client.list()
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/ollama/_client.py", line 895, in list
    return response.json()
           ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/httpx/_models.py", line 766, in json
    return jsonlib.loads(self.content, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Neso2 avatar Nov 19 '24 22:11 Neso2

I have the same issue.

Core: 2025.1.2 Supervisor: 2024.12.3

I am running the Ollama server on a Seeed reComputer J4012 (Orin NX 16GB). Ollama runs natively, not in a container; Open Web UI runs in a container. The Ollama server is accessible through a browser on port 8080, both locally and remotely, and I have disabled auth. I also set OLLAMA_HOST to the host's IP and port 11434 in the environment variables and restarted the host. I get the error below when I attempt to set up the integration using the web port (8080). I get "failed to connect" and no error in the logs when I attempt to set it up using port 11434.

Unexpected exception
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/ollama/config_flow.py", line 99, in async_step_user
    response = await self.client.list()
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/ollama/_client.py", line 1075, in list
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
    ...<3 lines>...
    )
    ^
  File "/usr/local/lib/python3.13/site-packages/ollama/_client.py", line 679, in _request
    return cls(**(await self._request_raw(*args, **kwargs)).json())
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
  File "/usr/local/lib/python3.13/site-packages/httpx/_models.py", line 766, in json
    return jsonlib.loads(self.content, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ~~~~~~~~~~~~~~~~~~~~~~~^^^
  File "/usr/local/lib/python3.13/json/decoder.py", line 345, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/json/decoder.py", line 363, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

ckhyatt avatar Jan 11 '25 20:01 ckhyatt

You may need to configure Ollama to listen on all network interfaces: https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-configure-ollama-server

By default, it will only listen on 127.0.0.1
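
For a native (non-container) install, a minimal sketch of doing this, assuming a typical Linux setup, is to set the variable for the server process and then verify reachability from the Home Assistant host (IP and port are placeholders):

# bind the Ollama API to all interfaces instead of only 127.0.0.1
OLLAMA_HOST=0.0.0.0 ollama serve

# from the Home Assistant host, this should return a JSON list of installed models
curl http://<ollama-host-ip>:11434/api/tags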

synesthesiam avatar Jan 13 '25 15:01 synesthesiam

Thanks. I really appreciate your reply. I watched the two-hour VPE launch and have ordered two, hopefully for sometime soon in January. I have also been following the thread on the Jetson forums: https://forums.developer.nvidia.com/t/jetson-ai-lab-home-assistant-integration/288225/64 I recently upgraded my HA Intel NUC i7 Pentium to the Beelink EQI12 (Intel 12th Gen Alder Lake, Core i3) in preparation for the Wyoming Protocol (Whisper/Piper/Rhasspy).

I also purchased the Seeed Computer J4012 Orin NX 16GB. My design intention is to use the native HA voice Wyoming Protocol-Whisper/Piper/Rhasspy for voice control of HA and then let the fallback feature handle general knowledge voice queries via Ollama running on the J4012 and the Ollama integration.

I have flashed the Jetson firmware following the wiki on the Seeed Studio site. I have installed the Ollama server natively rather than in a container for best performance, and Open Web UI via Docker.

I have tried many different versions of what is described in the link and the Ollama Tutorial on the Nvidia site: https://www.jetson-ai-lab.com/tutorial_ollama.html

docker run -it --rm --network=host --add-host=host.docker.internal:host-gateway ghcr.io/open-webui/open-webui:main

I have added the environment variable -e WEBUI_AUTH=False because I thought perhaps the Auth was causing the HA integration to fail.

I have tried adding "OLLAMA_HOST=0.0.0.0". I have tried adding -e OLLAMA_BASE_URL=http://127.0.0.1:11434. I have tried adding these to the docker run command for the Open Web UI container, and I have added them to and removed them from /etc/environment.

With the environment variables added to the docker run command I can access it via Open Web UI. The native HA Ollama integration still fails to connect, as does the HACS version.

I flashed the J4012 again to try the Seeed Local AI guide: https://wiki.seeedstudio.com/local_ai_ssistant/. Unfortunately, step two (reComputer run ollama) fails because the script is written for L4T 36.3.0 (JetPack 6.0) and I am running 36.4.0 (JetPack 6.1).

I am obviously doing something wrong but after spending most of the weekend working on this I just can't figure it out.

I would be very appreciative for any help!

Thanks!

ckhyatt avatar Jan 13 '25 17:01 ckhyatt

Good morning @synesthesiam. To clarify, I have no issue running models locally on the J4012. My issue is configuring Open Web UI. It occurs to me that I may misunderstand how communication between the Ollama server and HA works. Is Open Web UI necessary for HA to communicate with the Ollama server? Thanks!

ckhyatt avatar Jan 14 '25 12:01 ckhyatt

You're right. Home Assistant uses Ollama (server) and doesn't use Open Web UI.

dannytsang avatar Jan 14 '25 12:01 dannytsang

Thanks! That's very helpful. How does HA communicate with the API? I presumed, obviously incorrectly, that it was HTTP.

Thanks!

ckhyatt avatar Jan 14 '25 12:01 ckhyatt

@dannytsang thanks for the helpful comment earlier! I flashed it to 6.1 again, installed the Ollama server, and modified the systemd service, and the integration setup now connects! The integration configuration selects llama3.2 as the model. The docs for the integration suggest using 3.1:8b. Is 3.2 now the recommended model? I presume this is the 3.2 8b version? Thanks!

ckhyatt avatar Jan 14 '25 14:01 ckhyatt

I'm glad it's working.

I'm not sure what is recommended from home assistant.

Each model is different and has different hardware requirements, training data, etc. Have a look here for the models in the Ollama library: https://ollama.com/. Look around the Internet and try them out. This is where having Open Web UI to manage your models is handy if you don't want to use the command line to do it. You can run Open Web UI and HA at the same time if your hardware is up to it.

As my own personal example, llava has vision built in, so I can send images to that model from cameras. Llama 3.3 is too big for my hardware, so I use 3.2 for Home Assistant Assist.
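
For reference, the command-line route mentioned above is just a couple of ollama commands run on the server; the model name here is only an example:

# download a model onto the Ollama server
ollama pull llama3.2

# list the models the server currently has
ollama list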

dannytsang avatar Jan 14 '25 20:01 dannytsang

Makes sense. Thanks for your help!

ckhyatt avatar Jan 14 '25 20:01 ckhyatt

Is anyone using the fallback feature introduced in 2024.12.x? When I set up the Ollama integration it's enabled by default. However, when I ask Assist a general knowledge question it doesn't fall back to Ollama.

Image

ckhyatt avatar Jan 14 '25 21:01 ckhyatt

I am experiencing nearly the same issue as the original post,

Logger: homeassistant.components.ollama.config_flow
Source: components/ollama/config_flow.py:99
integration: ollama ([documentation](https://www.home-assistant.io/integrations/ollama), [issues](https://github.com/home-assistant/core/issues?q=is%3Aissue+is%3Aopen+label%3A%22integration%3A+ollama%22))
First occurred: 10:42:53 AM (1 occurrences)
Last logged: 10:42:53 AM
Unexpected exception

Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/components/ollama/config_flow.py", line 99, in async_step_user
    response = await self.client.list()
               ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/ollama/_client.py", line 1079, in list
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
    ...<3 lines>...
    )
    ^
  File "/usr/local/lib/python3.13/site-packages/ollama/_client.py", line 682, in _request
    return cls(**(await self._request_raw(*args, **kwargs)).json())
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/ollama/_client.py", line 628, in _request_raw
    raise ConnectionError(CONNECTION_ERROR_MESSAGE) from None
ConnectionError: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. https://ollama.com/download

I believe Ollama is configured correctly; I am able to reach it from Open Web UI via the machine's IP at :11434, but Home Assistant does not seem able to connect to it.

ryanfiller avatar Mar 20 '25 15:03 ryanfiller

hey @ryanfiller, I solved my issue by allowing ollama (running as a service in Ubuntu) to listen on 0.0.0.0 instead of 127.0.0.1. You can edit this with "sudo systemctl edit ollama" and set the "Environment" line so OLLAMA_HOST equals 0.0.0.0. I am running openwebui and homeassistant on different servers than ollama, and this solved my issue.

[Service]
Environment="OLLAMA_HOST=0.0.0.0"
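
The end-to-end sequence on a systemd host would look roughly like this (a sketch; the unit name assumes the standard Ollama service install):

# open an override for the ollama unit and add the [Service] block shown above
sudo systemctl edit ollama

# restart so the new environment takes effect
sudo systemctl restart ollama

# from the Home Assistant host, this should now return a JSON list of models
curl http://<ollama-host-ip>:11434/api/tags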

dead-pixelz avatar Mar 20 '25 22:03 dead-pixelz

@dead-pixelz Thanks for the reply. As far as I can tell I already have Environment="OLLAMA_HOST=0.0.0.0" correctly set on the machine where Ollama is running. I am also running Open Web UI on a different machine than Ollama, and http://<ip>:11434 works for Open Web UI to connect to it. From that same computer I am also running Home Assistant, but it doesn't seem to be able to reach Ollama via the same URL.

ryanfiller avatar Mar 21 '25 14:03 ryanfiller

I'm in a similar boat using Ollama with Open WebUI: I prefer not to expose Ollama since it is unauthenticated. The nice thing is that Open WebUI has a built-in authenticated proxy for Ollama, but this integration doesn't support passing in the bearer token. I think it would be a relatively simple addition for anyone who is familiar with the hass codebase.

Making a request through the authenticated open webui endpoint is as simple as:

curl -X 'GET' \
  'https://your.ollama.com/ollama/v1/models' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

So to support Open WebUI, an optional bearer token config field would need to be added; if set, the integration would send an Authorization: Bearer xxxx header and respect the /ollama/ path when it is provided as part of the Ollama URL (hopefully that's already the case, so people can reverse-proxy Ollama's API under arbitrary subpaths).
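
For illustration, the request the integration would effectively need to make through the proxy could look like this (a sketch; the /ollama/api/tags path assumes Open WebUI forwards the native Ollama API under /ollama/, and the token is a placeholder):

curl -X 'GET' \
  'https://your.ollama.com/ollama/api/tags' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxx'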

stephen304 avatar May 12 '25 20:05 stephen304

There hasn't been any activity on this issue recently. Due to the high number of incoming GitHub notifications, we have to clean some of the old issues, as many of them have already been resolved with the latest updates. Please make sure to update to the latest Home Assistant version and check if that solves the issue. Let us know if that works for you by adding a comment 👍 This issue has now been marked as stale and will be closed if no further activity occurs. Thank you for your contributions.