
Uptime Kuma v2 support

Open BigBoot opened this issue 1 year ago • 11 comments

This issue is meant to keep track of support for the upcoming version 2.0 of Uptime Kuma, currently available as a beta.

Since the v1 and v2 APIs are incompatible, there is a separate version of AutoKuma for v2. It is made available as Docker tags with the prefix uptime-kuma-v2-, e.g. to get the latest dev version with v2 support use:

docker pull ghcr.io/bigboot/autokuma:uptime-kuma-v2-master

To pin a specific commit:

docker pull ghcr.io/bigboot/autokuma:uptime-kuma-v2-sha-23287bc

For source builds, v2 support can be enabled using a feature flag, e.g.:

cargo install --git https://github.com/BigBoot/AutoKuma.git --features uptime-kuma-v2 kuma-cli
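
As a reference, a minimal docker-compose service using the v2 image tag might look like the sketch below; the Uptime Kuma URL and credential variables are placeholders, not anything prescribed by this issue:

services:
  autokuma:
    # uptime-kuma-v2-master tracks the latest dev build; swap in a sha-pinned tag to stay on one commit.
    image: ghcr.io/bigboot/autokuma:uptime-kuma-v2-master
    restart: unless-stopped
    environment:
      AUTOKUMA__KUMA__URL: http://uptime-kuma:3001    # placeholder address of the Uptime Kuma v2 instance
      AUTOKUMA__KUMA__USERNAME: ${KUMA_USER}          # placeholder credentials
      AUTOKUMA__KUMA__PASSWORD: ${KUMA_PASSWORD}
    volumes:
      # So AutoKuma can read container labels (a socket proxy via DOCKER_HOST works too, as in the compose files below).
      - /var/run/docker.sock:/var/run/docker.sock:ro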
Feature        Status     Notes
Monitor
Docker Host
Notification   ⚠️*        Same issues as v1
Status Page    Untested
Maintenance    Untested
Tags                      There currently seem to be some problems with assigning tags/values to monitors in v2; this also happens when using the UI

BigBoot avatar Nov 14 '24 18:11 BigBoot

This works perfectly. Thank you for implementing it so quickly.

barcar avatar Nov 15 '24 20:11 barcar

One thing I noticed with my current config: when I did a fresh compose up -d on Uptime Kuma itself, AutoKuma created a second copy of the monitors I already had instead of detecting them. They were created both times by AutoKuma. Is this related to the tagging issue?

image

I've also noted a lot of this:


WARN [kuma_client::util] The server rejected the login: Too frequently, try again later.
WARN [kuma_client::util] Error while handling 'Info' event: The server rejected the login: Too frequently, try again later.
WARN [kuma_client::client] Timeout while waiting for Kuma to get ready...
WARN [autokuma::sync] Encountered error during sync: It looks like the server is expecting a username/password, but none was provided
WARN [kuma_client::util] The server rejected the login: Too frequently, try again later.
WARN [kuma_client::util] Error while handling 'Info' event: The server rejected the login: Too frequently, try again later.
WARN [kuma_client::client] Timeout while waiting for Kuma to get ready...
WARN [autokuma::sync] Encountered error during sync: It looks like the server is expecting a username/password, but none was provided
WARN [kuma_client::util] The server rejected the login: Too frequently, try again later.
WARN [kuma_client::util] Error while handling 'Info' event: The server rejected the login: Too frequently, try again later.
WARN [kuma_client::client] Timeout while waiting for Kuma to get ready...
WARN [autokuma::sync] Encountered error during sync: It looks like the server is expecting a username/password, but none was provided
WARN [kuma_client::util] The server rejected the login: Too frequently, try again later.
WARN [kuma_client::util] Error while handling 'Info' event: The server rejected the login: Too frequently, try again later.
WARN [kuma_client::client] Timeout while waiting for Kuma to get ready...
WARN [autokuma::sync] Encountered error during sync: It looks like the server is expecting a username/password, but none was provided
WARN [kuma_client::util] The server rejected the login: Too frequently, try again later.
WARN [kuma_client::util] Error while handling 'Info' event: The server rejected the login: Too frequently, try again later.
WARN [kuma_client::client] Timeout while waiting for Kuma to get ready...
WARN [autokuma::sync] Encountered error during sync: It looks like the server is expecting a username/password, but none was pro

Compose:

networks:
  web-proxy:
    name: ${PROXY_NETWORK}
    external: true
  app-bridge:
    name: ${APP_NETWORK}
    external: true
  socket-proxy:
    name: ${SOCKET_NETWORK}
    external: true
  gluetun-bridge:
    name: ${GLUETUN_NETWORK}
    external: true

services:
  uptime-kuma:
    image: louislam/uptime-kuma:beta-slim
    container_name: uptime-kuma
    restart: unless-stopped
    profiles: ["all","kuma"]
    networks:
      - ${PROXY_NETWORK}
      - ${APP_NETWORK}
      - ${SOCKET_NETWORK}
      - ${GLUETUN_NETWORK}
    depends_on:
      - uptime-kuma-db
    deploy:
      resources:
        limits:
          memory: 512M
    volumes:
      - ${APPDATA_DIR}/uptime-kuma/data:/app/data
    environment:
      PUID: ${PUID}
      PGID: ${PGID}
    labels:
      logging.promtail: true
      traefik.enable: true
      traefik.external.cname: true
      traefik.docker.network: ${PROXY_NETWORK}
      traefik.http.routers.uptime-kuma.entrypoints: https
      traefik.http.routers.uptime-kuma.rule: Host(`${SUBDOMAIN_UPTIME_KUMA}.${DOMAINNAME}`)
      traefik.http.routers.uptime-kuma.middlewares: chain-private@file
      #kuma.__app: '{ "name": "Uptime-Kuma", "type": "web-group", "url": "https://${SUBDOMAIN_UPTIME_KUMA}.${DOMAINNAME}", "internal_port": "3001" }'

  uptime-kuma-db:
    image: lscr.io/linuxserver/mariadb:latest
    container_name: uptime-kuma-db
    restart: always
    profiles: ["all","kuma"]
    networks:
      - ${APP_NETWORK}
    volumes:
      - ${APPDATA_DIR}/uptime-kuma/db:/config
    environment:
      TZ: ${TZ}
      PUID: ${PUID}
      PGID: ${PGID}
      MYSQL_ROOT_PASSWORD: ${UPTIME_KUMA_MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${UPTIME_KUMA_MYSQL_DB}
      MYSQL_USER: ${UPTIME_KUMA_MYSQL_USER}
      MYSQL_PASSWORD: ${UPTIME_KUMA_MYSQL_PASSWORD}
    labels:
      logging.promtail: true
      #kuma.__app: '{ "name": "Uptime-Kuma MySQL", "type": "mysql", "service": "Uptime-Kuma", "db_url": "mysql://${UPTIME_KUMA_MYSQL_USER}:${UPTIME_KUMA_MYSQL_PASSWORD}@uptime-kuma-db:3306" }'

  autokuma:
    image: ghcr.io/bigboot/autokuma:master
    container_name: autokuma
    restart: unless-stopped
    profiles: ["all","kuma"]
    networks:
      - ${SOCKET_NETWORK}
    depends_on:
      - uptime-kuma
    environment:
      AUTOKUMA__KUMA__URL: http://uptime-kuma:3001
      AUTOKUMA__KUMA__USERNAME: ${KUMA_USER}
      AUTOKUMA__KUMA__PASSWORD: ${KUMA_PASSWORD}
      AUTOKUMA__TAG_NAME: AutoKuma
      AUTOKUMA__DEFAULT_SETTINGS: |- 
        *.notification_id_list: { "1": true }
      AUTOKUMA__ON_DELETE: delete
      AUTOKUMA__SNIPPETS__APP: |-
        {# Assign the first snippet arg for readability #}
        {% set args = args[0] %}

        {# Generate IDs with slugify #}
        {% set id = args.name | slugify %}
        {% if args.service %}
          {% set service_id = args.service | slugify %}
        {% endif %}

        {# Define the top level services/app naming conventions #}
        {% if args.type == "web" %}
          {{ id }}-group.group.name: {{ args.name }}
        {% elif args.type == "web-group" %}
          {{ id }}-group.group.name: {{ args.name }}
          {{ id }}-svc-group.group.parent_name: {{ id }}-group
          {{ id }}-svc-group.group.name: {{ args.name }} App
        {% elif service_id is defined and args.type in ["redis", "mysql", "postgres", "web-support"] %}
          {{ id }}-svc-group.group.parent_name: {{ service_id }}-group
          {{ id }}-svc-group.group.name: {{ args.name }}{% if args.type == "web-support" %} App{% endif %}
        {% endif %}

        {# Web containers get http & https checks #}
        {% if args.type in ["web-group", "web", "web-support"] %}
          {% if args.type == "web" %}
            {% set parent = id ~ "-group" %}
          {% else %}
            {% set parent = id ~ "-svc-group" %}
          {% endif %}
          {{ id }}-https.http.parent_name: {{ parent }}
          {{ id }}-https.http.name: {{ args.name }} (Web)
          {{ id }}-https.http.url: {{ args.url }}
          {{ id }}-http.http.parent_name: {{ parent }}
          {{ id }}-http.http.name: {{ args.name }} (Internal)
          {% if args.network and args.network == "host" %}
            {{ id }}-http.http.url: http://10.0.20.15:{{ args.internal_port }}
          {% elif args.network and args.network == "vpn" %}
            {{ id }}-http.http.url: http://{{ container_name }}-vpn:{{ args.internal_port }}
          {% else %}
            {{ id }}-http.http.url: http://{{ container_name }}:{{ args.internal_port }}
          {% endif %}
          {# Check for authentication and set basic auth details #}
          {% if args.auth and args.auth == "basic" %}
            {{ id }}-http.http.authMethod: {{ args.auth }}
            {{ id }}-http.http.basic_auth_user: {{ args.auth_user }}
            {{ id }}-http.http.basic_auth_pass: {{ args.auth_pass }}
            {{ id }}-https.http.authMethod: {{ args.auth }}
            {{ id }}-https.http.basic_auth_user: {{ args.auth_user }}
            {{ id }}-https.http.basic_auth_pass: {{ args.auth_pass }}
          {% endif %}
        {% endif %}

        {# Database containers get db specific checks #}
        {% if args.type in ["redis", "mysql", "postgres"] %}
          {{ id }}-db.{{ args.type }}.name: {{ args.name }} (DB)
          {{ id }}-db.{{ args.type }}.parent_name: {{ id }}-svc-group
          {{ id }}-db.{{ args.type }}.database_connection_string: {{ args.db_url }}
        {% endif %}

        {# All containers get a container check #}
        {% if args.type == "web" %}
          {% set parent_name = id ~ "-group" %}
          {{ id }}-container.docker.parent_name: {{ parent_name }}
        {% elif args.type not in ["solo", "support"] %}
          {% set parent_name = id ~ "-svc-group" %}
          {{ id }}-container.docker.parent_name: {{ parent_name }}
        {% endif %}
        {% if args.type == "support" %}
          {{ id }}-container.docker.parent_name: {{ service_id }}-group
        {% endif %}
        {% if args.type in ["solo", "support"] %}
          {{ id }}-container.docker.name: {{ args.name }}
        {% else %}
          {{ id }}-container.docker.name: {{ args.name }} (Container)
        {% endif %}
        {{ id }}-container.docker.docker_container: {{ container_name }}
        {{ id }}-container.docker.docker_host: 1
      DOCKER_HOST: http://socket-proxy:2375
    labels:
      logging.promtail: true
      #kuma.__app: '{ "name": "AutoKuma", "type": "support", "service": "Uptime-Kuma" }'

undaunt avatar Nov 15 '24 21:11 undaunt

  1. I haven't tested an upgrade yet; maybe Uptime Kuma recreates its database tables during the 2.0 migration? That would change the IDs and therefore cause AutoKuma to lose its associations.
  2. Yep, it seems like the rate limiting got hardened in 2.0. I may try going back to a long-lived connection instead of reconnecting for every sync; I initially switched to this approach because the SocketIO library I use wasn't too reliable at reconnecting, but that seems to have improved in the meantime. As a short-term fix, increasing the sync interval should work, something like AUTOKUMA__SYNC_INTERVAL="30.0" (see the sketch below).
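
A minimal compose sketch of that short-term fix, assuming the autokuma service from the compose file above:

services:
  autokuma:
    environment:
      # Sync interval in seconds; raising it avoids hitting Uptime Kuma v2's stricter login rate limit.
      AUTOKUMA__SYNC_INTERVAL: "30.0"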

BigBoot avatar Nov 16 '24 15:11 BigBoot

Thanks, I just bumped up the sync interval.

Re: the first point, this is a net-new test setup with only those containers; I just brought the stacks up, down, then up again. There isn't an easy migration path from v1 to v2 (if going from SQLite to MariaDB) that keeps current historical data. They're basically stating that, for bandwidth reasons, they won't officially support it, but others have posted instructions on how to create a MySQL database and populate it with converted SQLite data via an export.

undaunt avatar Nov 18 '24 17:11 undaunt

Hi! The latest update seems to have broken AutoKuma for Uptime Kuma v2. I recreated my Uptime Kuma server but AK can't create any monitors. It throws the following logs repeatedly:

autokuma-1  | WARN [kuma_client::util] [backups-nasty.toml] No monitor named backups could be found
autokuma-1  | WARN [kuma_client::util] [backups-nasty-other_devices.toml] No monitor named backups-nasty could be found
autokuma-1  | WARN [kuma_client::util] [backups-vivo.toml] No monitor named backups could be found
autokuma-1  | WARN [kuma_client::util] [backups-nasty-self.toml] No monitor named backups-nasty could be found
autokuma-1  | WARN [kuma_client::util] [backups-asus_viki.toml] No monitor named backups could be found
autokuma-1  | WARN [kuma_client::util] [backups-nasty-db.toml] No monitor named backups-nasty could be found
autokuma-1  | WARN [autokuma::entity] Cannot create monitor uptime-kuma because referenced monitor with services is not found
autokuma-1  | WARN [kuma_client::util] Error while parsing uptime-kuma-auto-kuma: data did not match any variant of untagged enum EntityWrapper!
autokuma-1  | WARN [autokuma::sync] Encountered error during sync: Error while trying to parse labels: data did not match any variant of untagged enum EntityWrapper

The referenced groups are supposed to be created by AK too. For the enum problem I haven't had time to actually debug the code, but the latest UK update did have a PR merged for monitor tags: https://github.com/louislam/uptime-kuma/pull/5298. I didn't change any of the already-working monitor definitions, and I am running the images louislam/uptime-kuma:beta (sha256:752118f891ea991180124e3fc7edbc1865a58cb03e15e612ecbc68065b1d4b9f) and ghcr.io/bigboot/autokuma:uptime-kuma-v2-master (sha256:74bccf145554cce2acf63676d4b98fafdf1e710e60150733fcac8b5b1c364301).

Thanks for the help and all the good work you do!

bnctth avatar Dec 22 '24 12:12 bnctth

Hi @bnctth, I don't think there's any breaking change; this looks more like a problem with your labels. Let's try to break it down.

autokuma-1  | WARN [kuma_client::util] [backups-nasty.toml] No monitor named backups could be found
autokuma-1  | WARN [kuma_client::util] [backups-nasty-other_devices.toml] No monitor named backups-nasty could be found
autokuma-1  | WARN [kuma_client::util] [backups-vivo.toml] No monitor named backups could be found
autokuma-1  | WARN [kuma_client::util] [backups-nasty-self.toml] No monitor named backups-nasty could be found
autokuma-1  | WARN [kuma_client::util] [backups-asus_viki.toml] No monitor named backups could be found
autokuma-1  | WARN [kuma_client::util] [backups-nasty-db.toml] No monitor named backups-nasty could be found

These are "expected" when creating nested setups, autokuma knows about these dependencies but they have not been created yet, as a result they are skipped till later.

autokuma-1  | WARN [autokuma::entity] Cannot create monitor uptime-kuma because referenced monitor with services is not found

This one says that you have a monitor referencing a parent monitor with the AutoKuma id "services", but no such monitor definition seems to exist.

autokuma-1  | WARN [kuma_client::util] Error while parsing uptime-kuma-auto-kuma: data did not match any variant of untagged enum EntityWrapper!
autokuma-1  | WARN [autokuma::sync] Encountered error during sync: Error while trying to parse labels: data did not match any variant of untagged enum EntityWrapper

This error unfortunately isn't as clear, but it basically means you have a definition (uptime-kuma-auto-kuma) with a missing or invalid "type".
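
For context, a rough sketch of the label shape AutoKuma parses (the service and values here are placeholders): the second segment of the key is the entity type, e.g. group, http or docker, and a missing or unknown type is what produces this EntityWrapper error.

services:
  example-app:    # placeholder service
    labels:
      # Parses: "example" is the AutoKuma id, "http" is the entity type.
      - "kuma.example.http.name=Example App"
      - "kuma.example.http.url=http://example-app:8080"
      # Does not parse: the type segment is missing, so no EntityWrapper variant matches.
      # - "kuma.example.name=Example App"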

BigBoot avatar Dec 22 '24 17:12 BigBoot

@BigBoot thanks for your reply! Turns out I had a pretty trivial problem mixed with some red-herring error messages: I had a typo in a snippet (notificationIDList instead of `notificationIdList`, lowercase d in Id), so I had the correct definitions for the parents, but because of the error it never got to them.

bnctth avatar Dec 24 '24 22:12 bnctth

Just noticed I'm getting this error over and over in my logs, not sure how long it's been going for. Any ideas?

Invalid config: missing field `files`

mlamoure avatar Jan 23 '25 00:01 mlamoure

Sorry, I added the files.follow_symlinks option and forgot to set a default; it should be working now.

BigBoot avatar Jan 23 '25 06:01 BigBoot

Sorry, I added the files.follow_symlinks option and forgot to set a default; it should be working now.

That's why it's a beta. Nice work on this app. My Uptime Kuma is nearly 100% config driven. Two comments / questions while I have you:

1- I have multiple remote Docker hosts. I've noticed that if a host is down, AutoKuma gets "stuck" waiting for that node to come up rather than ignoring it and moving on. If you add labels to a Docker host that is up while another is down, the new labels won't get applied until all nodes are up again.

2- A request of mine would be more verbose messaging when a mistake is made: which Docker host, which stack/service it relates to, etc.

mlamoure avatar Jan 23 '25 12:01 mlamoure

I'm most likely doing something wrong on my end with Uptime Kuma v2. I see this in the AutoKuma log:

uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    |                             _           _  __                         
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    |               /\           | |         | |/ /                         
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    |              /  \    _   _ | |_   ___  | ' /  _   _  _ __ ___    __ _ 
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    |             / /\ \  | | | || __| / _ \ |  <  | | | || '_ ` _ \  / _` |
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    |            / ____ \ | |_| || |_ | (_) || . \ | |_| || | | | | || (_| |
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    |           /_/    \_\ \__,_| \__| \___/ |_|\_\ \__,_||_| |_| |_| \__,_|  
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    |                                                        v0.8.0-3236fb93
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | ERROR [kuma_client::util] Error during connect
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::client] Timeout while waiting for Kuma to get ready...
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [autokuma::sync] Encountered error during sync: Timeout while trying to connect to Uptime Kuma server
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Error while handling 'Info' event: Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Error while handling 'Info' event: Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [autokuma::entity] Cannot create monitor mealie because referenced monitor with name group-mealie is not found
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [autokuma::entity] Cannot create monitor mealie-ext because referenced monitor with name group-mealie is not found
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | INFO [autokuma::sync] Creating new Monitor: group-mealie
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Error while handling 'Info' event: Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Error while handling 'Info' event: Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [autokuma::sync] Encountered error during sync: Timeout while trying to call 'editMonitor'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Error while handling 'Info' event: Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Error while handling 'Info' event: Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [autokuma::entity] Cannot create monitor mealie because referenced monitor with name group-mealie is not found
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [autokuma::entity] Cannot create monitor mealie-ext because referenced monitor with name group-mealie is not found
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | INFO [autokuma::sync] Creating new Monitor: group-mealie
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Error while handling 'Info' event: Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [autokuma::sync] Encountered error during sync: Timeout while trying to call 'add'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Error while handling 'Info' event: Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Error while handling 'Info' event: Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Error while handling 'Info' event: Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [autokuma::entity] Cannot create monitor mealie because referenced monitor with name group-mealie is not found
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [autokuma::entity] Cannot create monitor mealie-ext because referenced monitor with name group-mealie is not found
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | INFO [autokuma::sync] Creating new Monitor: group-mealie
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Timeout while trying to call 'login'.
uptime-kuma_autokuma.1.tghrnk44yzdbkmxt0072jl0te@swarm-mgr01    | WARN [kuma_client::util] Error while handling 'Info' event: Timeout while trying to call 'login'.

The Uptime-Kuma compose is relatively simple:

---
version: '3.9'

services:
  uptime-kuma:
    image: louislam/uptime-kuma:beta
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 2s
        max_attempts: 5
      labels:
        # Enable Traefik for this service
        - "traefik.enable=true"
        ## HTTP Routers - internal
        - "traefik.http.routers.rtr-uptime-kuma.rule=Host(`monitor.${PRIMARY_DOMAIN}`)"
        - "traefik.http.routers.rtr-uptime-kuma.entrypoints=websecure"
        - "traefik.http.routers.rtr-uptime-kuma.service=svc-uptime-kuma"
        - "traefik.http.routers.rtr-uptime-kuma.priority=10"
        ## Middlewares
        - "traefik.http.routers.rtr-uptime-kuma.middlewares=global-forwardauth-authentik@file"
        ## HTTP Services 
        - "traefik.http.services.svc-uptime-kuma.loadbalancer.server.port=3001"
        ## UPTIME KUMA Labels
        # Kuma groups for organization
        - "KUMA.group-infrastructure.group.name=Infrastructure"
        - "KUMA.group-applications.group.name=Applications"

        - "KUMA.group-uptime-kuma.group.parent_name=group-infrastructure"
        - "KUMA.group-uptime-kuma.group.name=Uptime-Kuma"
        - "KUMA.uptime-kuma.http.parent_name=group-uptime-kuma"
        - "KUMA.uptime-kuma.http.name=Uptime Kuma Internal"
        - "KUMA.uptime-kuma.http.url=http://uptime-kuma:3001"
        - "KUMA.uptime-kuma-ext.http.parent_name=group-uptime-kuma"
        - "KUMA.uptime-kuma-ext.http.name=Uptime Kuma External"
        - "KUMA.uptime-kuma-ext.http.url=https://monitor.${PRIMARY_DOMAIN}"
    volumes:
      - data:/app/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - kuma-net
      - traefik-external

  autokuma:
    # image: ghcr.io/bigboot/autokuma:master
    image: ghcr.io/bigboot/autokuma:uptime-kuma-v2-master
    deploy:
      replicas: 1
      # mode: global
      restart_policy:
        condition: on-failure
        delay: 2s
        max_attempts: 5
      resources:
        limits:
          cpus: "0.5"
          memory: "256M"
        reservations:
          cpus: "0.25"
          memory: "128M"
      labels:
        - "traefik.enable=false"
    depends_on:
      - uptime-kuma        
    env_file: 
      - .env
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - autokuma-data:/data
    networks:
      - kuma-net

networks:
  traefik-external:
    external: true
  kuma-net:
    driver: overlay
    attachable: true  

and so is the Mealie one:

---
version: '3.9'

services:
  mealie:
    image: ghcr.io/mealie-recipes/mealie:v2.7.1 
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 2s
        max_attempts: 5
      labels:
        # Enable Traefik for this service
        - "traefik.enable=true"
        ## HTTP Routers - internal
        - "traefik.http.routers.rtr-mealie.rule=Host(`recipes.${PRIMARY_DOMAIN}`)"
        - "traefik.http.routers.rtr-mealie.entrypoints=websecure"
        - "traefik.http.routers.rtr-mealie.service=svc-mealie"
        - "traefik.http.routers.rtr-mealie.priority=10"
        ## Middlewares
        - "traefik.http.routers.rtr-mealie.middlewares=global-forwardauth-authentik@file"
        ## HTTP Services 
        - "traefik.http.services.svc-mealie.loadbalancer.server.port=9000"
        ## UPTIME KUMA Labels
        # - "KUMA.group-applications.group.name=Applications"
        - "KUMA.group-mealie.group.parent_name=group-applications"
        - "KUMA.group-mealie.group.name=Mealie"
        - "KUMA.mealie.http.parent_name=group-mealie"
        - "KUMA.mealie.http.name=Mealie Internal"
        - "KUMA.mealie.http.url=http://mealie:9000"
        - "KUMA.mealie-ext.http.parent_name=group-mealie"
        - "KUMA.mealie-ext.http.name=Mealie External"
        - "KUMA.mealie-ext.http.url=https://recipes.${PRIMARY_DOMAIN}"
    volumes:
      - mealie-data:/app/data/
    env_file: 
      - .env
    depends_on:
      - postgres
    networks:
      - mealie-network
      - traefik-external
      - kuma-net

It creates the group (I see it in the UI), but never creates the app. In the UI, it actually keeps creating the same group over and over.

Image

chupacabra71 avatar Mar 15 '25 17:03 chupacabra71

Are there plans to support the API keys that can be generated from the Uptime Kuma 2.0 UI, rather than username/password?

rv10guy avatar Sep 20 '25 20:09 rv10guy

Are there plans to support the API keys that can be generated from the Uptime Kuma 2.0 UI, rather than username/password?

AFAIK there is still no support for API keys on the SocketIO API; the API keys only work for pushing metrics, not for managing monitors.

BigBoot avatar Oct 25 '25 08:10 BigBoot

Hey, I just went up to 2.x; is it normal for it to repeat logins this much? (These are Uptime Kuma logs.)

2025-11-04T13:58:59+08:00 [AUTH] INFO: WebSocket origin check is bypassed
2025-11-04T13:58:59+08:00 [AUTH] INFO: Disabled Auth: auto login to admin
2025-11-04T13:58:59+08:00 [AUTH] INFO: Login by token. IP=172.xx.xx.xx
2025-11-04T13:58:59+08:00 [AUTH] INFO: Username from JWT: phyzical
2025-11-04T13:58:59+08:00 [AUTH] INFO: Successfully logged in user phyzical. IP=172.xx.xx.xx
2025-11-04T13:58:59+08:00 [AUTH] INFO: Login by token. IP=172.xx.xx.xx
2025-11-04T13:58:59+08:00 [AUTH] INFO: Username from JWT: phyzical
2025-11-04T13:58:59+08:00 [AUTH] INFO: Successfully logged in user phyzical. IP=172.xx.xx.xx
2025-11-04T13:58:59+08:00 [AUTH] INFO: Login by token. IP=172.xx.xx.xx
2025-11-04T13:58:59+08:00 [AUTH] INFO: Username from JWT: phyzical
2025-11-04T13:58:59+08:00 [AUTH] INFO: Successfully logged in user phyzical. IP=172.xx.xx.xx
2025-11-04T13:59:04+08:00 [SOCKET] INFO: New websocket connection, IP = 172.xx.xx.xx
2025-11-04T13:59:04+08:00 [AUTH] INFO: WebSocket origin check is bypassed
2025-11-04T13:59:04+08:00 [AUTH] INFO: Login by token. IP=172.xx.xx.xx
2025-11-04T13:59:04+08:00 [AUTH] INFO: Username from JWT: phyzical
2025-11-04T13:59:04+08:00 [AUTH] INFO: Disabled Auth: auto login to admin
2025-11-04T13:59:04+08:00 [AUTH] INFO: Successfully logged in user phyzical. IP=172.xx.xx.xx
2025-11-04T13:59:04+08:00 [AUTH] INFO: Login by token. IP=172.xx.xx.xx
2025-11-04T13:59:04+08:00 [AUTH] INFO: Username from JWT: phyzical
2025-11-04T13:59:05+08:00 [AUTH] INFO: Successfully logged in user phyzical. IP=172.xx.xx.xx
2025-11-04T13:59:05+08:00 [AUTH] INFO: Login by token. IP=172.xx.xx.xx
2025-11-04T13:59:05+08:00 [AUTH] INFO: Username from JWT: phyzical
2025-11-04T13:59:05+08:00 [AUTH] INFO: Successfully logged in user phyzical. IP=172.xx.xx.xx
2025-11-04T13:59:10+08:00 [SOCKET] INFO: New websocket connection, IP = 172.xx.xx.xx
2025-11-04T13:59:10+08:00 [AUTH] INFO: WebSocket origin check is bypassed
2025-11-04T13:59:10+08:00 [AUTH] INFO: Disabled Auth: auto login to admin
2025-11-04T13:59:10+08:00 [AUTH] INFO: Login by token. IP=172.xx.xx.xx
2025-11-04T13:59:10+08:00 [AUTH] INFO: Username from JWT: phyzical
2025-11-04T13:59:10+08:00 [AUTH] INFO: Successfully logged in user phyzical. IP=172.xx.xx.xx
2025-11-04T13:59:10+08:00 [AUTH] INFO: Login by token. IP=172.xx.xx.xx
2025-11-04T13:59:10+08:00 [AUTH] INFO: Username from JWT: phyzical
2025-11-04T13:59:10+08:00 [AUTH] INFO: Successfully logged in user phyzical. IP=172.xx.xx.xx
2025-11-04T13:59:10+08:00 [AUTH] INFO: Login by token. IP=172.xx.xx.xx
2025-11-04T13:59:10+08:00 [AUTH] INFO: Username from JWT: phyzical
2025-11-04T13:59:10+08:00 [AUTH] INFO: Successfully logged in user phyzical. IP=172.xx.xx.xx
2025-11-04T13:59:15+08:00 [SOCKET] INFO: New websocket connection, IP = 172.xx.xx.xx
2025-11-04T13:59:15+08:00 [AUTH] INFO: WebSocket origin check is bypassed
2025-11-04T13:59:16+08:00 [AUTH] INFO: Disabled Auth: auto login to admin
2025-11-04T13:59:16+08:00 [AUTH] INFO: Login by token. IP=172.xx.xx.xx
2025-11-04T13:59:16+08:00 [AUTH] INFO: Username from JWT: phyzical
2025-11-04T13:59:16+08:00 [AUTH] INFO: Successfully logged in user phyzical. IP=172.xx.xx.xx
2025-11-04T13:59:16+08:00 [AUTH] INFO: Login by token. IP=172.xx.xx.xx
2025-11-04T13:59:16+08:00 [AUTH] INFO: Username from JWT: phyzical
2025-11-04T13:59:16+08:00 [AUTH] INFO: Successfully logged in user phyzical. IP=172.xx.xx.xx
2025-11-04T13:59:16+08:00 [AUTH] INFO: Login by token. IP=172.xx.xx.xx
2025-11-04T13:59:16+08:00 [AUTH] INFO: Username from JWT: phyzical
2025-11-04T13:59:16+08:00 [AUTH] INFO: Successfully logged in user phyzical. IP=172.xx.xx.xx

This is then in turn spamming my MariaDB with (I think) auth events o.0. The dips represent when I stop AutoKuma.

Image

phyzical avatar Nov 04 '25 06:11 phyzical

This is expected; you can change the sync interval with the AUTOKUMA__SYNC_INTERVAL / sync_interval config option (sync interval in seconds).

BigBoot avatar Nov 04 '25 06:11 BigBoot

👍 I actually just stumbled across that new var above and can confirm it helps with the load issues, so I will just throttle going forward.

It also just duplicated all monitors, but I'm not sure what caused this. If it happens again I will report back; after blatting the DB and starting again I only have a single set.

phyzical avatar Nov 04 '25 06:11 phyzical

~Hmm, is there some sort of state file or something now that I'm not mapping? It seems that when I do something that forces the container to be recreated, it also recreates all monitors, but a simple restart of the existing AutoKuma container doesn't cause this.~

edit: ah yes, oops, I see there's a new volume /data (or maybe it already existed and I never noticed)

edit: Yep, that was it, all good 👍
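
For anyone else hitting this, a minimal sketch of the mapping (the volume name is arbitrary; chupacabra71's compose above does the same thing):

services:
  autokuma:
    volumes:
      # Persist AutoKuma's state so recreating the container doesn't duplicate monitors.
      - autokuma-data:/data

volumes:
  autokuma-data: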

phyzical avatar Nov 04 '25 06:11 phyzical

I get the following error after upgrading uptime-kuma to v2 (on latest and uptime-kuma-v2-master):

thread 'tokio-runtime-worker' panicked at /usr/src/autokuma/kuma-client/src/client.rs:249:74:

called `Result::unwrap()` on an `Err` value: Error("unexpected trailing characters; the end of input was expected", line: 0, column: 0)

WARN [kuma_client::client] Timeout while waiting for Kuma to get ready...

WARN [autokuma::sync] Encountered error during sync: It looks like the server is expecting a username/password, but none was provided

Authentication is disabled in uptime-kuma.

radokristof avatar Nov 04 '25 22:11 radokristof

With the latest changes on master, I get the same output:

uptimekuma_autokuma  | thread 'tokio-runtime-worker' panicked at /usr/src/autokuma/kuma-client/src/client.rs:249:74:
uptimekuma_autokuma  | called `Result::unwrap()` on an `Err` value: Error("unexpected trailing characters; the end of input was expected", line: 0, column: 0)
uptimekuma_autokuma  | stack backtrace:
uptimekuma_autokuma  |    0: __rustc::rust_begin_unwind
uptimekuma_autokuma  |    1: core::panicking::panic_fmt
uptimekuma_autokuma  |    2: core::result::unwrap_failed
uptimekuma_autokuma  |    3: kuma_client::client::Worker::on_event::{{closure}}
uptimekuma_autokuma  |    4: kuma_client::client::Worker::connect::{{closure}}::{{closure}}::{{closure}}::{{closure}}
uptimekuma_autokuma  |    5: tokio::runtime::task::core::Core<T,S>::poll
uptimekuma_autokuma  |    6: tokio::runtime::task::harness::Harness<T,S>::poll
uptimekuma_autokuma  |    7: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
uptimekuma_autokuma  |    8: tokio::runtime::context::scoped::Scoped<T>::set
uptimekuma_autokuma  |    9: tokio::runtime::context::runtime::enter_runtime
uptimekuma_autokuma  |   10: tokio::runtime::scheduler::multi_thread::worker::run
uptimekuma_autokuma  |   11: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
uptimekuma_autokuma  |   12: tokio::runtime::task::core::Core<T,S>::poll
uptimekuma_autokuma  |   13: tokio::runtime::task::harness::Harness<T,S>::poll
uptimekuma_autokuma  |   14: tokio::runtime::blocking::pool::Inner::run
uptimekuma_autokuma  | note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

radokristof avatar Nov 05 '25 06:11 radokristof

@radokristof Looks like AutoKuma is unable to parse your list of maintenances. I'm not using that feature extensively, but it seems to work here with a bit of testing.

FWIW, please use the master tag with v2 going forward, since it now contains v2 support by default.

Can you please run with RUST_LOG=kuma_client::client=trace, look for a line starting with Client::on_any(Custom("maintenanceList"), and post it here? (Make sure to redact any sensitive information.) One way to set this is sketched below.
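
For the compose setups in this thread, that would be along these lines (a sketch, added to the existing autokuma service):

services:
  autokuma:
    environment:
      # Trace-level logging for the kuma_client::client module only.
      RUST_LOG: "kuma_client::client=trace"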

BigBoot avatar Nov 06 '25 06:11 BigBoot

@BigBoot thank you for your help!

Relevant log lines:

2025-11-06 06:14:45.740 [kuma_client::client] TRACE: Client::on_any(Custom("maintenanceList"), Text([Object {"1": Object {"active": Bool(true), "cron": String(""), "dateRange": Array [String("2025-09-09 16:30:00"), String("2025-09-09 17:00:00")], "daysOfMonth": Array [], "description": String(""), "duration": Null, "durationMinutes": Number(0), "id": Number(1), "intervalDay": Number(1), "status": String("ended"), "strategy": String("single"), "timeRange": Array [Object {"hours": Number(0), "minutes": Number(0)}, Object {"hours": Number(0), "minutes": Number(0)}], "timeslotList": Array [Object {"endDate": String("2025-09-09 17:00:00"), "startDate": String("2025-09-09 16:30:00")}], "timezone": String("Europe/Budapest"), "timezoneOffset": String("+01:00"), "timezoneOption": String("Europe/Budapest"), "title": String("Migrating Marinero Survey"), "weekdays": Array []}, "2": Object {"active": Bool(true), "cron": String(""), "dateRange": Array [String("2025-10-05 23:35:00"), String("2025-10-06 00:10:00")], "daysOfMonth": Array [], "description": String(""), "duration": Null, "durationMinutes": Number(0), "id": Number(2), "intervalDay": Number(1), "status": String("ended"), "strategy": String("single"), "timeRange": Array [Object {"hours": Number(0), "minutes": Number(0)}, Object {"hours": Number(0), "minutes": Number(0)}], "timeslotList": Array [Object {"endDate": String("2025-10-06 00:10:00"), "startDate": String("2025-10-05 23:35:00")}], "timezone": String("UTC"), "timezoneOffset": String("+00:00"), "timezoneOption": String("UTC"), "title": String("Marina Manager Update"), "weekdays": Array []}, "3": Object {"active": Bool(true), "cron": String(""), "dateRange": Array [String("2025-10-08 16:40:00"), String("2025-10-08 17:00:00")], "daysOfMonth": Array [], "description": String(""), "duration": Null, "durationMinutes": Number(0), "id": Number(3), "intervalDay": Number(1), "status": String("ended"), "strategy": String("single"), "timeRange": Array [Object {"hours": Number(0), "minutes": Number(0)}, Object {"hours": Number(0), "minutes": Number(0)}], "timeslotList": Array [Object {"endDate": String("2025-10-08 17:00:00"), "startDate": String("2025-10-08 16:40:00")}], "timezone": String("UTC"), "timezoneOffset": String("+00:00"), "timezoneOption": String("UTC"), "title": String("Metabase Maintenance"), "weekdays": Array []}}]))

This is on current master

Edit: just for the sake of testing, after deleting the maintenance list it started working, so it is definitely something with the maintenance list...

radokristof avatar Nov 06 '25 06:11 radokristof

The issue should be fixed on the current master.

BigBoot avatar Nov 06 '25 17:11 BigBoot

AutoKuma v2 now supports Uptime Kuma v2 by default; as such, I'm closing this issue.

BigBoot avatar Nov 12 '25 17:11 BigBoot