Log Watcher: Server Connect Error: Error: websocket error (https://XXX.XX.XX.XX:3013)
Summary
I have set up a custom base_app_url (e.g. https://cronicle.mydomain.com), but when I run jobs from the Cronicle UI I still get the following error: Log Watcher: Server Connect Error: Error: websocket error (https://XXX.XX.XX.XX:3013). The error shows an IP address instead of the domain configured in my base_app_url config variable.
Steps to reproduce the problem
- Update base_app_url with a custom domain
- Start Cronicle using /opt/cronicle/bin/control.sh
- Run a job and see the Log Watcher error out
Your Setup
Server Environment:
- Domain: cronicle.mydomain.com
- Server IP: XXX.XXX.X.XXX
- Firewall: Configured to allow HTTP (port 80) and HTTPS (port 443).
Nginx Configuration:
- Proxying: Nginx proxies traffic to Cronicle on port 3012 with WebSocket support.
- HTTPS: Let’s Encrypt SSL certificates are used for cronicle.mydomain.com, redirecting all HTTP traffic to HTTPS.
Cronicle Configuration:
- base_app_url: Set to https://cronicle.mydomain.com, ensuring URLs in the app and notifications are correct.
- WebSocket and Direct Connect: Configured to use WebSocket transport and indirect connections (web_direct_connect: false) for compatibility with the Nginx reverse proxy.
- Storage: Using default filesystem storage for Cronicle data, with logs and data directories under /opt/cronicle.
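For reference, a minimal sketch of the relevant top-level entries in config.json under this setup (the hostname is a placeholder; only the keys mentioned above are shown):

```json
{
  "base_app_url": "https://cronicle.mydomain.com",
  "web_direct_connect": false
}
```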
Operating system and version?
Ubuntu 24.04 LTS
Node.js version?
v22.4.1
Cronicle software version?
Version 0.9.61
Are you using a multi-server setup, or just a single server?
Single server
Are you using the filesystem as back-end storage, or S3/Couchbase?
filesystem
Can you reproduce the crash consistently?
Yes
Log Excerpts
Log Watcher: Server Connect Error: Error: websocket error (https://XXX.XX.XX.XX:3013)
So sorry about this. Live log watching requires the user's browser to make a direct WebSocket connection to the worker server that is running the job (not the master server).
This doesn't work well for some people with complex network topologies, i.e. situations where the worker servers aren't directly accessible by your users' client machines.
Check out Mike's Cronicle fork over at: https://github.com/cronicle-edge/cronicle-edge
His implementation uses the master server as a proxy for log watching, getting around this problem entirely.
This is also solved in Cronicle v2, coming out in 2025.
@uk94 exactly the same configuration and the same error here. How did you solve it?
I see.
So we don't have the ability to specify a port for that worker to run on? We could then have the HTTP server serve it under something like https://worker.cronicle.mydomain.com.
@utiq the solution seems to be using Mike's fork for the time being.
@HolgerNetgrade I fixed it, and it was actually an easy fix; it was not necessary to use any other fork. I just added "custom_live_log_socket_url": "https://mydomainname.com" (note: without the port) to the config file and opened port 3013 in the AWS security group.
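For anyone else hitting this, here is a sketch of the config.json fragment based on the fix above (the domain is a placeholder; note that custom_live_log_socket_url omits the port, as described):

```json
{
  "base_app_url": "https://cronicle.mydomain.com",
  "custom_live_log_socket_url": "https://cronicle.mydomain.com",
  "web_direct_connect": false
}
```

Port 3013 (the port shown in the Log Watcher error) still needs to be reachable from the browser, so open it in your security group or firewall as noted in the comments.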
Can confirm that the comment from @utiq is working. I got the same error as described in the original issue.
I have configured base_app_url in the config to http://cron.example.com and have this domain set up in Nginx Proxy Manager. With this setup I could access the web UI, but was unable to retrieve logs for a running job.
By adding custom_live_log_socket_url: https://cron.example.com to the config, I can now see logs for a running job.
"custom_live_log_socket_url": "https://mydomain.com" without the port like @utiq said is working for me too. Needed to open additional the port at my hardware firewall in adition to plesk firewall