Glance timing out - monitor and weather
Just installed Glance and I love it - so simple, fast, and gorgeous - thanks in advance!
I've installed it via Docker on an RPi5, which is also running a few other containers (mostly HA). I've reverse proxied my services with Caddy so they're available locally under a local TLD.
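The Caddy side is nothing fancy; roughly this (the hostname and upstream port are placeholders for my actual values):
# Caddyfile (hostname and upstream are placeholders)
glance.home.lan {
    tls internal
    reverse_proxy localhost:8080
}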
On first install the weather widget times out with the following error:
failed to retrieve any content: Get "https://api.open-meteo.com/v1/forecast?current=temperature_2m%2Capparent_temperature%2Cweather_code&daily=sunrise%2Csunset&forecast_days=1&hourly=temperature_2m%2Cprecipitation_probability&latitude=-43.533330&longitude=172.633330&temperature_unit=celsius&timeformat=unixtime&timezone=Pacific%2FAuckland": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
And after adding a monitor widget for a few services, those time out too. I saw your comment in another thread about the monitor widget suggesting to compare ping times inside and outside of the container: I get 8ms outside of the container and 4-8s inside it. Is there a way to adjust the timeout in Glance (if that's where the error is coming from), or do you have some tips and guidance on where to go looking for this problem?
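For reference, the comparison was roughly this (a sketch; alpine is just a convenient image whose busybox ping works out of the box):
# on the host (outside the container)
ping -c 5 api.open-meteo.com
# inside a throwaway container
docker run --rm alpine ping -c 5 api.open-meteo.com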
Hey, thanks for reporting this! In all honesty, I still have no idea why some people see vastly degraded network performance from within their Docker containers. Glance's image is as basic as it gets, so I'd like to think a misconfiguration on its end isn't the cause.
As for the timeout, there's currently no way of changing it, it's something I want to add but I'm not sure what the best way of implementing it is yet.
I've added a new diagnose command in v0.7.0 that will hopefully help with pinpointing a cause for network related issues, though there are no official builds out yet. It might be a lot to ask, and it's fine if you don't want to, but could you possibly build v0.7.0 locally on the server where you're experiencing this, run the diagnose command, and provide the output? Here are the steps; all you need is git and docker installed:
# clone the repo's release/v0.7.0 branch
git clone -b release/v0.7.0 https://github.com/glanceapp/glance.git
cd glance
# build Glance
docker build -t local/glance -f Dockerfile .
# run the diagnose command and then remove the container, please provide the output of this command
docker run --rm local/glance diagnose
This should give us some clues about whether it's bandwidth related, DNS, or limited to specific connections.
Of course, happy to help! Here's the output:
Glance version: dev
Go version: go1.23.1
Platform: linux / arm64 / 4 CPUs
In Docker container: yes
Checking network connectivity, this may take up to 10 seconds...
✓ Can resolve cloudflare.com through Cloudflare DoH | 261 bytes, {"Status":0,"TC":false,"RD":true,"RA":true,"AD":tr... | 34ms
✓ Can resolve cloudflare.com through Google DoH | 260 bytes, {"Status":0,"TC":false,"RD":true,"RA":true,"AD":tr... | 219ms
✓ Can resolve github.com | 4.237.22.38 | 5053ms
✓ Can resolve reddit.com | 151.101.65.140, 151.101.1.140, 151.101.193.140, 151.101.129.140, 2a04:4e42:200::396, 2a04:4e42:600::396, 2a04:4e42:400::396, 2a04:4e42::396 | 5052ms
✓ Can resolve twitch.tv | 151.101.2.167, 151.101.194.167, 151.101.130.167, 151.101.66.167 | 5053ms
✓ Can fetch data from YouTube RSS feed | 34770 bytes, <?xml version="1.0" encoding="UTF-8"?><feed xmlns:... | 6119ms
✓ Can fetch data from Twitch.tv GQL | 0 bytes | 5227ms
✓ Can fetch data from GitHub API | 2262 bytes, {"current_user_url":"https://api.github.com/user",... | 5405ms
✓ Can fetch data from Open-Meteo API | 3328 bytes, {"results":[{"id":2643743,"name":"London","latitud... | 6086ms
✓ Can fetch data from Reddit API | 134 bytes, {"kind": "Listing", "data": {"modhash": "", "dist"... | 5283ms
✓ Can fetch data from Yahoo finance API | 36764 bytes, {"chart":{"result":[{"meta":{"currency":"USD","sym... | 5285ms
✓ Can fetch data from Hacker News Firebase API | 4501 bytes, [42531217,42530332,42531695,42532157,42530991,4253... | 5955ms
✓ Can fetch data from Docker Hub API | 4183 bytes, {"creator":7,"id":2343,"images":[{"architecture":"... | 5729ms
I've also just checked how long wget takes against the direct IP and port versus the local TLD I have set up through Pi-hole and Caddy - instantaneous for the IP, 4s for the TLD. This might be where the monitor issue is coming from, but presumably not for the weather API? The API call does return successfully in the container, taking 5.53 seconds.
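For reference, the comparison looked roughly like this (the IP and hostname are placeholders for my actual values):
# direct IP and port
time wget -qO /dev/null http://192.168.1.10:8080
# local TLD resolved through Pi-hole and proxied by Caddy
time wget -qO /dev/null https://glance.home.lan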
Thanks for providing the diagnostic info!
The first two requests took 34ms and 219ms and didn't involve a DNS lookup, while the subsequent 3 requests were solely DNS lookups and took >5s, so I think we can narrow this down to a DNS issue. We can confirm that by running the same diagnose command but using Google's DNS:
docker run --dns 8.8.8.8 --rm local/glance diagnose
This should result in all requests completing in <1s. Another thing to try is timing nslookup on your server (not inside a Docker container):
time nslookup google.com
time nslookup youtube.com
time nslookup reddit.com
If these also end up being slow, then there's likely something wrong with your Pi-hole's configuration. If they complete quickly, then it's likely an issue with Docker's network communicating with Pi-hole. If Pi-hole is on another server, could you also try pinging that server to make sure it's not a local network issue?
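If Pi-hole is elsewhere, something along these lines should cover both checks (the IP is a placeholder for wherever Pi-hole lives):
# query Pi-hole directly to time just its resolution
time nslookup google.com 192.168.1.53
# basic reachability check
ping -c 5 192.168.1.53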
Thanks for your help!
Your instinct looks right - the command with Google's DNS ran much quicker:
Glance version: dev
Go version: go1.23.1
Platform: linux / arm64 / 4 CPUs
In Docker container: yes
Checking network connectivity, this may take up to 10 seconds...
✓ Can resolve cloudflare.com through Cloudflare DoH | 259 bytes, {"Status":0,"TC":false,"RD":true,"RA":true,"AD":tr... | 34ms
✓ Can resolve cloudflare.com through Google DoH | 304 bytes, {"Status":0,"TC":false,"RD":true,"RA":true,"AD":tr... | 224ms
✓ Can resolve github.com | 4.237.22.38 | 46ms
✓ Can resolve reddit.com | 151.101.1.140, 151.101.65.140, 151.101.193.140, 151.101.129.140, 2a04:4e42:600::396, 2a04:4e42:200::396, 2a04:4e42:400::396, 2a04:4e42::396 | 41ms
✓ Can resolve twitch.tv | 151.101.2.167, 151.101.130.167, 151.101.66.167, 151.101.194.167 | 43ms
✓ Can fetch data from YouTube RSS feed | 34770 bytes, <?xml version="1.0" encoding="UTF-8"?><feed xmlns:... | 1083ms
✓ Can fetch data from Twitch.tv GQL | 0 bytes | 220ms
✓ Can fetch data from GitHub API | 2262 bytes, {"current_user_url":"https://api.github.com/user",... | 398ms
✓ Can fetch data from Open-Meteo API | 3329 bytes, {"results":[{"id":2643743,"name":"London","latitud... | 945ms
✓ Can fetch data from Reddit API | 134 bytes, {"kind": "Listing", "data": {"modhash": "", "dist"... | 284ms
✓ Can fetch data from Yahoo finance API | 36764 bytes, {"chart":{"result":[{"meta":{"currency":"USD","sym... | 283ms
✓ Can fetch data from Hacker News Firebase API | 4501 bytes, [42542138,42542100,42537631,42540442,42533685,4253... | 548ms
✓ Can fetch data from Docker Hub API | 4177 bytes, {"creator":7,"id":2343,"images":[{"architecture":"... | 719ms
nslookup on the server is pretty much instantaneous, so I'd guess it's an issue with Docker's network communicating with Pi-hole. Pi-hole is running bare metal on the same server as the Glance container. Do you have an idea of where to look to diagnose this?
Unfortunately this is about as far as my network debugging abilities go. You could try pinging between two random containers to see if it's a widespread Docker network issue (presumably not) or limited to containers trying to connect to Pi-hole, in which case you can start poking around its configuration for anything odd.
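Something like this should work for that test (a sketch using throwaway alpine containers on a user-defined network, where Docker's embedded DNS resolves container names):
# create a scratch network and two containers on it
docker network create pingtest
docker run -d --rm --name ping-a --network pingtest alpine sleep 300
docker run -d --rm --name ping-b --network pingtest alpine sleep 300
# ping one container from the other, then clean up
docker exec ping-a ping -c 5 ping-b
docker stop ping-a ping-b
docker network rm pingtest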
While not ideal, as a workaround you could run Glance with --network=host, which I think should resolve the issue.
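If you go that route, it would look something like this (adjust the config mount to match your setup, and note that port mappings are ignored with host networking):
# share the host's network stack, bypassing Docker's DNS entirely
docker run -d --name glance --network host \
  -v ./glance.yml:/app/config/glance.yml \
  glanceapp/glance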
@jenkshields,
For what it's worth, I've had a similar problem with other containers as well. I'm using AdGuard as my DNS server, and it turns out the default settings had a rate limit of 20 requests per second (per client). I have one VM running many containers (some of which make a lot of simultaneous requests).
Once I removed the rate limit, things started working correctly. It might be worth checking whether you have some client rate limiting on your Pi-hole.
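In case it helps anyone find it, the setting lives in AdGuard Home under Settings > DNS settings, or directly in AdGuardHome.yaml:
# AdGuardHome.yaml (excerpt)
dns:
  ratelimit: 0   # default is 20 requests/second per client; 0 disables the limit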
This fixed my issues as well, thank you!
Hey! Just wanted to validate the fix with AdGuard Home. Keeping the default rate limit times out random requests made by Glance when a lot of services are monitored. Setting it to a higher value (or 0 for unlimited) fixes the problem.
Thank you so much for the fix. In AdGuard, set DNS settings > DNS server configuration > Rate limit to 0. And voilà!
This worked perfectly for me! Thank you so much, man!
thank you homie, you fixed my issue... <3
I don't use any proxies like the ones mentioned above and still get random timeouts.
As an alternative, if your actual DNS controller isn't the AdGuard instance itself, you can simply bypass AdGuard DNS there and fall back to your typical routing.
Doing this resolved my issues with the weather widget.
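You can also scope the override to just the Glance container with Compose's dns option (the equivalent of the --dns flag used earlier in this thread; the resolver IP is whatever upstream you prefer):
# docker-compose.yml (excerpt)
services:
  glance:
    image: glanceapp/glance
    dns:
      - 8.8.8.8   # bypass the local AdGuard/Pi-hole for this container only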
I removed the rate limit in AdGuard Home and this worked for a while, but recently it broke again. Have any other solutions been found for this? I see this is not the only issue calling out strange behavior with Glance's networking.
Want to confirm that this works as well.
This should be closed with #317, and there should be some mention of it in the docs so people don't hit this error in the future.
@RestartDK It has been in the README under "Common issues" for quite a while now.