This website is temporarily unreachable
Refreshing some feeds fails with the error This website is temporarily unreachable (original error: "dial tcp 178.248.237.68:443: i/o timeout").
RSS URL: https://habr.com/ru/rss/best/daily/?fl=ru
The problem with this URL appears only on Heroku hosting and does not occur when Miniflux runs on a dedicated host or a local machine.
Heroku logs are not descriptive enough to understand what's going on:
2021-11-23T09:48:53.417290+00:00 app[web.1]: [ERROR] [Worker] Refreshing the feed #14 returned this error: This website is temporarily unreachable (original error: "dial tcp 178.248.237.68:443: i/o timeout")
The problem may be related to some kind of internal Heroku routing, but I do not know how to find out what is actually going on with this HTTP request.
I see a lot of these on my self-hosted instance when it refreshes YouTube video feeds. Very, very often.
The timeout error suggests that this has something to do with the fact that Heroku apps need to "wake up" when they haven't been accessed in a while. In my experience, this could take up to 10s which is probably longer than the maximum wait time that the refresh allows.
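For reference, the "i/o timeout" part of the message comes from the TCP dial itself, before any HTTP response is read. A minimal Go sketch (the non-routable address is an assumption, used here only to force a dial that never completes) reproduces the same error text:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A deliberately short dial timeout for the demo; Miniflux uses its own value.
	d := net.Dialer{Timeout: 2 * time.Second}

	// 10.255.255.1 is assumed to be unreachable on most networks, so the
	// connection attempt hangs until the dialer timeout fires.
	_, err := d.Dial("tcp", "10.255.255.1:443")
	fmt.Println(err) // typically: dial tcp 10.255.255.1:443: i/o timeout
}
```

If the dial step is what times out, a longer wake-up on the Heroku side (or raising the response timeout alone) would not necessarily change the outcome.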
To fix this error (for YouTube channels), I have to restart Miniflux and then it works fine. It has nothing to do with the YouTube side.
I use local hosting.
Actually, I was wrong. The issue is still there after a restart. I cannot find a pattern. I also tried increasing the HTTP_CLIENT_TIMEOUT to 200 but that does not seem to help either.
Also, I am using Docker (if this is relevant).
New observation: if I select the feed -> Edit -> press Update, it completes fine.
Refresh, on the other hand, does not work (at the exact same moment).
Also getting these errors (always on YouTube feeds):
dial tcp [some address here]:443: connect: cannot assign requested address
and I have to manually restart the Docker service.
Anyway, this is definitely a bug.
After the "dial tcp [some address here]:443: connect: cannot assign requested address" error, to get YouTube feeds working again I have to stop the service (both Miniflux and the database) and start it again. It then works.
Same here. I'm using Docker too.
Possibly related #1380
I've started seeing timeouts on my Miniflux server too. I'm running Miniflux 2.0.37. If I manually request a feed update, it sometimes works; it almost never works with automation. I have the same problem with both of the feeds I follow. I didn't see this issue when I first set up the service.
I've noticed that if I edit a feed, it will update as well. It's just the refresh button or auto-refresh that breaks.
I run 2.0.39 in Docker and I see those timeouts constantly. Running miniflux -reset-feed-errors and restarting the container sometimes helps. I attached an Alpine container to the same process and network namespace to test the feed URL with curl, and it works fine.
I took a look at the code. At first glance, making the net.Dialer Timeout configurable would probably help: https://github.com/miniflux/v2/blob/main/http/client/client.go#L279
Another thing that should improve performance and possibly help fix this issue: do not create an http.Client, transport, and dialer for each request. Currently, TCP connections are not reused at all.
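As a rough illustration of that suggestion (the package name, field values, and configuration knob below are assumptions, not Miniflux's actual code), a single shared transport with a configurable dial timeout would look something like this:

```go
package feedclient

import (
	"net"
	"net/http"
	"time"
)

// sharedTransport is created once and reused by every request, so keep-alive
// TCP connections can be pooled instead of being re-dialed for each fetch.
var sharedTransport = &http.Transport{
	DialContext: (&net.Dialer{
		Timeout: 10 * time.Second, // dial timeout; the value here is just an example
	}).DialContext,
	MaxIdleConns:        100,
	MaxIdleConnsPerHost: 4,
	IdleConnTimeout:     90 * time.Second,
}

// NewClient shares the transport above; only the overall request timeout
// (e.g. HTTP_CLIENT_TIMEOUT) varies per caller.
func NewClient(requestTimeout time.Duration) *http.Client {
	return &http.Client{
		Transport: sharedTransport,
		Timeout:   requestTimeout,
	}
}
```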
I'll look into writing a proper patch when I have some time. For now, I compiled Miniflux with the following patch applied, and it works great for my local installation. Not recommended for production use :wink:
```diff
diff --git i/http/client/client.go w/http/client/client.go
index cb9669..0afe56 100644
--- i/http/client/client.go
+++ w/http/client/client.go
@@ -311,7 +311,7 @@ func (c *Client) buildClient() http.Client {
 	client.Transport = transport
-	return client
+	return *http.DefaultClient
 }
 func (c *Client) buildHeaders() http.Header {
```
Before I open a new bug report, I'm getting exactly the same error in a very specific setup:
- Miniflux is running (successfully) in a Docker container, connected to a Tailscale network (also a Docker container, FWIW), retrieving feeds from an RSSHub deployment in yet another Docker container, all on the same server. All three containers are on the same Docker network.
- In this setup, Miniflux is working fine, except that I consistently, 100% of the time, get this error for any of the RSSHub feeds. I have tried the public DNS name I'm using for convenience, the Tailscale-assigned hostname (+ port), and the Tailscale private IP (+ port), and it fails in each case.
- Note that in a previous setup NOT on a Tailscale network, everything worked fine. Put another way, I have a nearly 100% success rate with any feed other than the RSSHub-over-Tailscale feeds (including the previous non-Tailscale setup) and an exactly 100% failure rate with any feed running through RSSHub-over-Tailscale. Either it doesn't like connecting to another container on a VPS on a Tailscale network, or something about connecting to YouTube is causing this whole thing to fall apart.
For completeness:
- The Tailscale Docker image is connecting to and registering with the Tailscale network and shows up as its own client in Tailscale.
- As noted, all of the other containers are connected to each other and to the world via the Tailscale container's network and do not have their own client entries in Tailscale.
- Weirdly, if I try to use the legacy (and publicly accessible) RSSHub instance I set up, I get another authentication error that I haven't even begun to troubleshoot.
P.S. Same behavior with RSS-Bridge as the feed provider.
This was probably unique to me, but I found that my issue was DNS. I had a misconfiguration on my firewall: my egress source IPs were not stable and I had missed part of the range for outbound DNS requests.
Can we at least implement a retry mechanism with exponential backoff? This issue seems to happen quite randomly and updating the feeds manually works most of the time.
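For illustration, a sketch of what such a retry could look like (the helper name and backoff values are assumptions, not an existing Miniflux API):

```go
package refresh

import (
	"fmt"
	"time"
)

// fetchWithRetry is a hypothetical helper: it retries a failing fetch with
// exponential backoff (2s, 4s, 8s, ...) up to maxAttempts. The fetch argument
// stands in for whatever function performs the actual feed request.
func fetchWithRetry(fetch func() error, maxAttempts int) error {
	delay := 2 * time.Second
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if lastErr = fetch(); lastErr == nil {
			return nil
		}
		if attempt < maxAttempts {
			time.Sleep(delay)
			delay *= 2
		}
	}
	return fmt.Errorf("still failing after %d attempts: %w", maxAttempts, lastErr)
}
```

A real implementation would presumably also cap the total delay and only retry on transient network errors rather than on every failure.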
I am on version 2.0.49. I see this error frequently too, with many feeds. When I click refresh, it works.
1 error - This website is unreachable (original error: "dial tcp: lookup aeon.co: i/o timeout")
https://habr.com/ru/rss/best/daily/?fl=ru
works fine for me.
Closing this old issue.