Failed to report properly when the notifier encountered a network error
I use Telegram for notifications. I ran urlwatch and got the output below in my terminal, but I never received anything on Telegram. Running urlwatch again did not report any changes (and sent no notification either).
$ urlwatch
===========================================================================
01. CHANGED: Test1
===========================================================================
---------------------------------------------------------------------------
CHANGED: Test1 (http://test.test)
---------------------------------------------------------------------------
--- @ Sun, 11 Aug 2019 17:12:37 +0800
+++ @ Sun, 11 Aug 2019 18:48:07 +0800
@@ -1,4 +1,4 @@
-## Old stuff
+## New Stuffs
---------------------------------------------------------------------------
--
urlwatch 2.17, Copyright 2008-2019 Thomas Perl
Website: https://thp.io/2008/urlwatch/
watched 6 URLs in 18 seconds
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/urllib3/connection.py", line 160, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/usr/lib/python3.7/site-packages/urllib3/util/connection.py", line 80, in create_connection
raise err
File "/usr/lib/python3.7/site-packages/urllib3/util/connection.py", line 70, in create_connection
sock.connect(sa)
OSError: [Errno 101] Network is unreachable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/urllib3/connectionpool.py", line 603, in urlopen
chunked=chunked)
File "/usr/lib/python3.7/site-packages/urllib3/connectionpool.py", line 344, in _make_request
self._validate_conn(conn)
File "/usr/lib/python3.7/site-packages/urllib3/connectionpool.py", line 843, in _validate_conn
conn.connect()
File "/usr/lib/python3.7/site-packages/urllib3/connection.py", line 316, in connect
conn = self._new_conn()
File "/usr/lib/python3.7/site-packages/urllib3/connection.py", line 169, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x7122b0d0>: Failed to establish a new connection: [Errno 101] Network is unreachable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/lib/python3.7/site-packages/urllib3/connectionpool.py", line 641, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3.7/site-packages/urllib3/util/retry.py", line 399, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.telegram.org', port=443): Max retries exceeded with url: /*BOTTOKEN*/sendMessage (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7122b0d0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/urlwatch", line 111, in <module>
urlwatch_command.run()
File "/usr/lib/python3.7/site-packages/urlwatch/command.py", line 295, in run
self.urlwatcher.close()
File "/usr/lib/python3.7/site-packages/urlwatch/main.py", line 99, in close
self.report.finish()
File "/usr/lib/python3.7/site-packages/urlwatch/handler.py", line 159, in finish
ReporterBase.submit_all(self, self.job_states, duration)
File "/usr/lib/python3.7/site-packages/urlwatch/reporters.py", line 103, in submit_all
subclass(report, cfg, job_states, duration).submit()
File "/usr/lib/python3.7/site-packages/urlwatch/reporters.py", line 553, in submit
res = self.submitToTelegram(bot_token, chat_id, chunk)
File "/usr/lib/python3.7/site-packages/urlwatch/reporters.py", line 563, in submitToTelegram
data={"chat_id": chat_id, "text": text, "disable_web_page_preview": "true"})
File "/usr/lib/python3.7/site-packages/requests/api.py", line 116, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/lib/python3.7/site-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3.7/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3.7/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='api.telegram.org', port=443): Max retries exceeded with url: /*BOTTOKEN*/sendMessage (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7122b0d0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))
If there are routing / network issues when urlwatch runs, you of course cannot get any notification. It's odd that http://test.test works, but I guess that resolves to localhost, so it's unaffected by network connectivity issues?
> If there are routing issues / network issues when urlwatch runs, of course you cannot get any notification.
Then that would be unreliable: I would lose reports whenever only the notification fails. Can you make it so that when a notification fails, the detected changes (if any) are discarded and the stored values reverted? That way, if I ran urlwatch again successfully, I would not lose any report.
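A minimal sketch of that rollback idea in plain Python. The function name, the file-copy approach, and the callbacks are all illustrative assumptions, not urlwatch's actual internals:

```python
import os
import shutil


def run_with_revert(cache_path, run_jobs, submit_reports):
    """Run jobs, but roll the snapshot cache back if reporting fails.

    Hypothetical sketch: cache_path is the on-disk snapshot database,
    run_jobs() updates it and returns a report, submit_reports() sends
    notifications and raises on failure.
    """
    backup_path = cache_path + '.bak'
    shutil.copy(cache_path, backup_path)      # snapshot the previous state
    report = run_jobs()                       # may rewrite cache_path
    try:
        submit_reports(report)
    except Exception:
        # Reporting failed: restore the old snapshot so the same
        # changes are detected (and reported) again on the next run.
        shutil.move(backup_path, cache_path)
        raise
    os.remove(backup_path)                    # success: keep the new state
    return report
```

With this shape, a `ConnectionError` from the Telegram reporter would leave the cache exactly as it was before the run.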
> If there are routing issues / network issues when urlwatch runs, of course you cannot get any notification.

> Then that would be unreliable: I would lose reports whenever only the notification fails. Can you make it so that when a notification fails, the detected changes (if any) are discarded and the stored values reverted? That way, if I ran urlwatch again successfully, I would not lose any report.
Makes sense, but what if you have multiple reporters (as you might, e.g. with the stdout reporter): does it keep the report if at least one of them fails, or only if all of them fail?
I agree that "losing" notifications is bad. I wonder if the reporters themselves should queue up messages?
I personally prefer to use only one reporter (stdout excluded). If I were to use multiple reporters, I would expect them to stay in sync; one reporter having fewer reports than the others would just create more problems for me. But that's just me; other people might have a different use case.
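One way to make that keep-or-revert choice explicit is to try every reporter, collect the failures, and then decide based on a policy. A sketch with hypothetical names (`submit_all_reporters` and the `policy` flag are not urlwatch's API):

```python
def submit_all_reporters(reporters, report, policy='all_failed'):
    """Try every enabled reporter and decide whether to keep the new state.

    policy='all_failed': revert only if every reporter failed
    policy='any_failed': revert if at least one reporter failed
    Both names are illustrative; urlwatch has no such option.
    """
    failures = []
    for reporter in reporters:
        try:
            reporter(report)
        except Exception as exc:
            failures.append(exc)   # keep going; other reporters still run
    if policy == 'any_failed':
        keep = not failures
    else:
        keep = len(failures) < len(reporters)
    return keep, failures
```

Under `'any_failed'` all reporters stay in sync (one failure reverts the run for everyone); under `'all_failed'` a single working reporter is enough to consider the report delivered.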
> I agree that "losing" notifications is bad. Wonder if the reporters themselves should queue up messages?
I guess so. Instead of just discarding the changes, they would be queued and sent out with the next successful notification. This should work even if only some reporters failed. With multiple reporters the timing of notifications can differ slightly, but the reports themselves would stay the same.
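The queueing idea could look roughly like this: each reporter persists any messages it failed to deliver and retries them, in order, before new ones on the next run. The class name, the JSON queue file, and the injected `send` callable are all assumptions for illustration:

```python
import json
import os


class QueuedReporter:
    """Sketch of a reporter with a persistent retry queue (not urlwatch's API).

    Messages that fail to send are stored on disk and flushed, oldest
    first, the next time a submit succeeds.
    """

    def __init__(self, queue_path, send):
        self.queue_path = queue_path
        self.send = send          # e.g. a function that POSTs to Telegram

    def _load(self):
        if os.path.exists(self.queue_path):
            with open(self.queue_path) as f:
                return json.load(f)
        return []

    def submit(self, message):
        pending = self._load() + [message]   # backlog first, then the new one
        remaining = []
        for msg in pending:
            if remaining:
                remaining.append(msg)        # one failure: keep the rest in order
                continue
            try:
                self.send(msg)
            except Exception:
                remaining.append(msg)        # delivery failed, re-queue
        with open(self.queue_path, 'w') as f:
            json.dump(remaining, f)          # persist whatever is still pending
```

Stopping at the first failure keeps the delivery order intact, so a restored network connection replays the backlog before the newest change.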