gcd-parser
530 Server Error
./get_updates.py -W 3291
Device 3291 (guessed): fenix 6X Pro
Querying Garmin WebUpdater ...Traceback (most recent call last):
File "./get_updates.py", line 108, in <module>
results += us.query_webupdater(device_skus)
File "/usr/src/GARMIN/gcd-parser/grmn/updateserver.py", line 132, in query_webupdater
reply = self.get_webupdater_softwareupdate(requests_xml)
File "/usr/src/GARMIN/gcd-parser/grmn/updateserver.py", line 290, in get_webupdater_softwareupdate
r.raise_for_status()
File "/usr/lib/python3/dist-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 530 Server Error: for url: https://www.garmin.com/support/WUSoftwareUpdate.jsp
I think this is related to https://github.com/petergardfjall/garminexport/issues/79
Hmm ... the returned HTTP status code is 530, but the (custom) error page that comes back says 500. Either way, it looks like a Cloudflare/Garmin issue, unless they've decided to retire WebUpdater or move it to a different URL, which I doubt.
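If anyone wants to poke at it themselves, this is roughly how to see that mismatch without going through get_updates.py (a minimal diagnostic sketch with plain requests; the empty <Requests/> payload is just for illustration, the real request body is built in updateserver.py):

import requests

WEBUPDATER_SOFTWAREUPDATE_URL = "https://www.garmin.com/support/WUSoftwareUpdate.jsp"

# Send a bare POST and dump whatever comes back instead of raising on the status code.
r = requests.post(WEBUPDATER_SOFTWAREUPDATE_URL, data={"req": "<Requests/>"})
print("HTTP status code:", r.status_code)         # 530 while this issue was open
print("Server header:", r.headers.get("Server"))  # usually "cloudflare" when CF fronts the site
print(r.text[:500])                               # the error page itself claimed 500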
Hmm ...
diff --git a/grmn/updateserver.py b/grmn/updateserver.py
index 0827263..8fc614c 100644
--- a/grmn/updateserver.py
+++ b/grmn/updateserver.py
@@ -10,6 +10,7 @@ from .proto import GetAllUnitSoftwareUpdates_pb2
 from xml.dom.minidom import getDOMImplementation, parseString
 from urllib.parse import unquote
 import requests
+import cloudscraper
 
 PROTO_API_GETALLUNITSOFTWAREUPDATES_URL = "http://omt.garmin.com/Rce/ProtobufApi/SoftwareUpdateService/GetAllUnitSoftwareUpdates"
 WEBUPDATER_SOFTWAREUPDATE_URL = "https://www.garmin.com/support/WUSoftwareUpdate.jsp"
@@ -284,7 +285,8 @@ class UpdateServer:
             "req": requests_xml,
         }
-        r = requests.post(WEBUPDATER_SOFTWAREUPDATE_URL, headers=headers, data=data)
+        scraper = cloudscraper.create_scraper()
+        r = scraper.post(WEBUPDATER_SOFTWAREUPDATE_URL, headers=headers, data=data)
         if r.status_code != 200:
             r.raise_for_status()
# pip install cloudscraper ... blah-blah-blah ... )))
Yeah, I was somehow hoping to be able to avoid cloudscraper...
Looks like this was an issue with their servers. It's working fine again.
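For future reference, if the 530s come back, a plain requests session with retries is one way to ride out a transient outage like this without adding cloudscraper (a rough sketch, not code from gcd-parser; the retry counts and status list are guesses, and allowed_methods needs urllib3 >= 1.26):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

WEBUPDATER_SOFTWAREUPDATE_URL = "https://www.garmin.com/support/WUSoftwareUpdate.jsp"

session = requests.Session()
# Retry with exponential backoff on transient 5xx-style answers; 530 is in Cloudflare's
# origin-error range, so it is included alongside the usual suspects.
retry = Retry(
    total=5,
    backoff_factor=2,
    status_forcelist=[500, 502, 503, 504, 520, 530],
    allowed_methods=["POST"],
)
session.mount("https://", HTTPAdapter(max_retries=retry))

r = session.post(WEBUPDATER_SOFTWAREUPDATE_URL, data={"req": "<Requests/>"})
r.raise_for_status()

That keeps the dependency list unchanged, at the cost of not handling an actual Cloudflare JS challenge the way cloudscraper would.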