workers-sdk
🐛 BUG: recently added wrangler retries don't seem to help with flakiness
Which Cloudflare product(s) does this pertain to?
Wrangler
What version(s) of the tool(s) are you using?
3.79.0 [Wrangler]
What version of Node are you using?
20.16.0
What operating system and version are you using?
macOS Sequoia 15.0 (24A335) & Ubuntu Jammy (22.04.5 LTS) on CI
Describe the Bug
Observed behavior
We had retries at the CI level for wrangler-related tasks for a while, but we recently noticed that "native" retries were added in v3.79.0 with https://github.com/cloudflare/workers-sdk/pull/6801. We jumped in, upgraded to a supported version, and removed our CI retry logic, only to find that our CI is flaky again due to wrangler-related errors.
We have reverted to our CI retry logic, but it's not ideal. We would welcome the built-in retries working as intended, since that would simplify our configuration.
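For context, the CI-level fallback we use is roughly equivalent to the wrapper sketched below. This is a minimal sketch only; the file name, attempt count, and backoff values are illustrative assumptions, not our exact configuration.

```ts
// retry-deploy.ts — illustrative CI retry wrapper (run e.g. with `npx tsx retry-deploy.ts`)
import { execSync } from "node:child_process";

const MAX_ATTEMPTS = 3; // illustrative; tune for your pipeline
const BACKOFF_MS = 10_000; // base delay between attempts

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function main() {
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      // Throws on a non-zero exit code, e.g. the 10013 API error shown below.
      execSync("npx wrangler deploy", { stdio: "inherit" });
      return;
    } catch (err) {
      console.error(`wrangler deploy failed (attempt ${attempt}/${MAX_ATTEMPTS})`);
      if (attempt === MAX_ATTEMPTS) throw err;
      await sleep(BACKOFF_MS * attempt); // simple linear backoff
    }
  }
}

main().catch(() => process.exit(1));
```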
Expected behavior
Flakiness should be low with the "native"/built-in retries. Ideally we wouldn't get random wrangler-related failures at all if everything is correct on our side.
It would also be great to have some logging for the retries. Right now it's hard to tell whether the retry logic actually does anything; on our side it just looks like the command ran once and failed immediately.
Steps to reproduce
N/A
It's a random error that happens semi-regularly on our CI when we don't retry on our side.
Please provide a link to a minimal reproduction
No response
Please provide any relevant error logs
> wrangler deploy
⛅️ wrangler 3.79.0
-------------------
Total Upload: 296.15 KiB / gzip: 70.73 KiB
Your worker has access to the following bindings:
- Vars:
- ENVIRONMENT: "dev"
✘ [ERROR] A request to the Cloudflare API (/accounts/xxxyyyzzz/workers/scripts/aaabbbccc/deployments) failed.
workers.api.error.unknown [code: 10013]