Daniel Basilio

Results: 15 comments by Daniel Basilio

@viceice We've tried a timeout of 120s, and a concurrency limit of 2 individually with no change. Going to try both together
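For context, the two individual attempts were roughly these host rules (a sketch assuming the standard Renovate `hostRules` options; each rule was applied on its own):

```json
{
  "hostRules": [
    { "hostType": "npm", "matchHost": "https://registry.npmjs.org/", "timeout": 120000 }
  ]
}
```

and

```json
{
  "hostRules": [
    { "hostType": "npm", "matchHost": "https://registry.npmjs.org/", "concurrentRequestLimit": 2 }
  ]
}
```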

Both settings together don't seem to help; we're still getting the external-host-error more often than not. Anecdotally, it does seem to fail on "later" packages alphabetically ```json { "hostType": "npm",...

Since it's been running for a few days now with

```json
{
  "hostType": "npm",
  "matchHost": "https://registry.npmjs.org/",
  "timeout": 120000,
  "concurrentRequestLimit": 2
}
```

I've noticed that it started hitting a timeout...

Updated my host rule to:

```json
{
  "matchHost": "registry.npmjs.org/",
  "concurrentRequestLimit": 2,
  "timeout": 120000
}
```

(and then the same for our jfrog registry as well in a separate host rule). We're...
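Put together, the `hostRules` section now looks roughly like this (a sketch; the JFrog host below is a placeholder, not our actual registry URL):

```json
{
  "hostRules": [
    {
      "matchHost": "registry.npmjs.org/",
      "concurrentRequestLimit": 2,
      "timeout": 120000
    },
    {
      "matchHost": "example.jfrog.io",
      "concurrentRequestLimit": 2,
      "timeout": 120000
    }
  ]
}
```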

No change:

![image](https://user-images.githubusercontent.com/8311284/189938693-10d2cc7e-142e-448f-a9b4-150abe4bf103.png)

External Host Error:

```json
{
  "hostType": "npm",
  "packageName": "@datadog/browser-logs",
  "err": {
    "name": "TimeoutError",
    "code": "ETIMEDOUT",
    "timings": {
      "start": 1663081515075,
      "socket": 1663081515076,
      "lookup": 1663081519436,
      "connect": 1663081519436,
      "secureConnect": 1663081522929,
      "upload": 1663081522931,
      "response": ...
```

We do not have a timeout configured anywhere in our config

These are the logs for the last day, with no change to our config:

![image](https://user-images.githubusercontent.com/8311284/190244037-1ac50356-8f0e-4220-8f36-eb050d2dc0a9.png)

The errors were respectively:
- Fetching @datadog/browser-logs - Timeout awaiting 'request' for 60000ms
- Fetching...
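That 60000ms presumably comes from a default rather than anything in our config (we don't set a 60s timeout anywhere), so the next experiment is raising the host rule timeout to 4 minutes. Roughly this (a sketch, same rule shape as before; 240000 ms = 4m):

```json
{
  "hostRules": [
    {
      "hostType": "npm",
      "matchHost": "https://registry.npmjs.org/",
      "timeout": 240000,
      "concurrentRequestLimit": 2
    }
  ]
}
```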

We've had it running all weekend with a 4m timeout and this is what it looks like:

![image](https://user-images.githubusercontent.com/8311284/191047295-cb7d3e7f-90bf-45f3-8d2d-25b35d5a1dfd.png)

The failures were (in order):
- @types/svg-sprite-loader - Client network socket disconnected...

We've been running with the 4m change for a week now. We're still seeing intermittent failures, but it's now completing often enough that we're getting PRs through, so most of...