NodeJS installation giving "curl: (92) HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)" when running from fixed IP address multiple times
Operating system and version:
Multiple AWS services (such as AWS Device Farm and AWS CodeDeploy) whose hosts live in a highly limited IP space
What steps did you perform?
nvm install some_version_number
What happened?
In a small percentage of cases, the download of Node from nodejs.org slows to a crawl and then fails. The AWS service teams believe the root cause is throttling by nodejs.org against the highly limited IP space used by the hosts running the nvm installation commands.
Exact output:
nvm install 14.17.1
...
############ 17.8%
############## 19.5%curl: (92) HTTP/2 stream 0 was not closed cleanly: INTERNAL_ERROR (err 2)
What did you expect to happen?
nvm install should install the requested version of Node. A reasonable mitigation for this type of problem would be for the download command (https://github.com/nvm-sh/nvm/blob/master/nvm.sh#L124) to retry with exponential backoff (https://everything.curl.dev/usingcurl/downloads/retry). Could this be made a feature flag (or the default behavior) of nvm?
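For illustration, a minimal sketch of what that could look like as a standalone download, assuming curl's built-in retry options (--retry backs off exponentially between attempts, and --retry-all-errors, available in curl 7.71+, is needed for failures like exit code 92 that curl does not classify as transient); the URL and filename are placeholders:

# Hypothetical download with retries; --retry waits roughly 1s, 2s, 4s, ...
# between attempts, and --retry-all-errors (curl 7.71+) retries even on
# errors curl would not otherwise treat as transient, such as exit code 92.
curl --fail --location --retry 5 --retry-all-errors \
  --output node-v14.17.1-linux-x64.tar.gz \
  https://nodejs.org/dist/v14.17.1/node-v14.17.1-linux-x64.tar.gz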
I don’t think that’s worth adding into nvm itself, considering you can run nvm install yourself inside such a loop. It’s very rare to have network issues, and it usually coincides with releases going out.
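For example, a caller-side loop with exponential backoff might look like the following (a sketch; the version number and attempt count are arbitrary):

for attempt in 0 1 2 3; do
  # Stop as soon as the install succeeds; otherwise wait 1s, 2s, 4s, 8s.
  nvm install 14.17.1 && break
  sleep $((2 ** attempt))
done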
If this is a common problem with a specific vendor, then AWS (or you) could easily set up its own nodejs.org mirror and use the env var to point nvm at it.
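nvm reads the NVM_NODEJS_ORG_MIRROR environment variable for this; a sketch, assuming a mirror at a hypothetical internal hostname:

# Point nvm at a self-hosted mirror of nodejs.org/dist.
# The hostname below is a placeholder for your own mirror.
export NVM_NODEJS_ORG_MIRROR=https://nodejs-mirror.internal.example.com/dist
nvm install 14.17.1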
I also ran into this issue from within official Azure Pipelines service runners.