[bug] V6 Upload progress stalled behind proxy while V5 works
What happened?
V6 breaks uploading behind a proxy on a self-hosted runner; after reverting to V5, it works.
What did you expect to happen?
V6 should honor the configured proxy settings, as V5 does.
How can we reproduce it?
V6 (failing): https://github.com/Tencent/ncnn/actions/runs/20455804174/job/58778392875 — reverted to V5 (working): https://github.com/Tencent/ncnn/actions/runs/20456414421/job/58779448653
Anything else we need to know?
No response
What version of the action are you using?
2.330.0
What are your runner environments?
self-hosted
Are you on GitHub Enterprise Server? If so, what version?
No response
Seconding this; we are also using self-hosted runners. However, I got a different type of error:
```
Run actions/upload-artifact@v6
With the provided path, there will be X files uploaded
Artifact name is valid!
Root directory input is valid!
Beginning upload of artifact content to blob storage
Error: Proxy connection ended before receiving CONNECT response
```
Thirding this, with again a slightly different error message. This one is not obviously proxy-related, though it may be; it's hard to tell what we're doing incorrectly, since the same workflow works with v5:
```
With the provided path, there will be 200 files uploaded
Artifact name is valid!
Root directory input is valid!
Beginning upload of artifact content to blob storage
Error: Unable to make request: ECONNRESET
If you are using self-hosted runners, please make sure your runner has access to all GitHub endpoints: https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners#communication-between-self-hosted-runners-and-github
```
We get odd HTTP 400 errors (also with the most recent version of the cache action). I captured tcpdump traces with both v5 and v6 and compared them.
What I found out so far:
v5 makes proper CONNECT requests (the tunnel handshake) to the proxy before sending data:
```
CONNECT results-receiver.actions.githubusercontent.com:443 HTTP/1.1
Host: results-receiver.actions.githubusercontent.com:443
User-Agent: VSServices/2.330.0.0 (NetStandard; Ubuntu 24.04.3 LTS) ...
Connection: close

CONNECT productionresultssa0.blob.core.windows.net:443 HTTP/1.1
host: productionresultssa0.blob.core.windows.net:443
Connection: close
```
With v6, there is not a single CONNECT request. v6 tries to send data directly to the proxy without establishing CONNECT tunnels first, which is, from my point of view, why the requests fail.
Even when I explicitly use Node 24 and set the environment variable `NODE_USE_ENV_PROXY: 1`, there is not a single CONNECT request in my tcpdump capture.
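For context, the tunnel handshake that v5's traces show (and that seems to be missing in v6) can be sketched like this. This is illustrative, not code from the action; `buildConnectPreamble` is a hypothetical name, and it only formats the request text so the expected shape of a CONNECT is visible:

```javascript
// Hypothetical helper: format the plaintext CONNECT preamble a proxy-aware
// HTTPS client sends to the proxy *before* starting TLS. Only after the proxy
// answers "HTTP/1.1 200 Connection established" does the client run the TLS
// handshake on the same socket and send the real request inside the tunnel.
function buildConnectPreamble(targetHost, targetPort) {
  const authority = `${targetHost}:${targetPort}`;
  return [
    `CONNECT ${authority} HTTP/1.1`,
    `Host: ${authority}`,
    'Connection: close',
    '',
    '', // blank line terminates the header block
  ].join('\r\n');
}

const preamble = buildConnectPreamble(
  'results-receiver.actions.githubusercontent.com',
  443
);
console.log(preamble);
```

If no such preamble appears on the wire, the proxy receives a raw TLS ClientHello (or plain request bytes) it cannot interpret, which matches the resets and 400s reported above.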
Investigating
I created a test to investigate this behaviour: https://github.com/actions/upload-artifact/pull/754
I was not able to reproduce the problem you're referring to.
Feel free to inspect the test, and try to run it in your own environment. Please reopen with more evidence or concrete reproduction steps.
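For anyone wanting to replicate such a forced-proxy check on their own runner, a rough sketch of the harness (an assumed setup; the actual PR workflow may differ): run squid locally and use iptables to reject direct outbound HTTPS, so only traffic that tunnels through the proxy can succeed.

```shell
# Assumed local harness, not the exact PR workflow.
sudo apt-get install -y squid
sudo systemctl start squid             # default listener on 127.0.0.1:3128

# Reject direct HTTPS from everything except the squid process itself
# (squid runs as the "proxy" user on Ubuntu).
sudo iptables -A OUTPUT -p tcp --dport 443 -m owner ! --uid-owner proxy -j REJECT

# Point the step at the proxy; any client that bypasses it now fails fast.
export https_proxy=http://127.0.0.1:3128
export HTTPS_PROXY=http://127.0.0.1:3128
```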
Breakdown of the squid proxy logs
| Request | Purpose |
|---|---|
| `archive.ubuntu.com` (GET) | `apt-get install` downloading curl and iptables dependencies |
| `github.com` (301 → CONNECT) | Redirect from HTTP to HTTPS, then tunnel established |
| `api.github.com:443` | GitHub API calls during checkout |
| `codeload.github.com:443` | Repository code download during `actions/checkout` |
The important part (artifact upload):
```
results-receiver.actions.githubusercontent.com:443   ← Actions artifact API
productionresultssa15.blob.core.windows.net:443      ← Azure Blob Storage (twice: metadata + upload)
results-receiver.actions.githubusercontent.com:443   ← Finalization call
```
What this demonstrates
- **All upload-artifact traffic went through the proxy**: the connections to `results-receiver.actions.githubusercontent.com` (artifact API) and `productionresultssa15.blob.core.windows.net` (Azure storage where artifacts are stored) appear in the proxy log
- **The test is valid**: since iptables blocked direct HTTPS traffic, these connections would have failed if the action tried to bypass the proxy
- **TCP_TUNNEL/200**: indicates successful HTTPS tunnel establishment through the proxy (CONNECT method)
Conclusion: the upload-artifact action properly respects the `https_proxy` environment variable for all of its network operations.
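For readers unfamiliar with squid's logs, `TCP_TUNNEL/200` is the result-code field of a native-format `access.log` entry. A small sketch of extracting it, using an invented but representative log line:

```javascript
// Invented but representative squid access.log line (native format);
// only the result-code field matters for the analysis above.
const line =
  '1700000000.123    145 10.0.0.5 TCP_TUNNEL/200 5120 ' +
  'CONNECT productionresultssa15.blob.core.windows.net:443 - HIER_DIRECT/20.60.1.1 -';

// Native format fields: timestamp, elapsed, client, result-code/status,
// bytes, method, URL, user, hierarchy/peer, content-type.
function resultCode(accessLogLine) {
  return accessLogLine.trim().split(/\s+/)[3];
}

console.log(resultCode(line)); // TCP_TUNNEL/200
```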
For extra clarity:
- v5 uses `@azure/[email protected]`, which depends on `@azure/[email protected]`
- v6 uses `@azure/[email protected]`, which uses `@azure/core-rest-pipeline` instead
I suspected this difference might be the cause of the problem, but the tests demonstrated otherwise.
Thanks @Link- for your analysis. Together with our network engineers, I was able to inspect the traffic between our self-hosted runner and our corporate proxy, and what we found is confusing: actions/upload-artifact (with v6) does make two CONNECT requests to the proxy.
The first one looks fine:
```
CONNECT productionresultssa3.blob.core.windows.net:443 HTTP/1.1
Host: productionresultssa3.blob.core.windows.net:443
User-Agent: azsdk-net-Storage.Blobs/12.27.0 (.NET 8.0.22; Ubuntu 22.04.5 LTS)

HTTP/1.1 200 Connection established
```
The second one looks very unusual:
```
CONNECT productionresultssa3.blob.core.windows.net:443 HTTP/1.1
content-type: application/octet-stream
x-ms-version: 2025-11-05
content-length: 272
accept: application/xml
Host: productionresultssa3.blob.core.windows.net:443
Proxy-Connection: close

HTTP/1.1 400 Bad Request
```
The 2nd request is not obviously “RFC-illegal” purely on syntax (headers are allowed), but it’s absolutely outside what many forward proxies accept for CONNECT, especially:
- Content-Length on CONNECT (implies a body; CONNECT bodies are generally not expected)
- x-ms-version / content-type / accept on CONNECT (these look like Azure Blob HTTP request headers, which normally should appear inside the TLS tunnel, not on the CONNECT itself)
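To make the anomaly concrete, the headers from the second CONNECT above can be screened with a tiny check. This is an illustrative lint, not anything from the action or the Azure SDK; the function name and header list are made up for the example:

```javascript
// Hypothetical lint: entity/representation headers like these belong to the
// tunneled request inside TLS, not to the CONNECT itself, and commonly trip
// forward proxies (e.g. the HTTP 400 above).
const SUSPECT_ON_CONNECT = new Set([
  'content-length', 'content-type', 'accept', 'x-ms-version',
]);

function suspiciousConnectHeaders(headers) {
  return Object.keys(headers)
    .map((name) => name.toLowerCase())
    .filter((name) => SUSPECT_ON_CONNECT.has(name));
}

const flagged = suspiciousConnectHeaders({
  'content-type': 'application/octet-stream',
  'x-ms-version': '2025-11-05',
  'content-length': '272',
  accept: 'application/xml',
  Host: 'productionresultssa3.blob.core.windows.net:443',
  'Proxy-Connection': 'close',
});
console.log(flagged); // flags content-type, x-ms-version, content-length, accept
```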
Also, we are asking ourselves, why does it make two CONNECT requests? Is that intended?
> The 2nd request is not obviously “RFC-illegal” purely on syntax (headers are allowed), but it’s absolutely outside what many forward proxies accept for CONNECT
Good callout 🤔 I need to see what `@azure/core-rest-pipeline` is doing here. We're definitely not enforcing anything of the sort in our HTTP client.
We're facing the same issue at my company with a self-hosted runner on V6. I saw that the runtime was updated to Node 24, which includes some changes to proxy management; maybe it now tries to use the proxy by default. Generally speaking, though, shouldn't artifacts stay inside the network when the file storage is on an internal network?