
Add support for forwarded Tls-Client-Cert

Open JasonN3 opened this issue 1 year ago • 14 comments

This is to allow cert auth through a reverse proxy that terminates the SSL connection.

The changes hook into the existing code that processes the X-Forwarded-For header and related headers. If X-Forwarded-* headers are accepted by the existing code, the request is modified so that the forwarded client certificate is treated as though it were the client certificate presented in the request; this way all existing checks still apply. This is the same process that was already being used for the remote address. Since the header that carries the forwarded certificate differs between reverse proxies, the header is configurable using the option x_forwarded_for_client_cert_header. This approach was chosen because it requires minimal changes and takes advantage of the existing code.
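As an illustration only (not the exact code in this change; the helper name and error handling are placeholders), the handler-side idea looks roughly like this, assuming the proxy sends a URL-escaped PEM the way Traefik's X-Forwarded-Tls-Client-Cert does:

package http

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/pem"
	"errors"
	"net/http"
	"net/url"
)

// parseForwardedClientCert is a hypothetical helper: it reads the configured
// header, parses the forwarded certificate, and puts it where the cert auth
// method already expects a client certificate, so all existing checks apply.
func parseForwardedClientCert(r *http.Request, headerName string) error {
	raw := r.Header.Get(headerName)
	if raw == "" {
		return nil // no forwarded certificate; leave the request untouched
	}

	// Traefik URL-escapes the PEM before putting it in the header.
	pemData, err := url.QueryUnescape(raw)
	if err != nil {
		pemData = raw
	}

	block, _ := pem.Decode([]byte(pemData))
	if block == nil {
		return errors.New("failed to decode the forwarded client certificate")
	}

	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}

	// Treat the forwarded certificate as if it had been presented on the
	// TLS connection itself.
	if r.TLS == nil {
		r.TLS = &tls.ConnectionState{}
	}
	r.TLS.PeerCertificates = []*x509.Certificate{cert}
	return nil
}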

I was able to compile and test this using Traefik and the header X-Forwarded-Tls-Client-Cert.

Resolves: #12178

JasonN3 avatar Sep 22 '22 01:09 JasonN3

CLA assistant check
All committers have signed the CLA.

hashicorp-cla avatar Sep 22 '22 01:09 hashicorp-cla

Greetings, @JasonN3! Thanks for your submission.

Have you had a chance to look at our Contribution Guidelines? Please make sure to review and update the PR to be in compliance with these guidelines.

I will follow up with a more thorough review of the PR, but figured I'd flag the minor things to get the ball rolling on this.

In order to kick off CI, you'll need a changelog entry. The CATEGORY seems like improvement, since we'll be enabling a new use case.

Other than that, a bit more description would help provide context for a review. I'm particularly interested in:

> Your pull request should have a description of what it accomplishes, how it does so, and why you chose the approach you did. PRs should include unit tests that validate correctness and the existing tests must pass. Follow-up work to fix tests does not need a fresh issue filed.

Please let us know if you have any questions.

mpalmi avatar Sep 22 '22 11:09 mpalmi

@JasonN3 To clarify @mpalmi's questions a little (if I may...): I think I read #12178 as applying to the CertAuth plugin, whereas this appears to help Agents' mutual TLS.

I think the former only needs to have the mount tuning enabled, and then read the request headers rather than the connection TLS cert.

It'd be great to get clarity on the use case.

The other question I have is one of precedence. Sometimes you'll want the LB to do mutual TLS on both ends of the connection (mTLS to the client and mTLS to Vault)... But obviously there are issues with defaulting to the header over the mTLS value (if it is end-client supplied, this is bad). It might also warrant a discussion of whether the header-provided cert should be re-verified by Vault rather than assumed to be already validated by the LB/...

cipherboy avatar Sep 22 '22 13:09 cipherboy

Deployment failed with the following error:

The provided GitHub repository does not contain the requested branch or commit reference. Please ensure the repository is not empty.

vercel[bot] avatar Sep 22 '22 13:09 vercel[bot]

@mpalmi, I hadn't read that. I've updated my description and added the changelog entry. Let me know if I need to add more detail to either one or if there's anything else I missed.

@cipherboy, the intent for that issue was to allow client certificate authentication through a reverse proxy. To expand on the use case and the reason I submitted that issue initially: I have a CA server that each of my servers gets a certificate from, and I want my servers to be able to query Vault to get a secret automatically using that certificate as authentication. Prior to this change, I could not do that because my Vault server is behind a LB. The LB would terminate the SSL connection and start a new connection, which caused the client cert to end up in the header X-Forwarded-Tls-Client-Cert, where it wasn't being read by the cert authentication method. Do you have any examples where Vault can handle mTLS other than in cert auth?

As for it being re-verified by Vault, it is. The changes I made do not modify the code that verifies the certificate; they just move the certificate from a header to the root of the request, so the rest of the processes that verify the certificate still apply. I did test this by attempting authentication with a certificate issued by a CA that Vault was not told to trust, and authentication rightfully failed.

JasonN3 avatar Sep 22 '22 13:09 JasonN3

Ah, so I see, this does indeed do client cert now that I trace it. A test might be good in that regard ;-)

I was thinking https://www.vaultproject.io/docs/agent/autoauth/methods/cert was it, but I believe this just allows Agent to interpose and provide its own cert for cert auth, rather than using the client's (and passing that through for Vault).

cipherboy avatar Sep 22 '22 14:09 cipherboy

I'm not sure what test can be added. In order to test just the header being read and moved, the test would need a complete certificate, since the certificate is parsed, and I'm not sure if one is available during testing.

Since this change enables cert auth to function with the Vault server behind a LB, it would allow for auto-auth using the cert method as well. Nothing would change on the client side. When the client makes the request to the LB, the LB rewrites the request. This change is just to allow the Vault server to rewrite the request back to what the client originally made before being processed.

JasonN3 avatar Sep 22 '22 14:09 JasonN3

@JasonN3 I'd probably add one in the cert auth tests:

https://github.com/hashicorp/vault/blob/main/builtin/credential/cert/backend_test.go

There are a lot of test certificates available to use, and when you call vault.NewTestCluster you can add a new option to the test config which enables this header and adds it to the listener:

https://github.com/hashicorp/vault/blob/87801ecf1f7413bd288d82016f55c84d46ea4a99/vault/testing.go#L1143 https://github.com/hashicorp/vault/blob/87801ecf1f7413bd288d82016f55c84d46ea4a99/vault/testing.go#L1472

You'll have to add a header to the client and invalidate the token, e.g.:

https://github.com/hashicorp/vault/blob/87801ecf1f7413bd288d82016f55c84d46ea4a99/builtin/logical/pki/backend_test.go#L5214-L5215 https://github.com/hashicorp/vault/blob/87801ecf1f7413bd288d82016f55c84d46ea4a99/api/client.go#L970

And then re-auth with the header, probably by direct Logical().Write(...) would be my idea.
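Very roughly, such a test might be shaped like this (an illustrative sketch only; the test-cluster option that trusts the forwarded header and the certificate fixture are assumptions, and the real setup will differ):

package cert_test

import (
	"net/url"
	"testing"

	"github.com/hashicorp/vault/api"
	credCert "github.com/hashicorp/vault/builtin/credential/cert"
	vaulthttp "github.com/hashicorp/vault/http"
	"github.com/hashicorp/vault/sdk/logical"
	"github.com/hashicorp/vault/vault"
)

// testCertPEM would hold a test certificate from the existing fixtures
// (placeholder here); it is registered as trusted and also sent in the header.
var testCertPEM string

func TestCertAuth_ForwardedClientCert(t *testing.T) {
	coreConfig := &vault.CoreConfig{
		CredentialBackends: map[string]logical.Factory{
			"cert": credCert.Factory,
		},
	}
	// Assumption: the test cluster's listener is configured (via a new test
	// option) to trust X-Forwarded-* headers, including the client cert header.
	cluster := vault.NewTestCluster(t, coreConfig, &vault.TestClusterOptions{
		HandlerFunc: vaulthttp.Handler,
	})
	cluster.Start()
	defer cluster.Cleanup()
	client := cluster.Cores[0].Client

	// Enable cert auth and register the test certificate as trusted.
	if err := client.Sys().EnableAuthWithOptions("cert", &api.EnableAuthOptions{Type: "cert"}); err != nil {
		t.Fatal(err)
	}
	if _, err := client.Logical().Write("auth/cert/certs/web", map[string]interface{}{
		"certificate": testCertPEM,
		"policies":    "default",
	}); err != nil {
		t.Fatal(err)
	}

	// Drop the root token and present the certificate via the forwarded
	// header instead of on the TLS connection.
	client.ClearToken()
	client.AddHeader("X-Forwarded-Tls-Client-Cert", url.QueryEscape(testCertPEM))

	secret, err := client.Logical().Write("auth/cert/login", nil)
	if err != nil || secret == nil || secret.Auth == nil {
		t.Fatalf("expected successful login via forwarded client cert header: err=%v", err)
	}
}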

My 2c. :-)

cipherboy avatar Sep 22 '22 14:09 cipherboy

I ended up just copying the test certificate from those tests and creating 2 tests that are purely focused on the modified code. Test 1 (reject_bad_cert_in_header) verifies that a certificate that is not properly formatted is rejected. Test 2 (pass_cert) verifies that a properly formatted certificate is moved into r.TLS.PeerCertificates.

Verification that the certificate is valid was not part of this modification as that is done elsewhere using the certificates normally provided in r.TLS.PeerCertificates by the client.

JasonN3 avatar Sep 23 '22 19:09 JasonN3

Thanks @JasonN3! I'll take another look by Friday, feel free to ping me if I haven't gotten around to it...

cipherboy avatar Sep 27 '22 13:09 cipherboy

I found a minor logic error that shouldn't affect anything, but I fixed it anyway.

Also, @cipherboy, ping

JasonN3 avatar Oct 03 '22 15:10 JasonN3

\o sorry about the delay @JasonN3.

I think we're going to need to discuss this a little more internally first. We've definitely talked about the idea before and would like to include the feature, but I think we need to have a few more conversations about implementation first. In particular, I think we'd worried about header use vs connection use, and what the final deployment architecture looks like (how would you ensure clients can't directly connect to the Vault system and provide header values, would you do a separate mTLS to the LB, &c).

I'll let you know what comes of those discussions, they'll hopefully occur in the next couple of weeks.

cipherboy avatar Oct 10 '22 19:10 cipherboy

I understand the need to discuss it further. If there are any concerns, feel free to post them. I might have an answer for you.

As for a client directly connecting to the Vault server and providing the header, that's not possible unless the server owner specifies XForwardedForAuthorizedAddrs to include the client source addresses or the LB is configured to forward that header when received. These are the same limitations that exist for overriding the source address using X-Forwarded-For.

Using the provided change, a TLS connection between the LB and the server is still required because the client certificate auth method mandates a TLS connection; it just won't use mTLS because the LB doesn't have a client certificate. Even if you were to add mTLS between the LB and the Vault server, it wouldn't change anything the client could do. All that mTLS connection would do is verify the LB, but then you still need to verify the client certificate that was passed in the header. The LB can't use the client certificate that was passed to it by the client because the LB doesn't have the private key.

The process it goes through:

  1. The client makes a request to the LB over https and presents its client certificate.
  2. The LB accepts the client certificate (either blindly or after verifying it against a CA cert) and moves it into a header. The client has to hold the matching private key regardless of whether the certificate is checked at this point, because otherwise it won't be able to complete the TLS handshake and communication will fail. The LB should also be configured to override/erase the Tls-Client-Cert (or equivalent) header if it was passed in.
  3. The LB makes a new https connection to the Vault server and passes the client certificate in the header.
  4. The Vault server ensures the source (the LB) is in the XForwardedForAuthorizedAddrs list, makes sure the connection from the LB uses https, and validates that the client certificate is valid and allowed to log in. It then generates a token with the appropriate permissions and passes it back to the LB to be returned to the client.
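To make step 4 concrete, a simplified sketch of the server-side gate (illustrative only; Vault's real code uses its existing X-Forwarded-For machinery and its own types rather than these placeholders):

package http

import (
	"net"
	"net/http"
)

// lbIsAuthorized is a hypothetical, simplified version of the check in step 4:
// the forwarded certificate header is only honored when the request arrived
// over TLS from an address listed in XForwardedForAuthorizedAddrs.
func lbIsAuthorized(r *http.Request, authorized []*net.IPNet) bool {
	if r.TLS == nil {
		return false // connection from the LB must itself use https
	}
	host, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		return false
	}
	ip := net.ParseIP(host)
	for _, n := range authorized {
		if n.Contains(ip) {
			return true
		}
	}
	return false
}

Only after this passes is the certificate pulled out of the header and handed to the normal cert auth validation described above.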

JasonN3 avatar Oct 10 '22 19:10 JasonN3

@cipherboy, how'd the internal discussions go? Are there any changes you guys would like me to make in order to better meet any requirements?

JasonN3 avatar Nov 30 '22 15:11 JasonN3

> Sorry about the delay. We were initially worried about whether or not useful validation would occur on the chain, but we've been unable to reproduce the attack we were thinking of.

Glad to hear the attacks failed. This change shouldn't open up new forms of attacks, but it's definitely better to test it.

> Could you send an unrelated chain in addition to a cert you control, wherein the unrelated chain was a valid trusted chain by Vault, but the cert you controlled would sign the TLS handshake

The certificate for the https communication is separate from the peer certificate that is used for authentication, so they can be and should be unrelated. It would be a bad idea to have a public CA trusted to validate your authentication since you can't control what certs they will sign, but you will want your https traffic to be signed by a public CA (or at least a CA the clients already trust) to validate your server. For the tests I've done, my Vault server is signed by a public CA and authentication is signed by an internal CA.

> I think ideally, we'd like to see the LB's credentials persisted rather than potentially overwritten here, to allow us to do LB credential pinning in the future. I think as such, it makes sense to put this directly on the request, rather than on the req.TLS.PeerCertificates object, to allow plugins and operators to decide which they want to use.

I'm confused by this. What credentials would the LB normally pass? As far as I can find, a LB can't form a mTLS connection with a backend server. It can only pass along the mTLS connection that the client tried to form with the LB, or form a new TLS connection to the backend server. Do you know of any LBs that can initiate a mTLS connection to the backend server? If not, req.TLS.PeerCertificates can only be used by the client and not a LB, so nothing would be lost when it is updated. As for plugins or operators detecting if it was used, they can still check for the header. The header is never removed; it is just copied to where the certificate normally would be if there wasn't a LB, so the rest of the cert authentication code doesn't need to be modified to support this situation and can behave the same regardless of whether a LB is used.

JasonN3 avatar Jan 24 '23 17:01 JasonN3

> Could you send an unrelated chain in addition to a cert you control, wherein the unrelated chain was a valid trusted chain by Vault, but the cert you controlled would sign the TLS handshake

> The certificate for the https communication is separate from the peer certificate that is used for authentication, so they can be and should be unrelated. It would be a bad idea to have a public CA trusted to validate your authentication since you can't control what certs they will sign, but you will want your https traffic to be signed by a public CA (or at least a CA the clients already trust) to validate your server. For the tests I've done, my Vault server is signed by a public CA and authentication is signed by an internal CA.

Right, this was the theoretical attack: can a client of the LB send a two-part chain (valid client cert, unrelated attacker cert), with the LB validating that the TLS connection is signed by the attacker cert, thus passing through to Vault, and with cert-auth succeeding but verifying and approving based on the valid client cert -- even though there was no signature by its corresponding private key. The answer is no, as TLS stacks will strip out unrelated certs from the validation (at least, from our testing, Go's and OpenSSL's will).


> I'm confused by this. What credentials would the LB normally pass? As far as I can find, a LB can't form a mTLS connection with a backend server.

Why can't a LB have a separate mTLS to Vault? That's a common scenario, initiating a new mTLS from LB to a backend service with its own credential, to ensure the data is still protected and authenticated (preventing non-LB from contacting backend service directly). This is perhaps more common when the service is over the network, rather than a LB running on the same instance as the backend service. See e.g., https://docs.nginx.com/nginx/admin-guide/security-controls/securing-http-traffic-upstream/ .

cipherboy avatar Jan 24 '23 17:01 cipherboy

> I'm confused by this. What credentials would the LB normally pass? As far as I can find, a LB can't form a mTLS connection with a backend server.

> Why can't a LB have a separate mTLS to Vault? That's a common scenario, initiating a new mTLS from LB to a backend service with its own credential, to ensure the data is still protected and authenticated (preventing non-LB from contacting backend service directly). This is perhaps more common when the service is over the network, rather than a LB running on the same instance as the backend service. See e.g., https://docs.nginx.com/nginx/admin-guide/security-controls/securing-http-traffic-upstream/ .

I hadn't come across that situation. Normally I just see TLS between the LB and server and not mTLS. In that case, yes, it would cause some information to be lost. However, it would depend on where you plan on verifying the mTLS cert. It should probably be verified before any of the X-Forwarded headers are processed. That way there is less code run before the LB client cert would be rejected. In that case it wouldn't matter if the LB client cert is lost. Alternatively, I could prepend the client cert to the list of PeerCertificates, which is the commit I just pushed (b7a1ffec43976284b21521d4a7f2e66bd00b5358). Only the first certificate is processed for login (builtin/credential/cert/path_login.go line 230) so any additional certificates would still be available, but the current code would ignore them.
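For illustration, the prepend amounts to something like this (a sketch, not the exact commit):

package http

import (
	"crypto/tls"
	"crypto/x509"
)

// prependForwardedCert is a hypothetical helper: it keeps whatever certificates
// were presented on the connection itself (for example the LB's own mTLS client
// cert) and puts the forwarded client cert first, since the cert auth login path
// only inspects the first entry in PeerCertificates.
func prependForwardedCert(state *tls.ConnectionState, forwarded *x509.Certificate) {
	state.PeerCertificates = append([]*x509.Certificate{forwarded}, state.PeerCertificates...)
}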

JasonN3 avatar Jan 24 '23 18:01 JasonN3

@cipherboy, just checking in. Any next steps you need from me?

JasonN3 avatar Feb 13 '23 16:02 JasonN3

@JasonN3 Nope! Thank you again! :-)

We're likely going to merge this soon; I was waiting for 1.13 to branch so that we'd have time to make the changes I've discussed above (w.r.t. using a different PeerCertificates location to allow distinguishing the LB's mTLS certs from the remote peer's, and also allowing this feature to work with non-TLS-enabled LBs).

cipherboy avatar Feb 14 '23 13:02 cipherboy

Would love this feature also for my deployment. Thank you!

se-chris-thach avatar Apr 19 '23 13:04 se-chris-thach

Hey @JasonN3

Don't know if this is the place, but I've patched this into 1.13.2 and it fails to parse the certificate when NGINX is in front of Vault. I don't know if this is an issue with my setup or if NGINX just sends the certificate in a different way. Here are my config files.

Vault agent

vault {
    address = "https://vault.intra.hugorodrigues.xyz"
}

auto_auth {
    method {
        type = "cert"

        config {
            client_cert = "/etc/vault/cert.pem"
            client_key = "/etc/vault/key.pem"
        }
    }
}

cache {
}

api_proxy {
    use_auto_auth_token = "force"
}

listener "tcp" {
    address = "127.0.0.1:8202"
    tls_disable = true
}

Vault

storage "file" {
	path = "/var/lib/vault"
}

ui = true

listener "tcp" {
    tls_cert_file = "/etc/vault/vault.crt"
    tls_key_file = "/etc/vault/vault.key"
    tls_min_version = "tls13"
    x_forwarded_for_authorized_addrs = "127.0.0.1"
    x_forwarded_for_client_cert_header = "X-SSL-CERT"
}

api_addr = "https://127.0.0.1:8200"

NGINX server block

server {
    
    listen 443 ssl http2;
    server_name vault.intra.hugorodrigues.xyz;

    ssl_certificate /etc/nginx/ssl/vault.intra.hugorodrigues.xyz.crt;
    ssl_certificate_key /etc/nginx/ssl/vault.intra.hugorodrigues.xyz.key;
    client_max_body_size 2k;
    ssl_verify_client optional_no_ca;
    location / {
        proxy_pass https://127.0.0.1:8200;
        proxy_set_header Host $host;
        proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_ssl_protocols TLSv1.3;
    }
}

Calling vault using curl via NGINX

{
  "errors": [
    "failed to parse the client certificate: x509: malformed certificate"
  ]
}

Headers that NGINX sends to vault

Host: vault.intra.hugorodrigues.xyz
X-SSL-CERT: -----BEGIN%20CERTIFICATE-----%0AMIICmzCCAk2gAwIBAgIUQmWTANF%2F2RW8W8itJ9fmf%2FfMMjgwBQYDK2VwMGAxCzAJ%0ABgNVBAYTAlBUMRcwFQYDVQQHEw5TYW1vcmEgQ29ycmVpYTESMBAGA1UECQwJU2Fu%0AdGFyw6ltMSQwIgYDVQQDExtSb2RyaWd1ZXMgSW50ZXJtZWRpYXRlIENBIDEwHhcN%0AMjMwNTEyMTUxNTE1WhcNMjQwNTExMTUxNTQ1WjCBrzELMAkGA1UEBhMCUFQxEjAQ%0ABgNVBAgMCVNhbnRhcsOpbTEXMBUGA1UEBxMOU2Ftb3JhIENvcnJlaWExHTAbBgNV%0ABAkTFFJ1YSBkYSBBbGVncmlhIDE0IDFFMREwDwYDVQQREwgyMTM1LTAyNjESMBAG%0AA1UEChMJUm9kcmlndWVzMS0wKwYDVQQDEyRzZWN1cml0eXB2MDEuaW50cmEuaHVn%0Ab3JvZHJpZ3Vlcy54eXowWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAQQIPXeH1EA%0ACfMjST%2F%2FCAq23OemU%2F51yJKR9ljZrpMAMDyKmXmlW6yBzQoWQATalKohxpm4NrHZ%0AvvPQDQNk3zdno4GZMIGWMA4GA1UdDwEB%2FwQEAwIDqDATBgNVHSUEDDAKBggrBgEF%0ABQcDAjAdBgNVHQ4EFgQUjiYjBOFbWnOmvYa6TKqpuOM3Sq4wHwYDVR0jBBgwFoAU%0ACp6taklnERBo0%2BPgxl3vOYaLphYwLwYDVR0RBCgwJoIkc2VjdXJpdHlwdjAxLmlu%0AdHJhLmh1Z29yb2RyaWd1ZXMueHl6MAUGAytlcANBAL4vJC2mizp7PKpAyHr9d%2FDB%0AVt4ZLlYNMc%2FaEcANou0uX70FHNwgLScU%2FNmbtZzcoYS7G1F0Ch1stH33p2trhAg%3D%0A-----END%20CERTIFICATE-----%0A
X-Real-IP: 10.1.10.215
X-Forwarded-For: 10.1.10.215
X-Forwarded-Proto: https
Connection: close
user-agent: curl/8.1.0
accept: */*

hmrodrigues avatar May 22 '23 09:05 hmrodrigues

@hmrodrigues, try replacing $ssl_client_escaped_cert with $ssl_client_raw_cert. In your current config, Nginx is modifying the certificate, which is why it's not able to be parsed.

JasonN3 avatar May 22 '23 11:05 JasonN3

> @hmrodrigues, try replacing $ssl_client_escaped_cert with $ssl_client_raw_cert. In your current config, Nginx is modifying the certificate, which is why it's not able to be parsed.

If I change to raw I receive a 400 Bad Request

Headers sent by NGINX

Host: vault.intra.hugorodrigues.xyz
X-SSL-CERT: -----BEGIN CERTIFICATE-----

MIICmzCCAk2gAwIBAgIUQmWTANF/2RW8W8itJ9fmf/fMMjgwBQYDK2VwMGAxCzAJ
BgNVBAYTAlBUMRcwFQYDVQQHEw5TYW1vcmEgQ29ycmVpYTESMBAGA1UECQwJU2Fu
dGFyw6ltMSQwIgYDVQQDExtSb2RyaWd1ZXMgSW50ZXJtZWRpYXRlIENBIDEwHhcN
MjMwNTEyMTUxNTE1WhcNMjQwNTExMTUxNTQ1WjCBrzELMAkGA1UEBhMCUFQxEjAQ
BgNVBAgMCVNhbnRhcsOpbTEXMBUGA1UEBxMOU2Ftb3JhIENvcnJlaWExHTAbBgNV
BAkTFFJ1YSBkYSBBbGVncmlhIDE0IDFFMREwDwYDVQQREwgyMTM1LTAyNjESMBAG
A1UEChMJUm9kcmlndWVzMS0wKwYDVQQDEyRzZWN1cml0eXB2MDEuaW50cmEuaHVn
b3JvZHJpZ3Vlcy54eXowWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAQQIPXeH1EA
CfMjST//CAq23OemU/51yJKR9ljZrpMAMDyKmXmlW6yBzQoWQATalKohxpm4NrHZ
vvPQDQNk3zdno4GZMIGWMA4GA1UdDwEB/wQEAwIDqDATBgNVHSUEDDAKBggrBgEF
BQcDAjAdBgNVHQ4EFgQUjiYjBOFbWnOmvYa6TKqpuOM3Sq4wHwYDVR0jBBgwFoAU
Cp6taklnERBo0+Pgxl3vOYaLphYwLwYDVR0RBCgwJoIkc2VjdXJpdHlwdjAxLmlu
dHJhLmh1Z29yb2RyaWd1ZXMueHl6MAUGAytlcANBAL4vJC2mizp7PKpAyHr9d/DB
Vt4ZLlYNMc/aEcANou0uX70FHNwgLScU/NmbtZzcoYS7G1F0Ch1stH33p2trhAg=
-----END CERTIFICATE-----

From what I can figure, every header after X-SSL-CERT isn't sent by NGINX. If I put proxy_set_header X-SSL-CERT at the end, the Connection, user-agent and accept headers are missing as well.

Host: vault.intra.hugorodrigues.xyz
X-Real-IP: 10.1.10.215
X-Forwarded-For: 10.1.10.215
X-Forwarded-Proto: https
X-SSL-CERT: -----BEGIN CERTIFICATE-----

MIICmzCCAk2gAwIBAgIUQmWTANF/2RW8W8itJ9fmf/fMMjgwBQYDK2VwMGAxCzAJ
BgNVBAYTAlBUMRcwFQYDVQQHEw5TYW1vcmEgQ29ycmVpYTESMBAGA1UECQwJU2Fu
dGFyw6ltMSQwIgYDVQQDExtSb2RyaWd1ZXMgSW50ZXJtZWRpYXRlIENBIDEwHhcN
MjMwNTEyMTUxNTE1WhcNMjQwNTExMTUxNTQ1WjCBrzELMAkGA1UEBhMCUFQxEjAQ
BgNVBAgMCVNhbnRhcsOpbTEXMBUGA1UEBxMOU2Ftb3JhIENvcnJlaWExHTAbBgNV
BAkTFFJ1YSBkYSBBbGVncmlhIDE0IDFFMREwDwYDVQQREwgyMTM1LTAyNjESMBAG
A1UEChMJUm9kcmlndWVzMS0wKwYDVQQDEyRzZWN1cml0eXB2MDEuaW50cmEuaHVn
b3JvZHJpZ3Vlcy54eXowWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAQQIPXeH1EA
CfMjST//CAq23OemU/51yJKR9ljZrpMAMDyKmXmlW6yBzQoWQATalKohxpm4NrHZ
vvPQDQNk3zdno4GZMIGWMA4GA1UdDwEB/wQEAwIDqDATBgNVHSUEDDAKBggrBgEF
BQcDAjAdBgNVHQ4EFgQUjiYjBOFbWnOmvYa6TKqpuOM3Sq4wHwYDVR0jBBgwFoAU
Cp6taklnERBo0+Pgxl3vOYaLphYwLwYDVR0RBCgwJoIkc2VjdXJpdHlwdjAxLmlu
dHJhLmh1Z29yb2RyaWd1ZXMueHl6MAUGAytlcANBAL4vJC2mizp7PKpAyHr9d/DB
Vt4ZLlYNMc/aEcANou0uX70FHNwgLScU/NmbtZzcoYS7G1F0Ch1stH33p2trhAg=
-----END CERTIFICATE-----

A comment on an issue at https://github.com/kubernetes/ingress-nginx/issues/1714#issuecomment-345728734 says that $ssl_client_raw_cert doesn't have a valid header format. That may explain the missing headers issue.

hmrodrigues avatar May 23 '23 10:05 hmrodrigues

Hmm, we might want to enable base64url decoding of headers then.

Just for transparency, I didn't get as much time this release as I'd have liked to look at this, so sadly I think this will have to slip to early in the 1.15 cycle.

cipherboy avatar May 23 '23 11:05 cipherboy

Passing both the raw header and the result of url.QueryUnescape to base64.URLEncoding.DecodeString also fails with malformed certificate

hmrodrigues avatar May 23 '23 13:05 hmrodrigues

I found the issue. It's an issue with how I'm checking whether there was an error when base64 decoding. Traefik, the LB I use, base64 encodes the certificate and then URL encodes it; the reverse is done when trying to parse the certificate. However, when I decoded the base64, I checked that there was content returned instead of checking that an error wasn't returned. Normally that would mean the same thing, but DecodeString returns as far as it got before it ran into an error. Line 534 of handler.go just needs to be adjusted to check err != nil and then attempt to parse the certificate regardless of whether it was able to base64 decode it.
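A sketch of what that fallback could look like (illustrative only, not the actual handler.go code; it assumes any URL-unescaping has already happened earlier):

package http

import (
	"crypto/x509"
	"encoding/base64"
	"encoding/pem"
)

// parseForwardedCertHeader is a hypothetical helper showing the fallback:
// check the error returned by DecodeString rather than whether it returned any
// bytes, and still attempt to parse the certificate when base64 decoding fails,
// using the raw header value instead.
func parseForwardedCertHeader(headerValue string) (*x509.Certificate, error) {
	data := []byte(headerValue)
	if decoded, err := base64.StdEncoding.DecodeString(headerValue); err == nil {
		// e.g. Traefik base64-encodes the certificate before URL-encoding it
		data = decoded
	}

	// Accept either PEM (e.g. nginx's $ssl_client_raw_cert) or bare DER.
	if block, _ := pem.Decode(data); block != nil {
		return x509.ParseCertificate(block.Bytes)
	}
	return x509.ParseCertificate(data)
}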

JasonN3 avatar May 23 '23 13:05 JasonN3

Yup, now it fails with "failed to decode the client certificate: illegal base64 data at input byte 0" because nginx doesn't base64 encode the certificate.

hmrodrigues avatar May 23 '23 13:05 hmrodrigues

@JasonN3 Which version of Traefik are you using? It looks like it matters.

It appears that how Traefik encodes the header was changed in this commit https://github.com/traefik/traefik/commit/2c550c284d19b02e2ff4c9c9e066cf548349a033, affecting Traefik v2.9.2 and newer.

I forked your branch JasonN3:clientcert and made some changes to adhere to this new encoding scheme and it works with Traefik v2.10.1 (latest at the time of writing this). See my changes here.

The difference in Traefik v2.9.2 or newer is that the certificate PEM is no longer URL-escaped and the new lines and BEGIN-END statements are removed.

If someone has a better way of decoding this string, I'm totally open. It also kind of seems that how the certificate is encoded in this header is proxy dependent (Traefik, nginx, etc) based on all the comments.
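For reference, decoding that newer Traefik format might look roughly like this (a sketch; it assumes the header value is the bare base64 of the DER certificate, with the BEGIN/END lines and newlines stripped):

package http

import (
	"crypto/x509"
	"encoding/base64"
)

// parseTraefikForwardedCert is a hypothetical helper for the newer Traefik
// encoding: the header value is assumed to be the bare base64 of the DER
// certificate, with the BEGIN/END lines and newlines already removed.
func parseTraefikForwardedCert(headerValue string) (*x509.Certificate, error) {
	der, err := base64.StdEncoding.DecodeString(headerValue)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}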

se-chris-thach avatar May 23 '23 13:05 se-chris-thach

> If someone has a better way of decoding this string, I'm totally open. It also kind of seems that how the certificate is encoded in this header is proxy dependent (Traefik, nginx, etc) based on all the comments.

Maybe add a second configuration option to specify the decode method based on the proxy used?

hmrodrigues avatar May 23 '23 14:05 hmrodrigues

Also, regarding nginx, just escaping the certificate also fails to parse it. Could it be related to the certificate being sent as PEM by nginx? ParseCertificate's docs say ASN.1 DER.

hmrodrigues avatar May 24 '23 10:05 hmrodrigues