sample-apiserver
Running it stand-alone causes an error
Following the README.md, I created a CA and client certificates, but running the apiserver stand-alone results in a 401 error.
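For reference, a sketch of how such certificates can be produced with openssl (the README.md has the authoritative steps; the filenames and the password here only match the curl command below, the subject names are assumptions):

# self-signed CA (hypothetical subject name)
openssl req -x509 -nodes -newkey rsa:4096 -days 365 -keyout ca.key -out ca.crt -subj "/CN=localhost-ca"
# client key + CSR, signed by the CA (hypothetical subject)
openssl req -nodes -newkey rsa:4096 -keyout client.key -out client.csr -subj "/CN=development/O=system:masters"
openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt
# bundle for curl's --cert-type P12
openssl pkcs12 -export -in client.crt -inkey client.key -out client.p12 -passout pass:password

A stand-alone invocation would be along these lines (these flags come from the generic apiserver options the sample uses; the README's exact command may differ):

go run . --secure-port 8443 --etcd-servers http://localhost:2379 --client-ca-file ca.crt

With the server running, the request below then returns 401: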
curl -fv -k --cert-type P12 --cert client.p12:password https://localhost:8443/apis
- Trying 127.0.0.1:8443...
- Connected to localhost (127.0.0.1) port 8443 (#0)
- ALPN, offering h2
- ALPN, offering http/1.1
- successfully set certificate verify locations:
- CAfile: /etc/ssl/cert.pem
- CApath: none
- (304) (OUT), TLS handshake, Client hello (1):
- (304) (IN), TLS handshake, Server hello (2):
- (304) (IN), TLS handshake, Unknown (8):
- (304) (IN), TLS handshake, Request CERT (13):
- (304) (IN), TLS handshake, Certificate (11):
- (304) (IN), TLS handshake, CERT verify (15):
- (304) (IN), TLS handshake, Finished (20):
- (304) (OUT), TLS handshake, Certificate (11):
- (304) (OUT), TLS handshake, CERT verify (15):
- (304) (OUT), TLS handshake, Finished (20):
- SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256
- ALPN, server accepted to use h2
- Server certificate:
- subject: CN=localhost@1656666955
- start date: Jul 1 08:15:55 2022 GMT
- expire date: Jul 1 08:15:55 2023 GMT
- issuer: CN=localhost-ca@1656666955
- SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
- Using HTTP2, server supports multiplexing
- Connection state changed (HTTP/2 confirmed)
- Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
- Using Stream ID: 1 (easy handle 0x120011e00)
> GET /apis HTTP/2
> Host: localhost:8443
> user-agent: curl/7.79.1
> accept: */*
- Connection state changed (MAX_CONCURRENT_STREAMS == 1000)!
< HTTP/2 401
< audit-id: 6e9350e3-c5b0-48f9-82fd-becdb0545783
< cache-control: no-cache, private
< content-type: application/json
< content-length: 157
< date: Sun, 03 Jul 2022 14:56:32 GMT
- The requested URL returned error: 401
- stopped the pause stream!
- Connection #0 to host localhost left intact
curl: (22) The requested URL returned error: 401
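Since -f suppresses the response body and the 401 carried a 157-byte JSON payload (see content-type and content-length above), re-running without -f, for example:

curl -sk --cert-type P12 --cert client.p12:password https://localhost:8443/apis

prints the Status object the apiserver returned, which can hint at why the credentials were not accepted.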
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.