kube-openapi
Collect params for OpenAPI V3
goos: darwin
goarch: arm64
pkg: k8s.io/kube-openapi/pkg/spec3
BenchmarkSharedParamsDeserialize/apiv1spec.json-8                          98   12138503 ns/op   12245326 B/op   60819 allocs/op
BenchmarkSharedParamsDeserialize/apiv1spec.sharedparams.json-8            100   10520641 ns/op    9892803 B/op   68528 allocs/op
BenchmarkSharedParamsDeserialize/appsv1spec.json-8                        200    5920978 ns/op    5708092 B/op   27386 allocs/op
BenchmarkSharedParamsDeserialize/appsv1spec.sharedparams.json-8           230    5162531 ns/op    4871387 B/op   30219 allocs/op
BenchmarkSharedParamsDeserialize/authorizationv1spec.json-8              2575     466165 ns/op     537114 B/op    2521 allocs/op
BenchmarkSharedParamsDeserialize/authorizationv1spec.sharedparams.json-8 2620     458194 ns/op     526593 B/op    2664 allocs/op
PASS
ok k8s.io/kube-openapi/pkg/spec3 9.266s
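The "sharedparams" variants above deduplicate common query parameters into a single definition under components, referenced everywhere else with $ref. A minimal sketch of why that shrinks the serialized spec, using hypothetical path and parameter names (the real PR uses hash-suffixed component keys, not a bare "dryRun"):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// specSizes builds the same tiny spec two ways: with a common query
// parameter inlined on every path, and with one shared definition under
// components referenced via $ref.
func specSizes() (inlineBytes, sharedBytes int) {
	param := map[string]any{
		"name": "dryRun", "in": "query",
		"schema":      map[string]any{"type": "string"},
		"description": "When present, indicates that modifications should not be persisted.",
	}
	ref := map[string]any{"$ref": "#/components/parameters/dryRun"}

	// paths attaches the given parameter value to four hypothetical paths.
	paths := func(p any) map[string]any {
		out := map[string]any{}
		for _, name := range []string{"/a", "/b", "/c", "/d"} {
			out[name] = map[string]any{"parameters": []any{p}}
		}
		return out
	}
	inline := map[string]any{"paths": paths(param)}
	shared := map[string]any{
		"components": map[string]any{"parameters": map[string]any{"dryRun": param}},
		"paths":      paths(ref),
	}
	a, _ := json.Marshal(inline)
	b, _ := json.Marshal(shared)
	return len(a), len(b)
}

func main() {
	i, s := specSizes()
	fmt.Printf("inline=%dB shared=%dB\n", i, s)
}
```

The break-even point depends on the parameter's size: each $ref occurrence costs ~40 bytes, so sharing only pays off once the parameter body (with its description) is larger than that, which holds for the real Kubernetes parameters.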
k/k diff PR: https://github.com/kubernetes/kubernetes/pull/120242
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: Jefftree
Can we get a quick look at the performance impact of this?
https://github.com/kubernetes/kubernetes/pull/120242/commits/afa3e4574edd78b437c3a76637153d1bcb1531a3 shows the OpenAPI diff.
For sizes: /api/v1 showed a nice improvement, from 1.7MB -> 1MB. The other endpoints saw reductions between 10-20%.
I will gather some additional numbers on cpu/mem differences if any exist.
This PR is still WIP, so feel free to hold off on review.
Yeah, I think what you could do is use these new OpenAPI specs and re-run some of the benchmarks we have, to see the CPU/memory allocation reductions.
@apelisse Attached some benchmarking data. Roughly a 15-25% improvement in CPU and memory for the larger GVs, and not much difference for the smaller ones. The total number of allocations went up slightly, though the total memory footprint of those allocations improved.
/cc @sttts This was pretty much a copy and paste of the v2 param collector with one change.
- The fieldValidation and dryRun unsharing special condition was removed. Those two parameters were only used until v1.26 and v1.27 respectively, and this change will go into 1.29, which should be outside the one-version skew that kubectl supports?
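The v2 collector that this was copied from works by hoisting identical parameters into a shared table keyed by their content. A simplified, hypothetical sketch of that dedup step (Param and collectParams are stand-ins, not the actual spec3 types; the real code hashes the serialized form rather than using it as the key directly):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Param is a stand-in for a spec parameter (hypothetical simplified shape).
type Param struct {
	Name   string `json:"name"`
	In     string `json:"in"`
	Schema string `json:"schema,omitempty"`
}

// collectParams deduplicates identical parameters across operations: each
// unique parameter (keyed by its JSON encoding) is stored once in a shared
// table, and every per-operation occurrence is replaced by that key, which
// would become a $ref in the emitted spec.
func collectParams(ops [][]Param) (shared map[string]Param, refs [][]string) {
	shared = map[string]Param{}
	for _, op := range ops {
		var opRefs []string
		for _, p := range op {
			b, _ := json.Marshal(p)
			key := string(b) // real code would hash this into a short name
			shared[key] = p
			opRefs = append(opRefs, key)
		}
		refs = append(refs, opRefs)
	}
	return shared, refs
}

func main() {
	dryRun := Param{Name: "dryRun", In: "query", Schema: "string"}
	fieldMgr := Param{Name: "fieldManager", In: "query", Schema: "string"}
	ops := [][]Param{{dryRun, fieldMgr}, {dryRun}, {dryRun, fieldMgr}}
	shared, _ := collectParams(ops)
	fmt.Printf("%d shared definitions for 5 occurrences\n", len(shared))
}
```

Removing the unsharing special case just means fieldValidation and dryRun flow through this path like every other parameter instead of being skipped.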
Do you know why allocs are going up? I was going to try to find suggestions, but let's forget it; it's probably not a big deal anyway.
PR needs rebase.
@apelisse @sttts This PR will need to be rebased, but is this change something we're okay with landing?
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle rotten
/close
@k8s-triage-robot: Closed this PR.