kube-storage-version-migrator
Umbrella issue: migration performance improvement
Talked with @caesarxuchao. There are a few potential performance improvements we can make to the migrator controller:
- [ ] Don't retry on UPDATE conflict. A conflict means the object has already been written by another client, so it is stored in the up-to-date storage version (see the first sketch below).
- [ ] Don't deserialize the GET result before the UPDATE. We already skip conversion to tolerate potential client-server version skew; we can skip (de)serialization as well and just PUT the data blob we got (see the second sketch below).
- [x] ~Don't GET. As with skipping the retry on UPDATE, we can skip the GET and use the result from the LIST.~ We do this already.
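
A minimal sketch of the first item: if the UPDATE comes back with a 409, another writer has already rewritten the object, which means the apiserver has already re-encoded it in the current storage version, so the migrator can count it as done. This assumes a dynamic-client-based migration loop; `migrateOneItem` is a hypothetical helper, not the migrator's actual code.

```go
package migrator

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/client-go/dynamic"
)

// migrateOneItem (hypothetical name) rewrites one object so the apiserver
// re-encodes it in the current storage version. An unmodified UPDATE is
// enough; no field needs to change.
func migrateOneItem(ctx context.Context, client dynamic.ResourceInterface, obj *unstructured.Unstructured) error {
	_, err := client.Update(ctx, obj, metav1.UpdateOptions{})
	switch {
	case err == nil:
		return nil
	case apierrors.IsConflict(err):
		// 409: another client has written the object since our LIST, so it
		// is already stored in the up-to-date storage version. Treat the
		// item as migrated instead of retrying.
		return nil
	case apierrors.IsNotFound(err):
		// The object was deleted since the LIST; nothing left to migrate.
		return nil
	default:
		return err
	}
}
```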
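
And a sketch of the second item: since conversion is already skipped, the JSON bytes returned by the LIST could be PUT back verbatim, avoiding a decode/encode round trip in the migrator. The `putRaw` helper and the path handling below are hypothetical; note the raw bytes still carry the object's resourceVersion, so a stale write fails with the same 409 handled above.

```go
package migrator

import (
	"context"

	"k8s.io/client-go/rest"
)

// putRaw (hypothetical name) writes the raw JSON of one listed object back
// to the apiserver without deserializing it first. path is the object's
// full resource path, e.g. "/api/v1/namespaces/default/configmaps/foo".
func putRaw(ctx context.Context, c rest.Interface, path string, raw []byte) error {
	// The bytes are sent exactly as the LIST returned them. They include
	// metadata.resourceVersion, so optimistic concurrency still applies and
	// a concurrent writer surfaces as a 409 conflict.
	return c.Put().
		AbsPath(path).
		SetHeader("Content-Type", "application/json").
		Body(raw).
		Do(ctx).
		Error()
}
```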
/priority important-long-term
/kind cleanup
@roycaihw: The label(s) priority/important-long-term cannot be applied, because the repository doesn't have them
/priority important-longterm
/remove-lifecycle rotten
/lifecycle frozen