FIX: Update watch resource_version on BOOKMARK events.
The BOOKMARK feature was not being used properly.
On a BOOKMARK event, the watch is expected to update its resource_version to the one carried in the BOOKMARK event.
Currently the BOOKMARK event is treated the same as ERROR, so the watch resource_version is never updated.
We now decode the BOOKMARK event as a plain object/dict, and also update watch.resource_version.
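For reference, a simplified sketch of the idea, assuming the decoding happens in `Watch.unmarshal_event` as in the current client (the real method also sets `raw_object` and handles custom objects returned as dicts):

```python
import json

def unmarshal_event(self, data, return_type):
    # Sketch of the fixed decoding path, not the exact PR diff.
    js = json.loads(data)
    if js['type'] == 'BOOKMARK':
        # Keep the object as a plain dict, but record the new
        # resourceVersion so a reconnect can resume from this point.
        self.resource_version = js['object']['metadata']['resourceVersion']
    elif return_type and js['type'] != 'ERROR':
        # Normal events are still deserialized into the model class,
        # updating self.resource_version from object.metadata as before.
        ...
    return js
```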
/kind bug
Fixes #1729 Related: #23578 #21087
NONE
The committers listed above are authorized under a signed CLA.
- :white_check_mark: login: snjypl / name: sanjayp (f379bbabf33f239f49b0e1b7652b6212f3960910)
Welcome @snjypl!
It looks like this is your first PR to kubernetes-client/python 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.
You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.
You can also check if kubernetes-client/python has its own contribution guidelines.
You may want to refer to our testing guide if you run into trouble with your tests not passing.
If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!
Thank you, and welcome to Kubernetes. :smiley:
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: snjypl
To complete the pull request process, please assign yliaog after the PR has been reviewed.
You can assign the PR to them by writing /assign @yliaog in a comment when ready.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
Since the resourceVersion is the only thing that matters in a BOOKMARK event, I don't think it is required or expected to convert the bookmark event to the return_type, since a bookmark event will not have all the required fields.
When we treat it as an ERROR, we skip updating watcher.resource_version to the one in the bookmark event, which defeats the purpose of having the BOOKMARK feature.
As far as deserializing the bookmark event is concerned, the client should be fine with a dict object; the client is expected to check event['type'] and deal with the bookmark event as a dict.
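For illustration, a bookmark event as the client sees it is roughly the dict below (the resourceVersion value is made up); metadata is essentially the only populated field, which is why converting it to the full model type isn't useful:

```python
bookmark_event = {
    'type': 'BOOKMARK',
    'object': {
        'kind': 'Pod',
        'apiVersion': 'v1',
        # The new resourceVersion is the only meaningful payload.
        'metadata': {'resourceVersion': '12746'},
    },
}
```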
The current behaviour when allow_watch_bookmarks=True and resource_version=<alreadyexpiredresourceversion> is to raise an ApiException (410); will this PR change that behaviour?
> The current behaviour when allow_watch_bookmarks=True and resource_version=<alreadyexpiredresourceversion> is to raise an ApiException (410); will this PR change that behaviour?
No, it won't; that is the expected behavior.
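For context, a common recovery pattern around that 410 is to re-list and restart the watch; a minimal sketch, not part of this PR (the namespace and the seeding of resource_version are assumptions for the example):

```python
from kubernetes import client, config, watch
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()
namespace = 'default'  # assumption: example namespace

# Seed resource_version from a fresh list.
rv = v1.list_namespaced_pod(namespace).metadata.resource_version
w = watch.Watch()
while True:
    try:
        for event in w.stream(v1.list_namespaced_pod, namespace,
                              resource_version=rv,
                              allow_watch_bookmarks=True):
            pass  # handle ADDED/MODIFIED/DELETED/BOOKMARK events here
    except ApiException as e:
        if e.status != 410:
            raise
        # 410 Gone: resource_version expired; re-list and resume.
        rv = v1.list_namespaced_pod(namespace).metadata.resource_version
```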
/assign @ecerulm
@roycaihw, what do you want me to do here?
It looks like you've been commenting on this PR already. Would you mind doing a first round of review? @ecerulm
@roycaihw I'm not familiar with the setup of this project and how contributions are made. This PR is based on master, which seems to be 23.0.1, but the latest release is 23.3.0 (tag v23.3.0). Are PRs supposed to be based on master, on the tag v23.3.0, or on the tag 23.6.0?
If I understand right, before this PR, with `allow_watch_bookmarks=True` you could have a loop like this:
```python
for event in watcher.stream(v1.list_namespaced_pod, namespace, allow_watch_bookmarks=True):
    print(event['object'].metadata.resource_version)
```
After the introduction of this PR, the above will raise AttributeError: 'dict' object has no attribute 'metadata' when it reaches a bookmark event: event['object'].metadata is of type V1ObjectMeta for the other events, but event['object'] is a plain dict for a bookmark event.
That can potentially break the code of people already using allow_watch_bookmarks=True today when they upgrade to a version containing this PR's code.
I'm not sure about the rules on backwards compatibility that this project follows; it may be OK. I think somebody else should review this, @roycaihw, I'm not that familiar with the code base.
@ecerulm
> If I understand right, before this PR, with `allow_watch_bookmarks=True` you could have a loop like this:
> `for event in watcher.stream(v1.list_namespaced_pod, namespace, allow_watch_bookmarks=True): print(event['object'].metadata.resource_version)`
No, even with the latest release you will get the same error; even now the bookmark event is returned as a dict.
This PR does not change that, so it is not backward incompatible.
If the user has opted into the bookmark feature by passing allow_watch_bookmarks, then this would be the recommended way of handling the events.
Note: this is the existing behavior; this PR does not change it.
```python
for event in watcher.stream(v1.list_namespaced_pod, namespace, allow_watch_bookmarks=True):
    if event['type'] == 'ERROR':
        # handle error
        continue
    if event['type'] == 'BOOKMARK':
        # handle bookmark: event['object'] is a plain dict
        continue
    print(event['object'].metadata.resource_version)
```
@roycaihw @fabianvf @yliaog
Is there anything I could do to help with the review? Do I need to add any notes?
It would be really great to have this feature.
@roycaihw ping! :D
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages PRs according to the following rules:
> - After 90d of inactivity, lifecycle/stale is applied
> - After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
> - After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
> You can:
> - Reopen this PR with /reopen
> - Mark this PR as fresh with /remove-lifecycle rotten
> - Offer to help out with Issue Triage
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.