list_namespaced_event with resource_version and resource_version_match not working as expected
What happened (please include outputs or screenshots): I am trying to list events that are not older than a specific resource_version, but it does not seem to work as expected: I always get all the events back. See the code example below, where I try to demonstrate the issue. In the first run I get all events. In the second run I try to get only the events that are not older than the highest version from the first run minus 50.
What you expected to happen: The second run should only have returned about 50 events.
I might have completely misunderstood this API, so feel free to close if this is the expected behavior.
How to reproduce it (as minimally and precisely as possible):
from kubernetes import client, config, watch
from kubernetes.client.rest import ApiException
import os
def pull_k8s_events(last_resource_version):
    print(f"\tGetting the events NotOlderThan {last_resource_version}")
    config.load_config()
    api_instance = client.CoreV1Api()
    namespace = 'my-namespace'
    first_event_version = None
    last_event_version = None
    events = api_instance.list_namespaced_event(
        namespace,
        resource_version=last_resource_version,
        resource_version_match="NotOlderThan",
    )
    for event in events.items:
        # Compare versions numerically; the API returns resourceVersion as a string.
        resource_version = int(event.metadata.resource_version)
        # print(f"{event.event_time if event.event_time is not None else event.last_timestamp} {event.metadata.name} {event.message} {event.metadata.resource_version}")
        if first_event_version is None or resource_version < first_event_version:
            first_event_version = resource_version
        if last_event_version is None or resource_version > last_event_version:
            last_event_version = resource_version
    print(f"\tNumber of items retrieved: {len(events.items)}")
    print(f"\tFirst Event Version: {first_event_version}")
    print(f"\tLast Event Version: {last_event_version}")
    return last_event_version

print("[Main]: init")
last_resource_version = pull_k8s_events(0)
new_resource_version = int(last_resource_version) - 50
print(f"[Main]: new resource version: {new_resource_version}")
pull_k8s_events(new_resource_version)
print("Ended.")
Anything else we need to know?: I tried this against kind 1.27 and EKS 1.28.
Environment:
- Kubernetes version (kubectl version): v1.28.2
- OS (e.g., MacOS 10.13.6): Ubuntu Linux 22.04 LTS
- Python version (python --version): Python 3.10.12
- Python client version (pip list | grep kubernetes): 28.1.0
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.