Support status.conditions
Problem
There is a convention of using `status.conditions` for higher-level status reporting from a controller. Since it has to be a list of objects, it becomes harder to update a single element of the list with the current `patch` parameter. In addition, some of a condition's fields can be null, but `None` is treated as a deletion marker, so I'm getting a warning on every update of a resource. The convention also states that "Controllers should apply their conditions to a resource the first time they visit the resource".
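For reference, a typical conditions list (field names as in the API conventions / metav1.Condition; the values here are only illustrative) looks roughly like the snippet below. A JSON merge-patch can only replace such a list as a whole, not a single element of it:

status = {
    'conditions': [
        {
            'type': 'Ready',
            'status': 'False',  # "True" / "False" / "Unknown"
            'reason': 'StillProcessing',
            'message': 'Waiting for an external resource.',
            'lastTransitionTime': '2021-01-01T00:00:00Z',
        },
    ],
}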
Proposal
- I believe this should be a part of the framework, possibly as a separate package. Speaking in terms of handler arguments, I'm not sure whether `conditions` should be a part of `status` (`status.conditions`, `patch.status.conditions`) or whether it is worth an additional argument.
- For better usability, the list of objects should be converted into a dict before calling the handlers, and back into a list afterwards (see the conversion sketch after the snippet below).
- An additional helper function/decorator may help to initialize the conditions that an operator provides.
- An ability to update a resource's conditions at any point of handler execution may be very useful, especially when the handler takes a while.
- Since some resources may want a different set of fields than those in the convention, this may require at least two additional classes: (1) a base class with all the functions to do the work, and (2) a "default" condition which extends the base class and has all the fields from the convention. The second class should also help to update all timestamp fields (like the status transition time).
import kopf

@kopf.on.create('kopfexamples')
# Decorator ensures all conditions exist (sets them to default if they don't)
@kopf.conditions('kopfexamples', conditions=['Created', 'Ready'], factory=kopf.ConditionDefault)
def create_fn(spec, name, status, patch, conditions, **kwargs):
    conditions['Created'].set(status=False, reason="JustStarted").sync()
    # do some work
    conditions['Created'].set(status=True).sync()
    # do some work
    conditions['Ready'].set(status=True)
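A rough sketch of the list-to-dict conversion mentioned in the proposal, keyed by the conventional `type` field (the function names are illustrative only, not an existing API):

def conditions_to_dict(conditions):
    # [{'type': 'Ready', 'status': 'False'}, ...] -> {'Ready': {'type': 'Ready', 'status': 'False'}, ...}
    return {c['type']: c for c in conditions or []}

def conditions_to_list(conditions_by_type):
    # ... and back into the list form before patching the resource's status.
    return list(conditions_by_type.values())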
I would like to help with this feature, but I have limited knowledge of the internals and don't know if this is even feasible.
Checklist
- [x] Many users can benefit from this feature, it is not a one-time case
- [x] The proposal is related to the K8s operator framework, not to the K8s client libraries
Hello. Thanks for the proposal! Conceptually, I'm totally "for" that feature. The API/DSL needs some thorough discussion though — before putting it into the public promises of the framework.
Conditions were somewhere on the edge of my attention — as I never had any need for them. The only time I used conditions was to track that a pod finishes. As such, I probably cannot fully judge the proposed syntax and nuances — I will need some time to dive into the official docs and existing practices.
It would be good if you draft that with a few scenarios (aka "user stories") and code snippets that would solve those cases. I mean, both for filtering the handlers to execute and for configuring them to different resources.
The feature would fit well into the framework. Kopf already operates the finalizers for its own needs, it tracks the object diffs with its own annotations, so it can operate the conditions too.
Now, to the details. In the provided snippet, I see a somewhat non-typical use of handlers with lengthy work performed in them. Kopf's concept is that handlers are atomic units of work, they are either fully done or fully retried, and hopefully, they are short (i.e. take seconds, not minutes-hours; for those, there are daemons). If there are several units or steps of work to be performed, those should be probably separate handlers.
While this is not a problem per se and such an operator can work, this approach induces the .sync() design, which, I believe, is an API call to store the condition. I.e. the condition changes several times during a single atomic unit of work. Besides, this will be difficult to implement in async handlers.
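For comparison, a minimal sketch of how such lengthy work is usually expressed today, without any conditions, using the existing @kopf.daemon API (the resource name is taken from the snippet above):

import time
import kopf

@kopf.daemon('kopfexamples')
def long_running_fn(stopped, logger, **kwargs):
    # A daemon runs for as long as the resource exists and exits when the flag is set.
    while not stopped:
        # ... do one slice of the long-running work ...
        time.sleep(10)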
It is also not clear to me, why are the conditions declared on a handler. What will happen if there are 2 handlers for the same resource (maybe with different filters)? Can they have separate conditions? Can they share the conditions? User stories could clarify this confusion.
As a maybe bizarre branch of brainstorming, can conditions be made as a separate type of handlers? E.g. as in-memory indexes are done now (@kopf.index). Would it be expressive enough? Or is such an approach more confusing than helpful?
Going on with brainstorming, can handlers actually BE those conditions? If configured accordingly. Now, they dump their result to "status.{handle_id}", but this is a legacy from the dawn of the framework. Can on-creation/on-update/on-resume/etc handlers (each of them if there are many) be mapped to conditions somehow? I.e., not the imperative management of conditions, but rather declarative, delegating the actual management of values to the framework.
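Purely as an illustration of that declarative idea, something like the following hypothetical syntax (the `condition=` option does not exist; the mapping semantics are exactly what would need to be designed):

import kopf

@kopf.on.create('kopfexamples', condition='Created')  # hypothetical option
def create_fn(spec, **kwargs):
    pass  # success => Created=True; an exception => Created=False with reason/message

@kopf.on.update('kopfexamples', condition='Ready')  # hypothetical option
def update_fn(spec, **kwargs):
    pass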
All in all, I would be happy to help you design this feature so that it aligns with Kopf's other concepts and feel, and I will point to the places where it can be implemented.
Hi, thanks for the detailed reply! I'm still at the beginning of my K8s operators journey. All the things in the proposal part are just my thoughts based on a very limited view of the whole picture.
In the snippet I provided, I tried to cover all the items of the proposal; it doesn't relate to any of my use cases, and I also didn't intend to introduce a new type of handler or change the behavior of the existing ones. If the order of handlers is guaranteed, the `@kopf.conditions` decorator can easily be replaced with one which ensures all the required conditions exist. If a new handler type is added, it might be more expressive. One of the reasons to have a separate handler/decorator which runs before the others is that it helps to avoid checking whether some condition object is already there in the list.
Speaking about conditions, they are usually per resource, but as the convention says, it is better if the controller creates its conditions the first time it sees the resource (so a separate handler which ensures all conditions exist works better here). Nothing stops different handlers from updating different sets of a resource's conditions. Handlers' return values could also update conditions, but I don't think they can be matched 1:1. For example, each of on-creation/on-update/on-resume may need to update the `Ready` condition, e.g. because an external resource needed to handle these events is unavailable. Also, I'm not sure that a declarative approach can simplify anything. We might benefit if we could attach some metadata to an exception, so that it updates the specified condition with, for example, `status=False, reason="Whatever", message=str(err)` (see the sketch below).
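A hypothetical sketch of that exception idea (kopf.TemporaryError and its delay= argument exist today; the commented-out condition/reason arguments are imaginary and only illustrate the intent):

import kopf

@kopf.on.create('kopfexamples')
def create_fn(spec, **kwargs):
    external_ready = False  # stand-in for a real availability check
    if not external_ready:
        raise kopf.TemporaryError(
            "The external resource is not available yet.",
            delay=30,
            # condition='Ready', reason='ExternalResourceUnavailable',  # imaginary
        )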
Why do you think it would be difficult to implement `.sync()` in async handlers? Couldn't we call it as `await .sync()`? I guess if this function were declared as async, and the user were forced to make the handler async in order to use it, that would be fine.
In general, anything can add its own conditions to any resource, so the operator should preserve unknown conditions as-is (I'm not sure whether updating a single condition is feasible with the current JSON merge-patch). Potentially, users may want a different set of fields for their resources' conditions than the one stated in the convention. On the other hand, if the "standard" condition object is chosen, it makes sense to update the timestamp fields automatically.
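For illustration, a rough sketch of merging one condition by its `type` while preserving all other (possibly foreign) conditions; the actual read and write of the status is left out, and the field names follow the convention:

import datetime

def merge_condition(conditions, ctype, status, reason=None, message=None):
    by_type = {c.get('type'): dict(c) for c in conditions or []}  # list -> dict
    cond = by_type.setdefault(ctype, {'type': ctype})
    if cond.get('status') != status:
        cond['lastTransitionTime'] = datetime.datetime.utcnow().isoformat() + 'Z'
    cond['status'] = status
    if reason is not None:
        cond['reason'] = reason
    if message is not None:
        cond['message'] = message
    return list(by_type.values())  # dict -> list, with unknown conditions preserved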
@nolar any update on this issue? Looking to use conditions as well.
I need this functionality and am willing to code it. I have a bunch of example curl commands that show how to adjust the status. The status is protected from overwrites through the regular resource endpoint; you need to PATCH the `.../status` subresource to change it.
Here's some code that flips the status bit for a Strimzi KafkaTopic (my use case).
iso8601=`date -u +"%Y-%m-%dT%H:%M:%SZ"`
read -r -d '' PATCH <<EOF
{
  "status": {
    "conditions": [
      {
        "status": "True",
        "type": "Ready",
        "lastTransitionTime": "$iso8601"
      }
    ]
  }
}
EOF
curl -k -XPATCH \
  -H "Accept: application/json" \
  -H "Content-Type: application/merge-patch+json" \
  -H "User-Agent: $AGENT_STRING" \
  -H "Authorization: Bearer $TOKEN" \
  -d "$PATCH" \
  "$APISERVER/apis/kafka.strimzi.io/v1beta2/namespaces/$NAMESPACE/kafkatopics/$TOPIC/status"
If someone could help me figure out how to use the libraries available in kopf to do this in my controller, I'll be happy to code up a permanent solution in kopf.
I have it running. The function is below. This is all that I needed, but I'm happy to help discuss and improve the library.
import datetime

from kubernetes import client as k8s_client

customobj_api = k8s_client.CustomObjectsApi()
...
def ready_status(name, namespace, plural_type):
    # RFC3339/ISO8601 timestamp in UTC, with the 'Z' suffix as in the curl example above.
    iso8601utc = datetime.datetime.utcnow().isoformat() + 'Z'
    customobj_api.patch_namespaced_custom_object_status(
        name=name,
        group='kafka.strimzi.io',
        version='v1beta2',
        namespace=namespace,
        plural=plural_type,
        body={
            "status": {
                "conditions": [
                    {
                        "status": "True",
                        "type": "Ready",
                        "lastTransitionTime": iso8601utc,
                    }
                ]
            }
        },
    )
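For context, this is roughly how the helper above could be called from a kopf handler (the group/version/plural are the Strimzi ones from the snippets above; the handler body is only a placeholder):

import kopf

@kopf.on.create('kafka.strimzi.io', 'v1beta2', 'kafkatopics')
def mark_ready(name, namespace, **kwargs):
    # ... do the actual work for the topic, then flip the Ready condition ...
    ready_status(name=name, namespace=namespace, plural_type='kafkatopics')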