kubewatch
Ability to get full resource information in event output
I'm not sure if this is feasible or not - or whether it's already supported and I've missed it ;)
From what I can see, Event.Message only outputs a brief statement saying whether a particular resource has been updated.
What I would love is the full detail of the resource that has been updated - for example, the full JSON of the resource (essentially the output of kubectl get <resource> -o json)
My use-case is to monitor for particular resources being created, then inspect some attributes (labels, annotations, etc.), then perform some action. I'm thinking most of this would be done in my own webhook, but I would rely on kubewatch to give me the JSON of the modified resource.
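To make the use-case concrete, here is a minimal sketch of the webhook-side filtering this would enable. The payload shape and the `ci/build-version` annotation are made-up examples, assuming kubewatch delivered the full resource object rather than a brief message:

```python
import json

# Hypothetical payload: the full resource object a verbose kubewatch
# webhook might deliver (same shape as `kubectl get <resource> -o json`).
event = json.loads("""
{
  "kind": "Deployment",
  "metadata": {
    "name": "api",
    "namespace": "prod",
    "annotations": {"ci/build-version": "1.4.2"}
  }
}
""")

def should_act(resource, annotation="ci/build-version"):
    """Act only on resources carrying the annotation we care about."""
    annotations = resource.get("metadata", {}).get("annotations", {})
    return annotation in annotations

if should_act(event):
    version = event["metadata"]["annotations"]["ci/build-version"]
    print(f"new build deployed: {version}")
```

The point is that the filtering logic lives entirely in the receiver; kubewatch would only need to forward the object verbatim.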
Let me know what you think - there are certainly other ways of achieving this (e.g. writing my own controller)
Thanks
@nabadger I don't see any reason why adding something like a verbose option would be out of the question for different resources. The next question, of course, is whether we want everything from something like kubectl get po -o yaml or only a subset of that.
Once that's decided, it would likely come down to whether there is a common set of information we would want to use across objects, or whether each object has its own interesting pieces of information that we would want to pull. We would also probably need to wrestle with a good way to output this, as it could become fairly large, which would clutter up any of the downstream systems receiving it (HipChat, Slack, etc.)
Any updates on this request? I have a similar requirement and would like to get the entire JSON of the modified resource.
I did end up writing something similar to suit my needs, since capturing output is relatively easy these days via the python/go clients.
The fields I currently find of use are:
- Annotations (so I can extract build versions and/or pipeline build URLs)
- Container (names / image versions), so we know which versions have been deployed
- Replica ready states (so we can determine whether pods are up/down/rolling out)
If this was only available for a webhook, you would probably just want to pass everything through.
For things like Slack, you could take a minimalist approach (e.g. just reporting which version of the container was deployed).
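For reference, the three fields above can be pulled straight out of the object a verbose event would carry. A rough sketch, using an illustrative Deployment manifest (not from a real cluster):

```python
# Sample Deployment object, in the same shape as the Kubernetes API
# returns (`kubectl get deploy <name> -o json`). Values are made up.
deployment = {
    "metadata": {
        "name": "api",
        "annotations": {"ci/pipeline-url": "https://ci.example/123"},
    },
    "spec": {
        "replicas": 3,
        "template": {"spec": {"containers": [
            {"name": "api", "image": "registry.example/api:1.4.2"},
        ]}},
    },
    "status": {"readyReplicas": 2},
}

def summarize(dep):
    """Extract annotations, container images, and replica readiness."""
    meta = dep.get("metadata", {})
    containers = (dep.get("spec", {})
                     .get("template", {})
                     .get("spec", {})
                     .get("containers", []))
    ready = dep.get("status", {}).get("readyReplicas", 0)
    desired = dep.get("spec", {}).get("replicas", 0)
    return {
        "annotations": meta.get("annotations", {}),
        "images": {c["name"]: c["image"] for c in containers},
        "ready": f"{ready}/{desired}",
    }

summary = summarize(deployment)
print(summary)
```

A webhook would forward the whole object; a Slack handler could send only something like `summary["ready"]` and the image versions.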
Thanks! I am looking to configure only the webhook and would like the entire JSON; I can parse it later. Were you able to modify this project locally to meet that requirement?
I didn't do it on this project unfortunately.
Hi! Any updates on this? It seems like a great feature that I would really like to have.
I've forked and implemented this and would like to merge my improvements back into kubewatch. Is this project still maintained? I would love to discuss the best way to handle this before opening a PR. I'm currently modifying the existing webhook, but it is probably best to add a totally new handler so that I can change the format entirely without breaking backwards compatibility. For my use case, I would like to send events in the CloudEvents format.
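For anyone unfamiliar with the format, wrapping a resource in a CloudEvents 1.0 JSON envelope is straightforward. A minimal sketch; the `type` and `source` values here are invented examples, not what any fork actually emits:

```python
import datetime
import json
import uuid

def to_cloudevent(resource, event_type="dev.kubewatch.resource.update"):
    """Wrap a Kubernetes object in a CloudEvents 1.0 envelope.

    specversion, id, source, and type are the four required
    context attributes in the CloudEvents 1.0 spec.
    """
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": "/kubewatch",                 # example value
        "type": event_type,                     # example value
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": resource,                       # the full object payload
    }

ce = to_cloudevent({"kind": "Pod", "metadata": {"name": "web-1"}})
print(json.dumps(ce, indent=2))
```

Carrying the full object in `data` keeps the envelope format stable even if the payload schema varies per resource kind.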
To follow up, I have a working version of this running on a very large Kubernetes cluster in production. Is anyone else interested in this?
If anyone is still interested in this, you can do it with an open-source project I wrote: https://docs.robusta.dev/master/
Can you share your implementation?
@aantn I'm interested in this!
@ghostsquad happily! Going to continue the discussion here: https://github.com/robusta-dev/robusta/issues/213