Kamil Krzywicki

Results: 20 comments of Kamil Krzywicki

There is now a 0.11.1 release on Docker Hub, but without a corresponding release/release notes.

The patch from https://github.com/jurobystricky/Netgear-A6210/issues/24 probably fixed this issue for me.

In case Kyverno is malfunctioning, it will block all lease updates. What will happen if core components like kube-controller-manager or cloud-controller-manager are not able to acquire a lease?
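For context: kube-controller-manager and cloud-controller-manager hold their leader-election lock as a coordination.k8s.io/v1 Lease and renew it with regular UPDATE calls, so a webhook that rejects Lease updates will eventually make them lose leadership. Below is a minimal Go sketch of that mechanism using client-go's leader-election helpers, not the components' actual code; the lease name is hypothetical, while the 15s/10s/2s timings match the documented defaults of those components.

```go
// Minimal leader-election sketch: acquire and renew a coordination.k8s.io Lease.
// If Lease updates are blocked (e.g. by a failing admission webhook), renewal
// misses RenewDeadline, OnStoppedLeading fires, and the component typically exits.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	hostname, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "example-controller", // hypothetical lease name
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // default --leader-elect-lease-duration
		RenewDeadline: 10 * time.Second, // renewals (Lease UPDATEs) must succeed within this window
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease, starting reconcile loops")
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				// This is the failure mode when lease updates are blocked:
				// leadership is given up and the process usually terminates.
				log.Fatal("lost lease, exiting")
			},
		},
	})
}
```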

I switched to my own solution (https://github.com/camaeel/vault-k8s-helper/). For bank-vaults, I noticed that scaling first to 1 pod and then to 3 helped it initialize properly.
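For reference, a rough Go/client-go sketch of that "scale to 1, then back to 3" step; the "vault" namespace and StatefulSet name and the fixed wait are assumptions for illustration, not something bank-vaults guarantees.

```go
// Scale a StatefulSet down to 1 replica, wait, then back to 3 via the scale subresource.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// setReplicas updates the StatefulSet's scale subresource to the given replica count.
func setReplicas(ctx context.Context, client kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := client.AppsV1().StatefulSets(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = client.AppsV1().StatefulSets(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Scale down so a single pod can come up and initialize cleanly...
	if err := setReplicas(ctx, client, "vault", "vault", 1); err != nil {
		log.Fatal(err)
	}
	// ...crudely wait for it to become ready (watching pod readiness would be nicer)...
	time.Sleep(2 * time.Minute)
	// ...then scale back up to the desired 3 replicas.
	if err := setReplicas(ctx, client, "vault", "vault", 3); err != nil {
		log.Fatal(err)
	}
}
```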

Is there an estimated time for when Longhorn 1.6 will be released?

@zimmertr I implemented a workaround for it using Mastercard's restapi provider:

```
resource "restapi_object" "cloud-controller-api-token" {
  object_id    = var.cloud-controller-tokenid
  path         = "/api2/json/access/users/${proxmox_virtual_environment_user.cloud-controller.user_id}/token/${var.cloud-controller-tokenid}"
  read_path    = "/api2/json/access/users/${proxmox_virtual_environment_user.cloud-controller.user_id}/token/${var.cloud-controller-tokenid}"
  create_path  = "/api2/json/access/users/${proxmox_virtual_environment_user.cloud-controller.user_id}/token/${var.cloud-controller-tokenid}"
  destroy_path...
```

It works fine when I remove all the groups from my OIDC except the one that is needed. So it is not an issue with `kubectl auth can-i`, rather...

Yes, all the groups were there.

You may give https://github.com/dbschenker/node-undertaker/ a try to handle such cases.

I think cluster-autoscaler has one limitation: if nodes are in an ASG with min=max, it won't terminate nodes that are not working properly. This case is addressed...