patst
@ywk253100 thanks for the advice. We managed to implement a `RestoreItemAction` that edits the pod spec and removes the init container, sidecar, and Envoy configuration volume during restore (https://github.com/patst/velero-plugin-osm-prune)...
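The core of the plugin is really just filtering the injected pieces back out of the pod spec. As a rough sketch (the container and volume names below are OSM's defaults and are assumptions, not necessarily exactly what the plugin matches on):

```go
package osmprune

import (
	corev1 "k8s.io/api/core/v1"
)

// pruneOSMInjection strips the pieces the OSM webhook injected at backup time
// so the restored pod gets re-injected cleanly. Names are OSM defaults and may
// differ depending on the OSM version/configuration.
func pruneOSMInjection(spec *corev1.PodSpec) {
	containers := spec.Containers[:0]
	for _, c := range spec.Containers {
		if c.Name != "envoy" { // injected Envoy sidecar
			containers = append(containers, c)
		}
	}
	spec.Containers = containers

	initContainers := spec.InitContainers[:0]
	for _, c := range spec.InitContainers {
		if c.Name != "osm-init" { // iptables init container
			initContainers = append(initContainers, c)
		}
	}
	spec.InitContainers = initContainers

	volumes := spec.Volumes[:0]
	for _, v := range spec.Volumes {
		if v.Name != "envoy-bootstrap-config-volume" { // Envoy bootstrap config volume
			volumes = append(volumes, v)
		}
	}
	spec.Volumes = volumes
}
```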
It would be interesting to have a trace log for the cases at https://github.com/openservicemesh/osm/blob/8fd236e8e104279b4d951a32720e06f4257fd80a/pkg/k8s/announcement_handlers.go#L71 where a conflict error is raised. We don't know whether something else is updating the Secret objects in parallel...
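In the meantime, a sketch of how the Secret update could be wrapped to surface that (this is not OSM's actual code, just client-go's standard conflict-retry pattern with an extra trace line):

```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updateSecretWithTrace re-reads the Secret, applies mutate, and retries on
// conflicts. The trace line shows which resourceVersion was stale, i.e. that
// another writer updated the Secret between our Get and Update.
func updateSecretWithTrace(ctx context.Context, kc kubernetes.Interface, ns, name string, mutate func(*corev1.Secret)) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		s, err := kc.CoreV1().Secrets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		mutate(s)
		_, err = kc.CoreV1().Secrets(ns).Update(ctx, s, metav1.UpdateOptions{})
		if apierrors.IsConflict(err) {
			log.Printf("conflict updating secret %s/%s at resourceVersion %s, retrying", ns, name, s.ResourceVersion)
		}
		return err
	})
}
```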
Hi @alexweininger, thanks for the fast answer. We are working in a very restricted environment and cannot access any resources without a proxy. I did try out the new authentication system...
> > I'm now testing with an explicit deny for ssh-rsa in the ssh config of the repo server, the opposite of what is described here: https://argo-cd.readthedocs.io/en/stable/operator-manual/upgrading/2.1-2.2/#workaround
> >
> > ```...
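For reference, a sketch of what such an explicit deny can look like in the repo server's ssh_config (not the exact config from the quoted comment; option names and +/- syntax depend on the OpenSSH version in the image):

```
Host *
    # Remove ssh-rsa from the accepted algorithms instead of adding it back in
    PubkeyAcceptedKeyTypes -ssh-rsa
    HostKeyAlgorithms -ssh-rsa
```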
https://devblogs.microsoft.com/devops/ssh-rsa-deprecation/

According to their schedule, the brownout windows will increase to 8 and 12 hours per day over the next week(s). That will be very annoying to sit...
We see that behaviour quite often as well when using the grafana-provider, even without Pod restarts. In our observations it often happens when bursts of resources are synchronized (created). We see...
/reopen

Still relevant for us.
I stumbled across this as well. We have our own Prometheus installation and install the linkerd-control-plane chart using Helm. If the heartbeat job is enabled (the default), all jobs fail because of...
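A possible workaround would be to switch the heartbeat off via the chart values; a minimal sketch, assuming the chart version in use exposes the `disableHeartBeat` value like the upstream linkerd2 chart does:

```yaml
# values.yaml passed to the linkerd-control-plane chart (sketch; assumes
# the disableHeartBeat value exists in the chart version in use)
disableHeartBeat: true
```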
We have the same issue. One observation to add that could point to your second idea: if we configure the `provider` in Terraform with the attribute `database=""`, the role deletion...
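For reference, roughly what that provider block looks like (a sketch with placeholder host/credentials; shown here for the postgresql provider as an assumption, adjust to the provider actually in use):

```hcl
provider "postgresql" {
  host     = "db.example.internal" # placeholder
  username = "terraform"           # placeholder
  password = var.db_password       # placeholder
  database = ""                    # the empty database attribute mentioned above
}
```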