Konstantin Nosov

Results 14 comments of Konstantin Nosov

> Maybe the number of levels to display could be an option in argocd-cm? Depending on the use case, displaying the second level may be required or not, I believe...

You may set an absolute URL with `page.edit_url`, e.g. in the `on_page_context` handler of the plugin.
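A minimal sketch of that approach, assuming a standard MkDocs `BasePlugin` subclass; the repository URL, branch, and plugin class name are illustrative:

```
from mkdocs.plugins import BasePlugin


class AbsoluteEditUrlPlugin(BasePlugin):
    """Illustrative plugin that points every page's edit link at an absolute URL."""

    def on_page_context(self, context, page, config, nav):
        # page.file.src_path is the page's source path relative to docs_dir;
        # the base URL and branch below are placeholders for your own repo.
        page.edit_url = (
            "https://github.com/example-org/example-docs/edit/main/docs/"
            + page.file.src_path
        )
        return context
```

The plugin would still need to be registered under the `mkdocs.plugins` entry point and enabled in `mkdocs.yml`.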

I see. Looks like the only option I have is using an instance of ClusterSecretStore per project with `conditions[0].namespaces=[project namespace]`. Not all project-specific resources (secret store) reside in the project namespace...
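A minimal sketch of such a per-project ClusterSecretStore, assuming the external-secrets `v1beta1` API and a Vault provider; the store name, Vault address, and project namespace are illustrative:

```
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: project-a-store
spec:
  # restrict which namespaces may reference this cluster-scoped store
  conditions:
    - namespaces:
        - project-a
  provider:
    vault:
      server: "https://vault.example.com"
      path: secret
      version: v2
      auth:
        tokenSecretRef:
          # cluster-scoped store, so a namespace may be given here
          name: vault-token
          namespace: project-a
          key: token
```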

Yep, but the `namespace` field's comment says "Ignored if referent is not cluster-scoped.", so in a SecretStore only a secret from the same namespace may be referenced.
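For contrast, a sketch of the namespaced variant, where the token secret is resolved from the store's own namespace and any `namespace` field on the reference would be ignored (again with illustrative names):

```
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: project-a-store
  namespace: project-a
spec:
  provider:
    vault:
      server: "https://vault.example.com"
      path: secret
      version: v2
      auth:
        tokenSecretRef:
          # namespaced store: resolved in project-a; a namespace field
          # here would be ignored
          name: vault-token
          key: token
```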

`vip_internal` is just an extra interface on the k3s node VM; it has a dedicated VLAN assigned. Without `enable_service_security` everything works fine. I believe the issue is in the iptables rules added by this...
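One way to check that hypothesis is to dump the INPUT rules that reference the VIP; a sketch, using the VIP address that appears in the next comment:

```
# list INPUT rules mentioning the service VIP (192.168.114.50 is the
# address from the example below); iptables-legacy matches the variant
# manipulated on this node
sudo iptables-legacy -S INPUT | grep 192.168.114.50
```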

Seems to work (I took a wider CIDR, 10.0.0.0/8, to avoid checking the exact pod subnet):

```
ubuntu@k3s-worker02:~$ sudo iptables-legacy -D INPUT -d 192.168.114.50 -s 10.0.0.0/8 -j ACCEPT
iptables: Bad rule...
```
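The corresponding insert, as a sketch under the same assumptions (192.168.114.50 is the service VIP, 10.0.0.0/8 covers the pod subnet):

```
# insert the ACCEPT ahead of the rules added by enable_service_security,
# so pod-to-VIP traffic is allowed before it can be dropped
sudo iptables-legacy -I INPUT 1 -d 192.168.114.50 -s 10.0.0.0/8 -j ACCEPT
```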

In my case neither the VIP node nor the pods it hosts can access the VIP. The difference with your setup is likely in the node interfaces I have:

- eth0: management interface configured...

@Cellebyte thx for your comment - with it I realized I have `flannel-backend: "host-gw"` in the k3s config (as there is a VXLAN dedicated to flannel) and likely that is why there...
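For reference, that setting lives in the k3s server config; a minimal sketch, assuming the default config path:

```
# /etc/rancher/k3s/config.yaml (default k3s config location)
# host-gw routes pod traffic via the node network instead of the
# default vxlan encapsulation
flannel-backend: "host-gw"
```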

> just that it’s valid does not mean it works.

My point was that kube-vip is intended to work in different k8s setups; having flannel in host-gw is not a corner...

With `enable_service_security`, no IP works without custom iptables rules. I had to disable it and add rules to the firewall on the router.
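For completeness, a sketch of where that flag gets turned off, assuming kube-vip runs as a DaemonSet configured through environment variables; the manifest fragment and interface name are illustrative:

```
# fragment of the kube-vip DaemonSet container spec
env:
  - name: vip_interface
    value: "vip_internal"
  - name: enable_service_security
    value: "false"
```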