
i/o timeout errors in redis and argocd-repo-server PODs

Open Cloud-Mak opened this issue 4 years ago • 79 comments

Hi All,

I am exploring Argo CD. It's quite a neat project. I have deployed Argo CD on a Kubernetes 1.17 cluster (1 master, 2 workers) running over 3 LXD containers. I could use other stuff like MetalLB, ingress, Rancher etc. fine with this cluster.

For some reason, my Argo CD isn't working the expected way. I was able to get the Argo CD UI login working by using the bypass method in bug 4148 that I reported earlier.

Here are the services in the argocd namespace:

NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/argocd-dex-server       ClusterIP   10.102.198.189   <none>        5556/TCP,5557/TCP,5558/TCP   25h
service/argocd-metrics          ClusterIP   10.104.80.68     <none>        8082/TCP                     25h
service/argocd-redis            ClusterIP   10.105.201.92    <none>        6379/TCP                     25h
service/argocd-repo-server      ClusterIP   10.98.76.94      <none>        8081/TCP,8084/TCP            25h
service/argocd-server           NodePort    10.101.169.46    <none>        80:32046/TCP,443:31275/TCP   25h
service/argocd-server-metrics   ClusterIP   10.107.61.179    <none>        8083/TCP                     25h

After I got my UI working, I tried creating a new sample project from the GUI, but it failed. Below are the logs from argocd-server during that time:

time="2020-08-27T09:22:21Z" level=info msg="received unary call /repository.RepositoryService/List" grpc.method=List grpc.request.claims="{\"iat\":1598519577,\"iss\":\"argocd\",\"nbf\":1598519577,\"sub\":\"admin\"}" grpc.request.content= grpc.service=repository.RepositoryService grpc.start_time="2020-08-27T09:22:21Z" span.kind=server system=grpc
time="2020-08-27T09:22:21Z" level=info msg="finished unary call with code OK" grpc.code=OK grpc.method=List grpc.service=repository.RepositoryService grpc.start_time="2020-08-27T09:22:21Z" grpc.time_ms=0.318 span.kind=server system=grpc
time="2020-08-27T09:22:21Z" level=info msg="finished unary call with code OK" grpc.code=OK grpc.method=List grpc.service=project.ProjectService grpc.start_time="2020-08-27T09:22:21Z" grpc.time_ms=3.441 span.kind=server system=grpc
time="2020-08-27T09:23:52Z" level=info msg="received unary call /repository.RepositoryService/ListApps" grpc.method=ListApps grpc.request.claims="{\"iat\":1598519577,\"iss\":\"argocd\",\"nbf\":1598519577,\"sub\":\"admin\"}" grpc.request.content="repo:\"https://github.com/Cloud-Mak/Demo_ArgoCD.git\" revision:\"HEAD\" " grpc.service=repository.RepositoryService grpc.start_time="2020-08-27T09:23:52Z" span.kind=server system=grpc
_time="2020-08-27T09:26:39Z" level=warning msg="finished unary call with code Unavailable" error="rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp 10.98.76.94:8081: i/o timeout\"" grpc.code=Unavailable grpc.method=ListApps grpc.service=repository.RepositoryService grpc.start_time="2020-08-27T09:23:52Z" grpc.time_ms=167124.3 span.kind=server system=grpc_
time="2020-08-27T09:28:16Z" level=info msg="Alloc=10005 TotalAlloc=1978587 Sys=71760 NumGC=257 Goroutines=158"
time="2020-08-27T09:28:31Z" level=info msg="received unary call /repository.RepositoryService/GetAppDetails" grpc.method=GetAppDetails grpc.request.claims="{\"iat\":1598519577,\"iss\":\"argocd\",\"nbf\":1598519577,\"sub\":\"admin\"}" grpc.request.content="source:<repoURL:\"https://github.com/Cloud-Mak/Demo_ArgoCD.git\" path:\"y\" targetRevision:\"HEAD\" chart:\"\" > " grpc.service=repository.RepositoryService grpc.start_time="2020-08-27T09:28:31Z" span.kind=server system=grpc
2020/08/27 09:28:48 proto: tag has too few fields: "-"
time="2020-08-27T09:28:48Z" level=info msg="received unary call /application.ApplicationService/Create" grpc.method=Create grpc.request.claims="{\"iat\":1598519577,\"iss\":\"argocd\",\"nbf\":1598519577,\"sub\":\"admin\"}" grpc.request.content="application:<TypeMeta:<kind:\"\" apiVersion:\"\" > metadata:<name:\"app1\" generateName:\"\" namespace:\"\" selfLink:\"\" uid:\"\" resourceVersion:\"\" generation:0 creationTimestamp:<0001-01-01T00:00:00Z> clusterName:\"\" > spec:<source:<repoURL:\"https://github.com/Cloud-Mak/Demo_ArgoCD.git\" path:\"yamls\" targetRevision:\"HEAD\" chart:\"\" > destination:<server:\"https://kubernetes.default.svc\" namespace:\"default\" > project:\"default\" > status:<sync:<status:\"\" comparedTo:<source:<repoURL:\"\" path:\"\" targetRevision:\"\" chart:\"\" > destination:<server:\"\" namespace:\"\" > > revision:\"\" > health:<status:\"\" message:\"\" > sourceType:\"\" summary:<> > > " grpc.service=application.ApplicationService grpc.start_time="2020-08-27T09:28:48Z" span.kind=server system=grpc
time="2020-08-27T09:31:11Z" level=info msg="received unary call /repository.RepositoryService/ListApps" grpc.method=ListApps grpc.request.claims="{\"iat\":1598519577,\"iss\":\"argocd\",\"nbf\":1598519577,\"sub\":\"admin\"}" grpc.request.content="repo:\"https://github.com/Cloud-Mak/Demo_ArgoCD.git\" revision:\"HEAD\" " grpc.service=repository.RepositoryService grpc.start_time="2020-08-27T09:31:11Z" span.kind=server system=grpc
time="2020-08-27T09:31:11Z" level=info msg="received unary call /repository.RepositoryService/GetAppDetails" grpc.method=GetAppDetails grpc.request.claims="{\"iat\":1598519577,\"iss\":\"argocd\",\"nbf\":1598519577,\"sub\":\"admin\"}" grpc.request.content="source:<repoURL:\"https://github.com/Cloud-Mak/Demo_ArgoCD.git\" path:\"yamls\" targetRevision:\"HEAD\" chart:\"\" > " grpc.service=repository.RepositoryService grpc.start_time="2020-08-27T09:31:11Z" span.kind=server system=grpc
time="2020-08-27T09:31:29Z" level=info msg="finished unary call with code InvalidArgument" error="rpc error: code = InvalidArgument desc = application spec is invalid: InvalidSpecError: Unable to get app details: rpc error: code = DeadlineExceeded desc = context deadline exceeded" grpc.code=InvalidArgument grpc.method=Create grpc.service=application.ApplicationService grpc.start_time="2020-08-27T09:28:48Z" grpc.time_ms=161011.11 span.kind=server system=grpc
time="2020-08-27T09:31:33Z" level=warning msg="finished unary call with code DeadlineExceeded" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" grpc.code=DeadlineExceeded grpc.method=GetAppDetails grpc.service=repository.RepositoryService grpc.start_time="2020-08-27T09:28:31Z" grpc.time_ms=182001.84 span.kind=server system=grpc
time="2020-08-27T09:31:33Z" level=info msg="received unary call /repository.RepositoryService/GetAppDetails" grpc.method=GetAppDetails grpc.request.claims="{\"iat\":1598519577,\"iss\":\"argocd\",\"nbf\":1598519577,\"sub\":\"admin\"}" grpc.request.content="source:<repoURL:\"https://github.com/Cloud-Mak/Demo_ArgoCD.git\" path:\"y\" targetRevision:\"HEAD\" chart:\"\" > " grpc.service=repository.RepositoryService grpc.start_time="2020-08-27T09:31:33Z" span.kind=server system=grpc
time="2020-08-27T09:33:31Z" level=warning msg="finished unary call with code Unavailable" error="rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp 10.98.76.94:8081: i/o timeout\"" grpc.code=Unavailable grpc.method=ListApps grpc.service=repository.RepositoryService grpc.start_time="2020-08-27T09:31:11Z" grpc.time_ms=140004.14 span.kind=server system=grpc

I even tried creating the app the declarative way. I created a YAML manifest and applied it with kubectl apply -f. This created an app visible in the GUI, but it was never deployed. The health status eventually became healthy, but the sync status remained unknown.
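For reference, a sketch of the kind of Application manifest described here, with the spec fields taken from the Create call in the logs above (the metadata namespace is an assumption):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app1
  namespace: argocd   # assumed; Applications usually live in the Argo CD namespace
spec:
  project: default
  source:
    repoURL: https://github.com/Cloud-Mak/Demo_ArgoCD.git
    path: yamls
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: default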

From the GUI, I can see the below errors under application conditions, one after another:

ComparisonError
rpc error: code = DeadlineExceeded desc = context deadline exceeded

ComparisonError
rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.98.76.94:8081: i/o timeout"

When I tried deleting the app from the GUI, it got stuck deleting, with the below errors visible under events in the GUI:

DeletionError
dial tcp 10.105.201.92:6379: i/o timeout
Unable to load data: dial tcp 10.105.201.92:6379: i/o timeout
Unable to delete application resources: dial tcp 10.105.201.92:6379: i/o timeout

As of now, nothing is working for me in Argo CD. I am clueless as to what to do next.

Cloud-Mak avatar Aug 27 '20 09:08 Cloud-Mak

Unable to delete application resources: dial tcp 10.105.201.92:6379: i/o timeout

This is an indication that Argo CD cannot talk to the k8s API server and I think this may be environmental. Can you confirm the application controller is able to reach the managed cluster's API server?

jessesuen avatar Aug 27 '20 20:08 jessesuen

Can you confirm the application controller is able to reach the managed cluster's API server?

Hi, thanks for the reply. Can you tell me how exactly I do that?

Cloud-Mak avatar Aug 31 '20 07:08 Cloud-Mak

Just asking for a method to verify, because I could kubectl exec into the application-controller pod. It uses the non-root user "argocd" for login in the pod. It's a Debian buster container, where I can't install ping or even sudo (to install ping). The plan was to ping the kube API server IP (which is the K8s master IP) to see if there is communication between the two.

argocd@argocd-application-controller-d9d496bdc-hcv7t:~$ cat /etc/os-release 
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"

Cloud-Mak avatar Aug 31 '20 08:08 Cloud-Mak

Hello, I have this same issue. How were you able to fix it?

Leke-Ariyo avatar Mar 16 '21 06:03 Leke-Ariyo

I have this same issue, how were you able to fix it?

XinCai avatar Apr 20 '21 23:04 XinCai

I saw the same issue in v2.0.1 as well; restarting all the pods fixed it, but I'm not sure what the cause of it is.

vikas027 avatar Apr 26 '21 07:04 vikas027

I'm facing the same issue.

I tried to delete ns argocd && kubectl apply ... many times, trying with versions 2.0.0, 2.0.1, 2.0.2 and 2.0.3.

The result is always the same. It seems that the application-controller cannot connect to redis:

time="2021-06-08T22:51:22Z" level=info msg="Processing all cluster shards"
time="2021-06-08T22:51:22Z" level=info msg="appResyncPeriod=3m0s"
time="2021-06-08T22:51:22Z" level=info msg="Application Controller (version: v2.0.2+9a7b0bc, built: 2021-05-20T19:30:25Z) starting (namespace: argocd)"
time="2021-06-08T22:51:22Z" level=info msg="Starting configmap/secret informers"
time="2021-06-08T22:51:22Z" level=info msg="Configmap/secret informer synced"
time="2021-06-08T22:51:22Z" level=info msg="Ignore status for CustomResourceDefinitions"
time="2021-06-08T22:51:22Z" level=info msg="0xc000186a20 subscribed to settings updates"
time="2021-06-08T22:51:22Z" level=info msg="Starting clusterSecretInformer informers"
time="2021-06-08T22:51:23Z" level=info msg="Notifying 1 settings subscribers: [0xc000186a20]"
time="2021-06-08T22:51:23Z" level=info msg="Ignore status for CustomResourceDefinitions"
time="2021-06-08T22:51:42Z" level=warning msg="Failed to save clusters info: dial tcp 10.43.70.220:6379: i/o timeout"
time="2021-06-08T22:52:13Z" level=warning msg="Failed to save clusters info: dial tcp 10.43.70.220:6379: i/o timeout"
time="2021-06-08T22:52:33Z" level=warning msg="Failed to save clusters info: dial tcp 10.43.70.220:6379: i/o timeout"
time="2021-06-08T22:52:53Z" level=warning msg="Failed to save clusters info: dial tcp 10.43.70.220:6379: i/o timeout"
time="2021-06-08T22:53:13Z" level=warning msg="Failed to save clusters info: dial tcp 10.43.70.220:6379: i/o timeout"
time="2021-06-08T22:53:33Z" level=warning msg="Failed to save clusters info: dial tcp 10.43.70.220:6379: i/o timeout"

Restarting containers does not help.

I can confirm that I installed just the provided manifests for the mentioned versions. Moreover, argocd-redis:6379 is reachable from everywhere in the cluster (given you provide ingress to it with a NetworkPolicy) and works fine.
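A sketch of the kind of ingress policy meant here (the name and pod selector labels are illustrative; adjust them to the labels in your manifests):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-argocd-redis-ingress   # illustrative name
  namespace: argocd
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: argocd-redis   # assumed label from the install manifests
  policyTypes:
  - Ingress
  ingress:
  - ports:            # no "from" clause: allow traffic from any source on 6379
    - port: 6379
      protocol: TCP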

The apps are stuck in an unknown state, although some of them eventually become healthy.

I'm totally clueless about the possible root cause.

Until this afternoon I had a working installation of v2.0.2, which I deployed a few days ago as the last step of a migration path that started from v1.5.

I noticed in the last few days (through Prometheus/Grafana) that the application-controller was requiring 3x the RAM and CPU compared to what it used to require before I switched to v2. Initially I thought it could be legitimate behavior, possibly due to refactoring/new features. I eventually became suspicious when I noticed the longer refresh times for apps, looked into redis (which is deployed as an ephemeral container) and realized it was empty. Then I discovered the aforementioned logs in the application-controller.

I then decided to deploy v2.0.3 hoping it could solve the issue, but from that point onward Argo definitely ceased to work correctly.

Please help. Thanks.

gtriggiano avatar Jun 08 '21 23:06 gtriggiano

All applications have this error shown in the UI (screenshot: 2021-06-09 01:35).

I also tried setting --repo-server-timeout-seconds to values like 420 or 600, but had no success.
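One way to pass that flag is via the application-controller container args, roughly like this fragment of the StatefulSet (a sketch; the value is an example):

  containers:
  - name: argocd-application-controller
    command:
    - argocd-application-controller
    - --repo-server-timeout-seconds
    - "600"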

gtriggiano avatar Jun 08 '21 23:06 gtriggiano

I had the same errors as @gtriggiano. I replaced the image tag for the redis deployment with 6.2.4 in the helm chart (note: without the alpine suffix), and those errors disappeared.

redis:
  image:
    tag: '6.2.4'
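For reference, such a values override would typically be applied with a helm upgrade of the community chart (release name and values file name here are examples):

helm repo add argo https://argoproj.github.io/argo-helm
helm upgrade --install argocd argo/argo-cd -n argocd -f values.yaml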

jrhoward avatar Jun 09 '21 09:06 jrhoward

Thanks for the hint @jrhoward, but unfortunately this did not solve the issue for me 😞

It seems that none of the Argo services can talk to redis.

argocd-server has logs like:

time="2021-06-09T09:42:15Z" level=warning msg="Failed to resync revoked tokens. retrying again in 1 minute: dial tcp 10.43.248.24:6379: i/o timeout"

These are the startup logs of argocd-application-controller:

time="2021-06-09T09:40:06Z" level=info msg="Processing all cluster shards"
time="2021-06-09T09:40:06Z" level=info msg="appResyncPeriod=3m0s"
time="2021-06-09T09:40:06Z" level=info msg="Application Controller (version: v2.0.3+8d2b13d, built: 2021-05-27T17:38:37Z) starting (namespace: argocd)"
time="2021-06-09T09:40:06Z" level=info msg="Starting configmap/secret informers"
time="2021-06-09T09:40:06Z" level=info msg="Configmap/secret informer synced"
time="2021-06-09T09:40:06Z" level=info msg="Ignore status for CustomResourceDefinitions"
time="2021-06-09T09:40:06Z" level=info msg="0xc00097b1a0 subscribed to settings updates"
time="2021-06-09T09:40:06Z" level=info msg="Refreshing app status (normal refresh requested), level (2)" application=drone
time="2021-06-09T09:40:06Z" level=info msg="Starting clusterSecretInformer informers"
time="2021-06-09T09:40:06Z" level=info msg="Ignore status for CustomResourceDefinitions"
time="2021-06-09T09:40:06Z" level=info msg="Comparing app state (cluster: https://kubernetes.default.svc, namespace: drone)" application=drone
time="2021-06-09T09:40:06Z" level=info msg="Start syncing cluster" server="https://kubernetes.default.svc"
W0609 09:40:06.719086       1 warnings.go:70] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0609 09:40:06.815954       1 warnings.go:70] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
time="2021-06-09T09:40:26Z" level=warning msg="Failed to save clusters info: dial tcp 10.43.248.24:6379: i/o timeout"
time="2021-06-09T09:40:56Z" level=warning msg="Failed to save clusters info: dial tcp 10.43.248.24:6379: i/o timeout"
time="2021-06-09T09:41:16Z" level=warning msg="Failed to save clusters info: dial tcp 10.43.248.24:6379: i/o timeout"
time="2021-06-09T09:41:36Z" level=warning msg="Failed to save clusters info: dial tcp 10.43.248.24:6379: i/o timeout"
time="2021-06-09T09:41:56Z" level=warning msg="Failed to save clusters info: dial tcp 10.43.248.24:6379: i/o timeout"
time="2021-06-09T09:42:16Z" level=warning msg="Failed to save clusters info: dial tcp 10.43.248.24:6379: i/o timeout"
time="2021-06-09T09:42:37Z" level=warning msg="Failed to save clusters info: dial tcp 10.43.248.24:6379: i/o timeout"
...

It also seems that the services cannot talk to each other. I found this in the argocd-server logs:

time="2021-06-09T08:58:15Z" level=warning msg="finished unary call with code Unavailable" error="rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp 10.43.55.98:8081: connect: connection refused\"" grpc.code=Unavailable grpc.method=GetAppDetails grpc.service=repository.RepositoryService grpc.start_time="2021-06-09T08:58:13Z" grpc.time_ms=2007.533 span.kind=server system=grpc

where 10.43.55.98 is the ClusterIP of the argocd-repo-server service.

I'm very puzzled.

gtriggiano avatar Jun 09 '21 09:06 gtriggiano

I spoke too soon. The errors are back.

jrhoward avatar Jun 09 '21 22:06 jrhoward

OK, on delving deeper into my issue, it was actually an SDN issue. I'm running on bare metal. Machines could not reach CoreDNS if they were not on the same machine; if they were on the same machine, they couldn't reach the Redis server if it was on another machine. So it was a mixture of DNS lookup failures and network connectivity problems to Redis.
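A quick way to check both symptoms from an ad-hoc pod (service name and port are the install defaults; busybox provides nslookup and nc):

# DNS lookup of the redis service from inside the cluster
kubectl -n argocd run nettest --rm -it --restart=Never --image=busybox -- nslookup argocd-redis
# raw TCP check against the redis service
kubectl -n argocd run nettest --rm -it --restart=Never --image=busybox -- nc -zv -w 5 argocd-redis 6379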

jrhoward avatar Jun 13 '21 01:06 jrhoward

In my case DNS was working fine and I was able to ping the Redis master, and I'm also running k8s on bare metal with Kubespray.

I had no luck restarting the pods either, and I had ArgoCD managed by ArgoCD itself, so I decided to follow this procedure (read carefully before doing anything):

  1. Read very carefully https://argoproj.github.io/argo-cd/operator-manual/disaster_recovery/ and create a backup of ArgoCD, just in case (a command sketch follows this list)
  2. Delete the ArgoCD StatefulSets and Deployments with: kubectl -n argocd delete deployments,statefulsets --all
  3. Recreate the missing ArgoCD resources using the GitOps repo of the cluster
  4. Restore the ArgoCD state using the procedure described at point 1
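A sketch of the export/import commands behind steps 1 and 4 (image tag and backup file name are examples; follow the linked doc for the exact commands for your version):

# backup (step 1) - run with kubeconfig access to the cluster
docker run -v ~/.kube:/home/argocd/.kube --rm quay.io/argoproj/argocd:v2.0.3 argocd-util export > backup.yaml
# restore (step 4)
docker run -i -v ~/.kube:/home/argocd/.kube --rm quay.io/argoproj/argocd:v2.0.3 argocd-util import - < backup.yaml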

That brought back ArgoCD and the applications were left intact in my cluster.

Regarding the error Failed to save clusters info, I have no clue about what the cause could be.

irizzant avatar Jun 28 '21 11:06 irizzant

I got the same error on v2.0.x too, and tried several versions; all got the same error. Finally I removed all of the NetworkPolicies, and then everything started working without any error. My guess is that the NetworkPolicies restrict network traffic by podSelector, but the controller and server connect to redis via the service IP on port 6379.

rexhsu1968 avatar Aug 02 '21 10:08 rexhsu1968

I've experienced a similar issue, where removing the NetworkPolicy for redis temporarily restored the connectivity. I restored the NetworkPolicy, then restarted the CNI agents on nodes running redis and argocd-server (cilium in my case), and connectivity was restored. I'd be cautious before restarting the cni agents. There was a blip in service communication (as expected). Proceed with caution.

erkerb4 avatar Aug 03 '21 14:08 erkerb4

I had been struggling with similar problems this week getting ArgoCD working in a small Vagrant/VirtualBox environment. I switched from Flannel to Calico and everything just magically started working.

jmaaks avatar Aug 12 '21 23:08 jmaaks

Yep, same here. Went from K3s Flannel to Calico and all issues are gone.

IgorGee avatar Aug 15 '21 03:08 IgorGee

I have the same issue even though I have no NetworkPolicy running in the cluster. I'm running Argo in a minikube k8s installation.

matteodamico avatar Aug 26 '21 15:08 matteodamico

I had the same issue and the cause of the problem was network policies

logileifs avatar Sep 23 '21 17:09 logileifs

I'm on a k3s ARM64 cluster. I've got the same error.

Like @jrhoward, I changed the Redis image:

redis:
  image:
    tag: '6.2.4'

Connection to the cluster works fine now.

nlamirault avatar Sep 29 '21 06:09 nlamirault

Hi, I am also facing the same problem, any solution yet?

azamafzaal avatar Dec 14 '21 15:12 azamafzaal

Given there were so many different issues, I don't believe the problem is with ArgoCD.

jrhoward avatar Dec 14 '21 23:12 jrhoward

Hi, I am also facing the same problem. Any new workarounds?

suseendare avatar Jan 16 '22 05:01 suseendare

I would like to add, regarding argocd: I have installed argocd with helm, and this issue occurred when I misused the values file with this section:

server:
  extraArgs:
    - --insecure

After removing it, all my problems were gone. That's super weird and not possible to figure out based on the redis error message. Hope this helps some of you.

gimpiron avatar Feb 03 '22 09:02 gimpiron

Removing Network Policies helped in my case. I'm using Weave with no other policies; the ArgoCD ones were the only NetworkPolicies there.

sebandgo avatar Feb 26 '22 09:02 sebandgo

In my case the problem was that terraform was overriding the default AWS EKS security groups (allow all), and so the server pod couldn't communicate with the redis pod. When I added the correct security groups everything started to work as expected.

To help yourself diagnose this problem use "kubectl logs pod/argocd-server...". Using that I could see that the server pod was timing out when trying to connect to the redis pod, and that helped me narrow it down.
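For example (deployment name as in the standard install; the grep is just a convenience):

kubectl -n argocd logs deploy/argocd-server --tail=200 | grep -i "i/o timeout"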

IT-Luka avatar Mar 14 '22 21:03 IT-Luka

We don't use Network Policies (yet), but this error occurs in our installation, and we haven't installed ArgoCD with helm (just a simple kubectl apply ...). Any idea how to fix this?

przemolb avatar Mar 21 '22 17:03 przemolb

Our cluster is using Weave Net as its CNI. I resolved the issue by deleting and reapplying the Weave Net CRDs.

delete

kubectl delete -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

apply

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

We recently reconfigured our local DNS, and I believe that could have been the cause of the problem.

EdwinWalela avatar Mar 31 '22 10:03 EdwinWalela

Running into the same problem on a fresh cluster:

export KUBECONFIG=/etc/kubernetes/admin.conf

Create an IPPool for Argo:

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: argcd-ipv6-ippool
spec:
  allowedUses:
  - Workload
  - Tunnel
  blockSize: 122
  cidr: 2a00:a000:1002:12::/64
  ipipMode: Never
  nodeSelector: all()
  vxlanMode: Never

kubectl create namespace argocd
kubectl annotate namespace argocd "cni.projectcalico.org/ipv6pools"='["argcd-ipv6-ippool"]'

then:

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.3.3/manifests/install.yaml

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo ""

After this I could log in to argocd, but the cluster was not connected, so I did:

Install the argocd CLI, then:

export KUBECONFIG=/etc/kubernetes/admin.conf
argocd --insecure login argocd-server.argocd.svc.k8s01.rcs-networks.com:443
argocd cluster add kubernetes-admin@kubernetes --in-cluster --upsert -y

After this the cluster is connected and shows the version number and so on, but if I try to add an application:

export KUBECONFIG=/etc/kubernetes/admin.conf
./argocd app create guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --dest-namespace default --dest-server https://kubernetes.default.svc/ --directory-recurse

I get:

FATA[0060] rpc error: code = InvalidArgument desc = application spec for guestbook is invalid: InvalidSpecError: repository not accessible: rpc error: code = DeadlineExceeded desc = context deadline exceeded

The repo is reachable. If I do:

kubectl -n argocd exec --stdin --tty argocd-repo-server-5569c7b657-2sj98 -- /bin/sh
cd /tmp
git clone https://github.com/argoproj/argocd-example-apps.git

it works

After deleting all NetworkPolicies from the argocd namespace, it was running fine...

ruben-herold avatar Apr 06 '22 09:04 ruben-herold

Hi, I did some testing, deleting all the policies step by step. The policy whose deletion resolved my problem was argocd-repo-server-network-policy.
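i.e.:

kubectl -n argocd delete networkpolicy argocd-repo-server-network-policy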

ruben-herold avatar Apr 06 '22 17:04 ruben-herold