
Piped CrashLoopBackOff

Open · tetsuya28 opened this issue on Feb 09, 2021 · 3 comments

Hi there,

What happened: Piped went into CrashLoopBackOff. I was following the quickstart, and the crash happened after I deployed piped with the following command, with my config values (such as FORKED_REPO_URL and YOUR_PIPED_SECRET_KEY) filled in:

helm -n pipecd install piped ./manifests/piped \
  --values ./quickstart/piped-values.yaml \
  --set secret.pipedKey.data=YOUR_PIPED_SECRET_KEY
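(For reference, a minimal way to check that the release came up; the deployment name piped is an assumption based on the release name above:)

kubectl -n pipecd get pods
kubectl -n pipecd logs deploy/piped --tail=100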

Then I got the following errors in the piped pod:

all drift detectors of 1 providers have been stopped
waiting for stopping all planners and schedulers
application store has been stopped
stats reporter has been stopped
flush all logs before stopping
flushing all of 0 stage persisters
stopping admin server
event store has been stopped
deployment store has been stopped
cloning examples for the first time    {"repo-id": "examples", "remote": "https://github.com/tetsuya28/examples.git", "repo-cache-path": "/tmp/gitcache170297730/examples"}
no app state of kubernetes application 5d733386-f5ea-4f70-9e52-d4ff336a3dc4 to report    {"cloud-provider": "kubernetes-default"}
no app state of kubernetes application 89cb6fab-fbca-4b4c-811b-ecbda3140d60 to report    {"cloud-provider": "kubernetes-default"}
no app state of kubernetes application ded58a83-6d14-4180-90dc-4749fd334abc to report    {"cloud-provider": "kubernetes-default"}
app live state reporter has been stopped    {"cloud-provider": "kubernetes-default"}
command was failed 1 times, sleep 1s before retrying command    {"repo-id": "examples", "remote": "https://github.com/tetsuya28/examples.git", "repo-cache-path": "/tmp/gitcache170297730/examples"}
all live state reporters of 1 providers have been stopped
log persister has been stopped
controller has been stopped
command store has been stopped
command was failed 2 times, sleep 1s before retrying command    {"repo-id": "examples", "remote": "https://github.com/tetsuya28/examples.git", "repo-cache-path": "/tmp/gitcache170297730/examples"}
command was failed 3 times, sleep 1s before retrying command    {"repo-id": "examples", "remote": "https://github.com/tetsuya28/examples.git", "repo-cache-path": "/tmp/gitcache170297730/examples"}
failed to clone from remote    {"repo-id": "examples", "remote": "https://github.com/tetsuya28/examples.git", "repo-cache-path": "/tmp/gitcache170297730/examples", "out": "", "error": "context canceled"}
github.com/pipe-cd/pipe/pkg/git.(*client).Clone
    pkg/git/client.go:114
github.com/pipe-cd/pipe/pkg/app/piped/trigger.(*Trigger).Run
    pkg/app/piped/trigger/trigger.go:124
github.com/pipe-cd/pipe/pkg/app/piped/cmd/piped.(*piped).run.func15
    pkg/app/piped/cmd/piped/piped.go:337
golang.org/x/sync/errgroup.(*Group).Go.func1

I can git clone from the k8s node that runs the piped pod.
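(In case pod networking matters here, a minimal sketch for testing the clone from inside the cluster rather than from the node; the throwaway pod name and the alpine/git image are just illustrative:)

kubectl -n pipecd run git-test --rm -it --restart=Never \
  --image=alpine/git -- clone https://github.com/tetsuya28/examples.git /tmp/examples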

What you expected to happen:

How to reproduce it:

Environment:

  • piped version: v0.9.5-63-g60d25d7
  • control-plane version:
  • Others:
    • Kubernetes Cluster built by kubeadm with calico
▶  kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T21:51:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

Sincerely.

tetsuya28 · Feb 09 '21 16:02

@tetsuya28 Hello. Thank you for reporting this. Let me check it.

nghialv · Feb 10 '21 04:02

@tetsuya28 Hello. Sorry for my late response. (I had a long vacation. 🏂 )

all drift detectors of 1 providers have been stopped
waiting for stopping all planners and schedulers
application store has been stopped
stats reporter has been stopped
flush all logs before stopping
flushing all of 0 stage persisters
stopping admin server
event store has been stopped
deployment store has been stopped

As the log messages show, before it hit the failed to clone from remote error, it looks like all components of piped were being forced to shut down. So I think the piped pod itself was probably being terminated. Do you have any information about why the pod was terminated? (kubectl describe pod would help.)
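For example, something like this usually shows whether the pod was OOMKilled, evicted, or failed its probes (the namespace and label selector here are assumptions, so adjust them to your install):

kubectl -n pipecd describe pod -l app.kubernetes.io/name=piped
kubectl -n pipecd get events --sort-by=.lastTimestamp
kubectl -n pipecd logs -l app.kubernetes.io/name=piped --previous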

nghialv · Feb 24 '21 11:02

@tetsuya28 Hi there, a lot has changed since that version. Could you give it a try with the latest version of pipecd? 🙏
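Something along these lines should work to retry, reusing the quickstart values from your original command (the chart path and flags may have changed in the latest docs, so treat this as a sketch):

helm -n pipecd upgrade --install piped ./manifests/piped \
  --values ./quickstart/piped-values.yaml \
  --set secret.pipedKey.data=YOUR_PIPED_SECRET_KEY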

khanhtc1202 · Oct 28 '21 03:10