Kubernetes-container-service-GitLab-sample

Using the postgres yaml, I got pod 'CrashLoopBackOff'

Open charlie-charlie opened this issue 6 years ago • 9 comments

Basically I adopted the yaml with only small tweaks (which I strongly believe are not the cause). When I run kubectl describe, it shows the pod failed to start. I also tested other versions of the postgres image and got the same error. Did anyone run into the same issue? Thanks.
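For reference, the describe output in my next comment came from a command like this (the pod name and namespace are from my cluster):

kubectl describe pod dicro-postgresql-5b46dcbd8b-rvsd8 -n web-external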

charlie-charlie avatar Dec 05 '18 14:12 charlie-charlie

Here is the describe pod output

Name:           dicro-postgresql-5b46dcbd8b-rvsd8
Namespace:      web-external
Node:           ip-10-2-2-104.ec2.internal/10.2.2.104
Start Time:     Wed, 05 Dec 2018 09:57:49 -0500
Labels:         app=dicro-postgres
                pod-template-hash=1602876846
                tier=dicro-postgreSQL
Annotations:    <none>
Status:         Running
IP:             10.2.2.109
Controlled By:  ReplicaSet/dicro-postgresql-5b46dcbd8b
Containers:
  dicro-postgresql:
    Container ID:   docker://3bad56087a3dd535e7fe24f2fa52f20408ae0ee86a110a31b5afb2b59dc2f066
    Image:          postgres:9.6.2-alpine
    Image ID:       docker-pullable://postgres@sha256:f88000211e3c682e7419ac6e6cbd3a7a4980b483ac416a3b5d5ee81d4f831cc9
    Port:           5432/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 05 Dec 2018 09:58:09 -0500
      Finished:     Wed, 05 Dec 2018 09:58:09 -0500
    Ready:          False
    Restart Count:  1
    Environment:
      POSTGRES_USER:      gitlab
      POSTGRES_DB:        gitlabhq_production
      POSTGRES_PASSWORD:  gitlab
    Mounts:
      /var/lib/postgresql/data from postgresql (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-x8hbp (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  postgresql:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  dicro-postgres-claim
    ReadOnly:   false
  default-token-x8hbp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-x8hbp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                From                                 Message
  ----     ------                  ----               ----                                 -------
  Warning  FailedScheduling        38s (x3 over 39s)  default-scheduler                    pod has unbound PersistentVolumeClaims (repeated 4 times)
  Normal   Scheduled               36s                default-scheduler                    Successfully assigned dicro-postgresql-5b46dcbd8b-rvsd8 to ip-10-2-2-104.ec2.internal
  Normal   SuccessfulMountVolume   36s                kubelet, ip-10-2-2-104.ec2.internal  MountVolume.SetUp succeeded for volume "default-token-x8hbp"
  Warning  FailedAttachVolume      34s (x3 over 36s)  attachdetach-controller              AttachVolume.Attach failed for volume "pvc-205551ea-f89e-11e8-91a5-0e0920e8c426" : "Error attaching EBS volume \"vol-0d793cdeb8625d6af\"" to instance "i-07305ad491b7d9a9b" since volume is in "creating" state
  Normal   SuccessfulAttachVolume  28s                attachdetach-controller              AttachVolume.Attach succeeded for volume "pvc-205551ea-f89e-11e8-91a5-0e0920e8c426"
  Normal   SuccessfulMountVolume   17s                kubelet, ip-10-2-2-104.ec2.internal  MountVolume.SetUp succeeded for volume "pvc-205551ea-f89e-11e8-91a5-0e0920e8c426"
  Normal   Pulled                  16s (x2 over 17s)  kubelet, ip-10-2-2-104.ec2.internal  Container image "postgres:9.6.2-alpine" already present on machine
  Normal   Created                 16s (x2 over 17s)  kubelet, ip-10-2-2-104.ec2.internal  Created container
  Normal   Started                 16s (x2 over 17s)  kubelet, ip-10-2-2-104.ec2.internal  Started container
  Warning  BackOff                 14s (x2 over 15s)  kubelet, ip-10-2-2-104.ec2.internal  Back-off restarting failed container
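For completeness: describe only shows Exit Code 1 with no message, so the actual crash reason has to come from the previous container's logs, which (assuming the same pod name) can be pulled with:

kubectl logs dicro-postgresql-5b46dcbd8b-rvsd8 -n web-external --previous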

charlie-charlie avatar Dec 05 '18 15:12 charlie-charlie

Have you solved the issue?

MohammedFadin avatar Jun 01 '19 20:06 MohammedFadin

same problem +1

cjen07 avatar Jun 12 '19 18:06 cjen07

same issue...any solutions so far?

pboehma avatar Jun 19 '19 11:06 pboehma

So I fixed mine; the cause was a missing POSTGRES_PASSWORD:

env:
  - name: POSTGRES_PASSWORD
    value: mysecretpassword

I found this out by typing

kubectl logs postgres-pod -p

and I got this:

Error: Database is uninitialized and superuser password is not specified.
       You must specify POSTGRES_PASSWORD for the superuser. Use
       "-e POSTGRES_PASSWORD=password" to set it in "docker run".

       You may also use POSTGRES_HOST_AUTH_METHOD=trust to allow all connections
       without a password. This is *not* recommended. See PostgreSQL
       documentation about "trust":
       https://www.postgresql.org/docs/current/auth-trust.html

Once I updated the yaml file, I deleted the pod and recreated it, and it started with no problems. Hope that helps!
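For anyone following along, the delete-and-recreate step is just the usual two commands (postgres-pod.yaml stands in for whatever your manifest file is called):

kubectl delete pod postgres-pod
kubectl apply -f postgres-pod.yaml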

richardforth avatar Mar 02 '20 21:03 richardforth

I ran the command below:

kubectl logs postgres-pod -p

and I got the same error that @richardforth mentioned. I added the user and password, and that resolved the problem. Below I left the example:

apiVersion: v1
kind: Pod
metadata:
  name: postgres-pod
  labels:
    name: postgres-pod
    app: demo-voting-app
spec:
  containers:
  - name: postgres
    image: postgres:9.4
    env:
      - name: POSTGRES_USER
        value: admin
      - name: POSTGRES_PASSWORD
        value: admin
    ports:
      - containerPort: 5432
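Worth noting: a hard-coded value is fine for a demo, but for anything real you would pull the password from a Secret instead. A minimal sketch (the Secret name postgres-secret and its key password are placeholders):

env:
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-secret
        key: password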

DAAC avatar Mar 31 '20 04:03 DAAC

@DAAC this worked

Devopsforum avatar Nov 11 '20 04:11 Devopsforum

apiVersion: v1
kind: Pod
metadata:
  name: postgres-pod
  labels:
    name: postgres-pod
    app: demo-voting-app
spec:
  containers:
  - name: postgres
    image: postgres:9.4
    env:
      - name: POSTGRES_USER
        value: admin
      - name: POSTGRES_PASSWORD
        value: admin
    ports:
      - containerPort: 5432

neerajkr25 avatar Jun 09 '21 22:06 neerajkr25

I got the same problem. Then I added POSTGRES_USER and POSTGRES_PASSWORD, and the pod is up. However, when I run the voting app and select cat/dog, the vote does not show up on the result page. Do I need to create the Azure Postgres SQL database separately?

pdusita avatar Sep 29 '21 09:09 pdusita