dragonfly
Pod not running on K8s on ARM hardware with all default settings
Describe the bug
I installed Dragonfly with the following command from your webpage (https://www.dragonflydb.io/docs/getting-started/kubernetes#more-customization):

helm upgrade -n dragonfly --install dragonfly oci://ghcr.io/dragonflydb/dragonfly/helm/dragonfly --version v1.13.0
I get the following from kubectl -n dragonfly get all:

NAME                             READY   STATUS             RESTARTS        AGE
pod/dragonfly-6d7d5b6bb4-rszth   0/1     CrashLoopBackOff   5 (2m30s ago)   5m58s

NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/dragonfly   ClusterIP   10.10.7.208   <none>        6379/TCP   6m

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dragonfly   0/1     1            0           5m59s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/dragonfly-6d7d5b6   0/1
Also, kubectl -n dragonfly logs (and logs -p) dragonfly-6d7d5b6bb4-rszth returns nothing.
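Since the logs are empty, the crash reason has to come from the pod events and the previous container state instead. A sketch of the usual diagnostics (namespace and pod name taken from the output above; only meaningful when run against the affected cluster):

```shell
NS=dragonfly
POD=dragonfly-6d7d5b6bb4-rszth   # pod name from the `get all` output above

# `logs` is empty because the process dies before writing anything to stdout;
# `describe` shows the last container state (reason, exit code) and the events.
echo "inspecting $POD in namespace $NS"
command -v kubectl >/dev/null && {
  kubectl -n "$NS" describe pod "$POD"
  kubectl -n "$NS" get events --sort-by=.lastTimestamp
} || true
```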
To Reproduce
Execute helm upgrade -n dragonfly --install dragonfly oci://ghcr.io/dragonflydb/dragonfly/helm/dragonfly --version v1.13.0 on an ARM cluster.

Expected behavior
Dragonfly pod running.
Environment (please complete the following information):
- OS: Gentoo
- Kernel: 6.10
- Containerized?: Kubernetes
- Dragonfly Version: 1.13.0
- Processor family: arm64
helm output (--dry-run)
Pulled: ghcr.io/dragonflydb/dragonfly/helm/dragonfly:v1.13.0
Digest: sha256:d0f41384042b08c2690f500f76d31d269379bda0055c4e3ea57ee978e90c9c7d
NAME: dragonfly
LAST DEPLOYED: Thu Jan 25 18:12:28 2024
NAMESPACE: dragonfly
STATUS: pending-install
REVISION: 1
TEST SUITE: None
HOOKS:
MANIFEST:
---
# Source: dragonfly/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dragonfly
  namespace: dragonfly
  labels:
    helm.sh/chart: dragonfly-v1.13.0
    app.kubernetes.io/name: dragonfly
    app.kubernetes.io/instance: dragonfly
    app.kubernetes.io/version: "v1.13.0"
    app.kubernetes.io/managed-by: Helm
---
# Source: dragonfly/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: dragonfly
  namespace: dragonfly
  labels:
    helm.sh/chart: dragonfly-v1.13.0
    app.kubernetes.io/name: dragonfly
    app.kubernetes.io/instance: dragonfly
    app.kubernetes.io/version: "v1.13.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 6379
      targetPort: dragonfly
      protocol: TCP
      name: dragonfly
  selector:
    app.kubernetes.io/name: dragonfly
    app.kubernetes.io/instance: dragonfly
---
# Source: dragonfly/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dragonfly
  namespace: dragonfly
  labels:
    helm.sh/chart: dragonfly-v1.13.0
    app.kubernetes.io/name: dragonfly
    app.kubernetes.io/instance: dragonfly
    app.kubernetes.io/version: "v1.13.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: dragonfly
      app.kubernetes.io/instance: dragonfly
  template:
    metadata:
      annotations:
      labels:
        app.kubernetes.io/name: dragonfly
        app.kubernetes.io/instance: dragonfly
    spec:
      serviceAccountName: dragonfly
      containers:
        - name: dragonfly
          image: "docker.dragonflydb.io/dragonflydb/dragonfly:v1.13.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: dragonfly
              containerPort: 6379
              protocol: TCP
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - /usr/local/bin/healthcheck.sh
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - /bin/sh
                - /usr/local/bin/healthcheck.sh
            failureThreshold: 3
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          args:
            - "--alsologtostderr"
          resources:
            limits: {}
            requests: {}
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace dragonfly -l "app.kubernetes.io/name=dragonfly,app.kubernetes.io/instance=dragonfly" -o jsonpath="{.items[0].metadata.name}")
  export CONTAINER_PORT=$(kubectl get pod --namespace dragonfly $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
  echo "You can use redis-cli to connect against localhost:6379"
  kubectl --namespace dragonfly port-forward $POD_NAME 6379:$CONTAINER_PORT
The same issue persists when using the operator or a default values.yaml.
@Pothulapati can you pls take a look?
I'm able to get the operator (dragonflydb/operator:v1.1.2) to run on ARM64 but not the cluster (dragonflydb/dragonfly:v1.17.1).
---
# yaml-language-server: $schema=https://kubernetes-schemas.pages.dev/dragonflydb.io/dragonfly_v1alpha1.json
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  name: dragonfly
spec:
  image: ghcr.io/dragonflydb/dragonfly:v1.17.1
  replicas: 3
  env:
    - name: MAX_MEMORY
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
          divisor: 1Mi
  args:
    - --maxmemory=$(MAX_MEMORY)Mi
    - --proactor_threads=2
    - --cluster_mode=emulated
    - --lock_on_hashtags
  resources:
    requests:
      cpu: 100m
    limits:
      memory: 512Mi
dragonfly-0 status:
  State:          Waiting
    Reason:       CrashLoopBackOff
  Last State:     Terminated
    Reason:       Error
    Exit Code:    132
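For what it's worth, exit code 132 decodes to a fatal signal: container exit codes above 128 are 128 plus the signal number, and 132 - 128 = 4 is SIGILL (illegal instruction), which on arm64 usually means the binary was built for CPU features the host cores don't implement. A minimal sketch of the decoding (the /proc/cpuinfo check in the comment is a suggestion, not from the report):

```shell
# Exit codes above 128 encode the fatal signal: code = 128 + signal number.
EXIT_CODE=132
SIG_NUM=$((EXIT_CODE - 128))
echo "signal number: $SIG_NUM"   # prints: signal number: 4
kill -l "$SIG_NUM"               # prints the signal name (ILL in bash, i.e. SIGILL)

# On the arm64 node itself, the CPU features the cores advertise can be
# listed with, e.g.:
#   grep -m1 Features /proc/cpuinfo
# and compared against what the dragonfly binary was compiled for.
```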