mongo-k8s-sidecar
Permission error
Hi,
I get this error when trying the configuration below:
Error in workloop { [Error: [object Object]]
  message:
   { kind: 'Status',
     apiVersion: 'v1',
     metadata: {},
     status: 'Failure',
     message:
      'pods is forbidden: User "system:serviceaccount:mongo:default" cannot list resource "pods" in API group "" in the namespace "mongo"',
     reason: 'Forbidden',
     details: { kind: 'pods' },
     code: 403 },
  statusCode: 403 }
Configuration:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: mongo
spec:
  serviceName: mongo
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: mongo
      component: database
  template:
    metadata:
      labels:
        app: mongo
        component: database
    spec:
      terminationGracePeriodSeconds: 10
      volumes:
        - name: mongovol
          persistentVolumeClaim:
            claimName: mongovol
      containers:
        - name: mongo
          image: mongo:4
          args:
            - "--replSet"
            - "rsmongo"
            - "--bind_ip_all"
          ports:
            - name: mongo # Open pod port
              containerPort: 27017
              protocol: TCP
          volumeMounts:
            - name: mongovol
              mountPath: /data/db
              subPath: fitnessdb
          env:
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secrets
                  key: mongo-password
          envFrom:
            - configMapRef:
                name: mongo-config
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "app=mongo,component=database"
            - name: KUBERNETES_MONGO_SERVICE_NAME
              value: "mongo"
            - name: KUBE_NAMESPACE
              value: "mongo"
            - name: MONGODB_USERNAME
              valueFrom:
                configMapKeyRef:
                  name: mongo-config
                  key: MONGO_INITDB_ROOT_USERNAME
            - name: MONGODB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secrets
                  key: mongo-password
            - name: MONGODB_DATABASE
              value: admin
What could be the problem?
Did you forget to create a headless Service? Note that it must be named to match the StatefulSet's serviceName ("mongo"), and its selector must match the pod labels. Like this:
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    app: mongo
    component: database
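That said, the 403 in the logs points at RBAC rather than networking: the sidecar is running under the `default` ServiceAccount in the `mongo` namespace, and that account is not allowed to list pods. A minimal sketch of a Role and RoleBinding granting that access (the `pod-reader` names are illustrative) could look like:

```yaml
# Grant the default ServiceAccount in "mongo" read access to pods
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # illustrative name
  namespace: mongo
rules:
  - apiGroups: [""]         # "" is the core API group, where pods live
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding  # illustrative name
  namespace: mongo
subjects:
  - kind: ServiceAccount
    name: default
    namespace: mongo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Since `KUBE_NAMESPACE` is set, a namespaced Role should suffice; if the sidecar needed to watch pods across namespaces, a ClusterRole/ClusterRoleBinding would be required instead.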
Thanks a lot! Now it almost works. The "mongo" container fails, though, because of some storage-related issue. I am working with an NFS-based PersistentVolume. Do you know what could be the issue here? Here are the logs:
{"t":{"$date":"2020-11-04T19:37:06.648+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2020-11-04T19:37:06.650+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2020-11-04T19:37:06.651+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2020-11-04T19:37:06.651+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2020-11-04T19:37:06.651+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"mongo-0"}}
{"t":{"$date":"2020-11-04T19:37:06.652+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.1","gitVersion":"ad91a93a5a31e175f5cbf8c69561e788bbc55ce1","openSSLVersion":"OpenSSL 1.1.1 11 Sep 2018","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu1804","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2020-11-04T19:37:06.652+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"18.04"}}}
{"t":{"$date":"2020-11-04T19:37:06.652+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"},"replication":{"replSet":"rsmongo"},"security":{"authorization":"enabled"}}}}
{"t":{"$date":"2020-11-04T19:37:06.668+00:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=481M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
{"t":{"$date":"2020-11-04T19:37:07.272+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":17,"message":"[1604518627:272072][1:0x7f5914123a80], connection: __posix_open_file, 806: /data/db/WiredTiger.wt: handle-open: open: File exists"}}
{"t":{"$date":"2020-11-04T19:37:07.273+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"unexpected file WiredTiger.wt found, renamed to WiredTiger.wt.36"}}
{"t":{"$date":"2020-11-04T19:37:07.273+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":1,"message":"[1604518627:273776][1:0x7f5914123a80], connection: __posix_open_file, 806: /data/db/WiredTiger.wt: handle-open: open: Operation not permitted"}}
{"t":{"$date":"2020-11-04T19:37:07.284+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":17,"message":"[1604518627:284350][1:0x7f5914123a80], connection: __posix_open_file, 806: /data/db/WiredTiger.wt: handle-open: open: File exists"}}
{"t":{"$date":"2020-11-04T19:37:07.285+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"unexpected file WiredTiger.wt found, renamed to WiredTiger.wt.37"}}
{"t":{"$date":"2020-11-04T19:37:07.286+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":1,"message":"[1604518627:286521][1:0x7f5914123a80], connection: __posix_open_file, 806: /data/db/WiredTiger.wt: handle-open: open: Operation not permitted"}}
{"t":{"$date":"2020-11-04T19:37:07.297+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":17,"message":"[1604518627:297050][1:0x7f5914123a80], connection: __posix_open_file, 806: /data/db/WiredTiger.wt: handle-open: open: File exists"}}
{"t":{"$date":"2020-11-04T19:37:07.298+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"unexpected file WiredTiger.wt found, renamed to WiredTiger.wt.38"}}
{"t":{"$date":"2020-11-04T19:37:07.299+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":1,"message":"[1604518627:299213][1:0x7f5914123a80], connection: __posix_open_file, 806: /data/db/WiredTiger.wt: handle-open: open: Operation not permitted"}}
{"t":{"$date":"2020-11-04T19:37:07.301+00:00"},"s":"W", "c":"STORAGE", "id":22347, "ctx":"initandlisten","msg":"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade."}
{"t":{"$date":"2020-11-04T19:37:07.301+00:00"},"s":"F", "c":"STORAGE", "id":28595, "ctx":"initandlisten","msg":"Terminating.","attr":{"reason":"1: Operation not permitted"}}
{"t":{"$date":"2020-11-04T19:37:07.301+00:00"},"s":"F", "c":"-", "id":23091, "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":28595,"file":"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp","line":1101}}
{"t":{"$date":"2020-11-04T19:37:07.301+00:00"},"s":"F", "c":"-", "id":23092, "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}
Never mind, it works now, it was an NFS issue. Thanks a lot for the great guide!
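For anyone who lands here with the same WiredTiger errors: "Operation not permitted" on /data/db under NFS is commonly a file-locking or squashed-permissions problem on the export (WiredTiger relies on POSIX file locks, and root-squash can block mongod from reopening its own files). As a sketch only (server, path, and size below are placeholders, and the export itself may also need `no_root_squash`), a PV can pin the relevant NFS mount options explicitly:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongovol              # matches the claimName used by the StatefulSet
spec:
  capacity:
    storage: 10Gi             # placeholder size
  accessModes:
    - ReadWriteOnce
  mountOptions:
    - nfsvers=4.1             # NFSv4 has built-in locking support
    - hard
  nfs:
    server: nfs.example.com   # placeholder server
    path: /exports/mongo      # placeholder export path
```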
Fine! But in my opinion, NFS is not a good choice for a MongoDB volume. You should use a very fast disk (SSD?) formatted with xfs. Here is my MongoDB volume configuration:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-mongodb
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  fstype: "xfs"
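To consume a StorageClass like this from a StatefulSet, one option is to replace the pre-created PVC with a volumeClaimTemplates entry, so each replica gets its own SSD-backed volume. A sketch (the size is illustrative) of the fragment that would go under the StatefulSet's `spec`:

```yaml
  volumeClaimTemplates:
    - metadata:
        name: mongovol        # referenced by the container's volumeMounts
      spec:
        storageClassName: fast-mongodb
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi     # illustrative size
```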