
ArangoDB fails to start in Kubernetes on Docker for Windows

Open · ugumba opened this issue on Jan 21 '19 · 3 comments

Hi! I'm trying to add ArangoDB to my local cluster (Docker for Windows 2.0.1.0 Edge with Kubernetes 1.13.0) using the Helm charts:

$URLPREFIX="https://github.com/arangodb/kube-arangodb/releases/download/0.3.7"
helm install $URLPREFIX/kube-arangodb-crd.tgz -n arango-crd
helm install $URLPREFIX/kube-arangodb.tgz --set=DeploymentReplication.Create=false -n arango-op
helm install $URLPREFIX/kube-arangodb-storage.tgz -n arango-storage
kubectl apply -f https://raw.githubusercontent.com/arangodb/kube-arangodb/master/examples/single-server-no-auth.yaml

The server pod starts up and initializes local storage under .docker\Volumes in my host profile as expected. However, it immediately terminates; if I'm quick enough, I manage to capture this log:

2019-01-21T13:56:43Z [1] INFO ArangoDB 3.4.0 [linux] 64bit, using jemalloc, build tags/v3.4.0-0-g3a7df19189, VPack 0.1.33, RocksDB 5.16.0, ICU 58.1, V8 5.7.492.77, OpenSSL 1.1.0h  27 Mar 2018
2019-01-21T13:56:43Z [1] INFO detected operating system: Linux version 4.9.125-linuxkit (root@659b6d51c354) (gcc version 6.4.0 (Alpine 6.4.0) ) #1 SMP Fri Sep 7 08:20:28 UTC 2018
2019-01-21T13:56:43Z [1] INFO {authentication} Jwt secret not specified, generating...
2019-01-21T13:56:43Z [1] WARNING {startup} using default storage engine 'rocksdb', as no storage engine was explicitly selected via the `--server.storage-engine` option
2019-01-21T13:56:43Z [1] INFO {startup} please note that default storage engine has changed from 'mmfiles' to 'rocksdb' in ArangoDB 3.4
2019-01-21T13:56:43Z [1] INFO using storage engine rocksdb
2019-01-21T13:56:43Z [1] INFO {cluster} Starting up with role SINGLE
2019-01-21T13:56:43Z [1] INFO {syscall} file-descriptors (nofiles) hard limit is 1048576, soft limit is 1048576
2019-01-21T13:56:43Z [1] WARNING {threads} --server.threads (64) is more than eight times the number of cores (2), this might overload the server
2019-01-21T13:56:44Z [1] INFO {authentication} Authentication is turned off, authentication for unix sockets is turned on
2019-01-21T13:56:46Z [1] INFO created base application directory '/var/lib/arangodb3-apps/_db'
2019-01-21T13:56:47Z [1] INFO using endpoint 'http+tcp://[::]:8529' for non-encrypted requests
2019-01-21T13:56:49Z [1] INFO {authentication} Creating user "root"
2019-01-21T13:56:49Z [1] INFO ArangoDB (version 3.4.0 [linux]) is ready for business. Have fun!
2019-01-21T13:56:51Z [1] INFO control-c received, beginning shut down sequence
2019-01-21T13:56:52Z [1] INFO ArangoDB has been shut down
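
(Instead of racing the restart, I could presumably also pull the log from the exited container afterwards; a rough sketch, where the pod name is a placeholder:)

# <arangodb-pod-name> is a placeholder for the actual pod name
kubectl logs --previous <arangodb-pod-name>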

Subsequent attempts by Kubernetes to revive the pod fail with:

2019-01-21T13:59:17Z [1] INFO ArangoDB 3.4.0 [linux] 64bit, using jemalloc, build tags/v3.4.0-0-g3a7df19189, VPack 0.1.33, RocksDB 5.16.0, ICU 58.1, V8 5.7.492.77, OpenSSL 1.1.0h  27 Mar 2018
2019-01-21T13:59:17Z [1] INFO detected operating system: Linux version 4.9.125-linuxkit (root@659b6d51c354) (gcc version 6.4.0 (Alpine 6.4.0) ) #1 SMP Fri Sep 7 08:20:28 UTC 2018
2019-01-21T13:59:17Z [1] INFO {authentication} Jwt secret not specified, generating...
2019-01-21T13:59:17Z [1] INFO using storage engine rocksdb
2019-01-21T13:59:17Z [1] INFO {cluster} Starting up with role SINGLE
2019-01-21T13:59:17Z [1] INFO {syscall} file-descriptors (nofiles) hard limit is 1048576, soft limit is 1048576
2019-01-21T13:59:17Z [1] WARNING {threads} --server.threads (64) is more than eight times the number of cores (2), this might overload the server
2019-01-21T13:59:18Z [1] INFO {authentication} Authentication is turned off, authentication for unix sockets is turned on
2019-01-21T13:59:18Z [1] FATAL {startup} column family 'Documents' is missing in database. if you are upgrading from an earlier alpha or beta version of ArangoDB 3.2, it is required to restart with a new database directory and re-import data

Could I be doing something wrong - or am I too optimistic in hoping this should work in Docker for Windows?

ugumba · Jan 21 '19 14:01

I just ran into this as well with the agents, though I'm using the kubectl manifests instead of helm, and I'm running in cluster mode.

2019-06-14T22:19:07Z [1] INFO ArangoDB 3.4.6-1 [linux] 64bit, using jemalloc, build tags/v3.4.6.1-0-gd3c504bfc9, VPack 0.1.33, RocksDB 5.16.0, ICU 58.1, V8 5.7.492.77, OpenSSL 1.1.0j  20 Nov 2018
2019-06-14T22:19:07Z [1] INFO detected operating system: Linux version 4.9.125-linuxkit (root@659b6d51c354) (gcc version 6.4.0 (Alpine 6.4.0) ) #1 SMP Fri Sep 7 08:20:28 UTC 2018
2019-06-14T22:19:07Z [1] INFO using storage engine rocksdb
2019-06-14T22:19:07Z [1] INFO {cluster} Starting up with role AGENT
2019-06-14T22:19:07Z [1] INFO {syscall} file-descriptors (nofiles) hard limit is 1048576, soft limit is 1048576
2019-06-14T22:19:07Z [1] WARNING {threads} --server.threads (64) is more than eight times the number of cores (4), this might overload the server
2019-06-14T22:19:07Z [1] INFO {authentication} Authentication is turned on (system only), authentication for unix sockets is turned on
2019-06-14T22:19:07Z [1] FATAL {startup} column family 'Documents' is missing in database. if you are upgrading from an earlier alpha or beta version of ArangoDB 3.2, it is required to restart with a new database directory and re-import data
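
(For reference, by "cluster mode" I mean an ArangoDeployment with mode set to Cluster; a minimal spec along the lines of the repo's examples looks roughly like this, with an illustrative name:)

apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: "example-cluster"   # illustrative name
spec:
  mode: Cluster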

jaredpetersen · Jun 17 '19 15:06

I just tried to start the latest image from the repository on Windows 10 using Linux containers and got the same problem.

It looks like it expects an already-formatted database with the default 'Documents' column family inside.

The docker command is as follows:

docker run -e ARANGO_NO_AUTH=1 -p 8529:8529 -d -v D:\virtual-machines\arangodb:/var/lib/arangodb3 --name arangodb-instance arangodb

which generates the following:

..FATAL {startup} column family 'Documents' is missing in database. if you are upgrading from an earlier alpha or beta version of ArangoDB 3.2, it is required to restart with a new database directory and re-import data

woodmawa · Aug 06 '19 15:08

I have tried that again without the volume mapping to my local D drive, and it starts using the default storage. Unfortunately, I don't want to store the DB on the default Windows C drive. How do you create a valid empty database on another drive mapping and then reference that valid empty DB? I can play with this container for very small datasets, but for larger ones I need to tell Docker to use mapped storage on my D drive. How do you create a valid DB ahead of time to reference?
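
One thing I'm considering trying is a Docker named volume instead of the Windows bind mount, since a named volume lives inside the Docker VM's Linux filesystem rather than on the NTFS drive. A rough sketch (the volume name is just an example):

# 'arangodb-data' is just an example volume name
docker volume create arangodb-data
docker run -e ARANGO_NO_AUTH=1 -p 8529:8529 -d -v arangodb-data:/var/lib/arangodb3 --name arangodb-instance arangodb

That doesn't put the files directly on D: though, so it only helps if the Docker VM's disk image itself can be moved off the C drive.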

woodmawa · Aug 06 '19 16:08