docker-elk
Persist elasticsearch.keystore or add data before the first container starts?
Hi, is there any way to persist the elasticsearch.keystore, or to add things to the keystore before the container starts Elasticsearch?
My problem is that I need to add a client secret before the container starts for the first time, and I also want to persist this data after a docker-compose stop.
- I tried:
docker-compose run elasticsearch bin/elasticsearch-keystore add xpack.security.authc.realms.oidc.oidc1.rp.client_secret
But the data is gone on the next startup.
- I tried to add the following to the docker-compose.yml:
elasticsearch:
  ...
  volumes:
    ...
    - type: bind
      source: ./elasticsearch/config/elasticsearch.keystore
      target: /usr/share/elasticsearch/config/elasticsearch.keystore
This also does not work, because the keystore must exist before startup. Creating an empty file beforehand does not help, because it is in the wrong format.
I think the easiest way would be to:
- Initialize the keystore using Docker, without Compose:
$ docker run --rm -v $(pwd)/elasticsearch/config/:/usr/share/elasticsearch/config/ docker.elastic.co/elasticsearch/elasticsearch:7.11.1 elasticsearch-keystore create
- Mount the pre-created keystore via the Compose file using a bind mount, like you tried to do.
That's a fair point, we should persist the keystore by default. That would be quite easy to do if we could define its location, but apparently that's not possible. Maybe I'm wrong?
@cebor were you successful using the approach I suggested?
No, then the build does not work anymore.
What I do now is: I wrote a routine that restores the keystore on every startup. ^^
Thanks anyway.
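For reference, a minimal sketch of what such a startup routine could look like, assuming the persisted keystore lives in a bind-mounted directory outside the config directory (the /backup path and the wrapper script name are made up for illustration):

#!/usr/bin/env bash
# restore-keystore-entrypoint.sh (hypothetical wrapper used as the container entrypoint)
set -e

# /backup is assumed to be a bind-mounted host directory holding the persisted keystore
if [ -f /backup/elasticsearch.keystore ]; then
  cp /backup/elasticsearch.keystore /usr/share/elasticsearch/config/elasticsearch.keystore
fi

# hand over to the stock entrypoint of the Elasticsearch image
exec /usr/local/bin/docker-entrypoint.sh "$@"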
edit: Does not work. Please ignore. See https://github.com/deviantony/docker-elk/issues/579#issuecomment-926618565.
@cebor it works fine for me, but I noticed some java.nio.file.FileSystemException errors in Docker for Windows, if that's what you use.
I'll write a more detailed procedure in the README but here is what I did, step by step:
1. Create keystore
$ docker run --rm -v $(pwd)/elasticsearch/config/:/usr/share/elasticsearch/config/ docker.elastic.co/elasticsearch/elasticsearch:7.11.2 elasticsearch-keystore create
Created elasticsearch keystore in /usr/share/elasticsearch/config/elasticsearch.keystore
2. Change ownership
The created keystore is owned by root, unless you run the previous command with -u <user>, which I didn't. We need to change its owner to 1000, which is the uid of the elasticsearch user inside the Elasticsearch image.
$ ls -l elasticsearch/config/elasticsearch.keystore
-rw-rw---- 1 root root 199 Mar 27 18:09 elasticsearch/config/elasticsearch.keystore
$ sudo chown -v 1000 elasticsearch/config/elasticsearch.keystore
changed ownership of 'elasticsearch/config/elasticsearch.keystore' from root to 1000
3. Mount the keystore in the expected location
$ git diff
diff --git a/docker-compose.yml b/docker-compose.yml
index 669e337..a9ee0ea 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -11,6 +11,10 @@ services:
         source: ./elasticsearch/config/elasticsearch.yml
         target: /usr/share/elasticsearch/config/elasticsearch.yml
         read_only: true
+      - type: bind
+        source: ./elasticsearch/config/elasticsearch.keystore
+        target: /usr/share/elasticsearch/config/elasticsearch.keystore
+        read_only: false
       - type: volume
         source: elasticsearch
         target: /usr/share/elasticsearch/data
4. Run
$ docker-compose up elasticsearch
elasticsearch_1 | {"type": "server", "timestamp": "2021-03-27T17:40:17,308Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "51691320c358", "message": "version[7.11.2], pid[7], build[default/docker/3e5a16cfec50876d20ea77b075070932c6464c7d/2021-03-06T05:54:38.141101Z], OS[Linux/4.19.128-microsoft-standard/amd64], JVM[AdoptOpenJDK/OpenJDK 64-Bit Server VM/15.0.1/15.0.1+9]" }
...
$ curl -D- localhost:9200 -u elastic:changeme
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 542
{
"name" : "51691320c358",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "ZUQ-rbdUR_20YCN6Qe4fOA",
"version" : {
"number" : "7.11.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "3e5a16cfec50876d20ea77b075070932c6464c7d",
"build_date" : "2021-03-06T05:54:38.141101Z",
"build_snapshot" : false,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
@antoineco, I tried your workaround (create the keystore with uid 1000 and mount/bind it as a read-write volume), but I also get a java.nio.file.FileSystemException error on Linux. Was your error the same?
$ docker-compose up
[...]
Creating elasticsearch ... done
Attaching to elasticsearch
elasticsearch | Exception in thread "main" java.nio.file.FileSystemException: /usr/share/elasticsearch/config/elasticsearch.keystore.tmp -> /usr/share/elasticsearch/config/elasticsearch.keystore: Device or resource busy
elasticsearch | at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:100)
elasticsearch | at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)
elasticsearch | at java.base/sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:415)
elasticsearch | at java.base/sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:267)
elasticsearch | at java.base/java.nio.file.Files.move(Files.java:1426)
elasticsearch | at org.elasticsearch.common.settings.KeyStoreWrapper.save(KeyStoreWrapper.java:523)
elasticsearch | at org.elasticsearch.common.settings.AddStringKeyStoreCommand.executeCommand(AddStringKeyStoreCommand.java:101)
elasticsearch | at org.elasticsearch.common.settings.BaseKeyStoreCommand.execute(BaseKeyStoreCommand.java:57)
elasticsearch | at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:75)
elasticsearch | at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:116)
elasticsearch | at org.elasticsearch.cli.MultiCommand.execute(MultiCommand.java:80)
elasticsearch | at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:116)
elasticsearch | at org.elasticsearch.cli.Command.main(Command.java:79)
elasticsearch | at org.elasticsearch.common.settings.KeyStoreCli.main(KeyStoreCli.java:32)
elasticsearch exited with code 1
@jeansalama yes I had the same error. I was convinced it was due to the way Docker for Desktop shares files across WSL instances, but you're saying it happens also on a regular Linux host?
edit: ugh, you're right. When a single file is mounted, it can be edited in place but not re-created because this would change its inode. I suspect this is exactly what's happening here. I'm not sure how it worked the first time but I was probably not mounting the keystore where I expected. Let me try that again, this time by mounting the entire config directory.
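For reference, mounting the whole directory rather than the single file would look roughly like this in the Compose file (sketch, untested; as pointed out in the next comment, the mounted directory then has to provide every config file Elasticsearch expects, such as elasticsearch.yml):

# sketch: bind-mount the entire config directory instead of the single keystore file
elasticsearch:
  volumes:
    - type: bind
      source: ./elasticsearch/config
      target: /usr/share/elasticsearch/config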
Yep, and I have just found a warning about it in the official Elastic documentation, along with instructions on how to handle it.
So I finally chose to do it like @cebor, overriding the entrypoint of the Elasticsearch Docker image on startup. It is much simpler than copying all the original config files, then modifying elasticsearch.yml and the keystore to commit them all :/
No, then the build does not work anymore.
What I do now is: I wrote a routine that restores the keystore on every startup. ^^
Thanks anyway.
@jeansalama can you share the file you use to restore the keystore on startup? I am very new to ELK and Docker. I created the keystore as described in the ELK documentation: https://www.elastic.co/guide/en/elasticsearch/reference/8.4/docker.html#docker-keystore-bind-mount. I passed the keystore password as an environment variable in the docker-compose.yml file.
volumes:
  - ./elasticsearch/config/elasticsearch.keystore:/usr/share/elasticsearch/config/elasticsearch.keystore:ro,z
  - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro,z
  - elasticsearch:/usr/share/elasticsearch/data:z
  # (!) TLS certificates. Generate using instructions from tls/README.md.
  - ./tls/elasticsearch/elasticsearch.p12:/usr/share/elasticsearch/config/elasticsearch.p12:ro,z
  - ./tls/elasticsearch/http.p12:/usr/share/elasticsearch/config/http.p12:ro,z
ports:
  - "9200:9200"
  - "9300:9300"
environment:
  KEYSTORE_PASSWORD: ${KEYSTORE_PASSWORD}
  ES_JAVA_OPTS: -Xms512m -Xmx512m
But I am still facing the issue you described above:
docker-elk-elasticsearch-1 | Enter password for the elasticsearch keystore : Enter password for the elasticsearch keystore : Exception in thread "main" java.nio.file.FileSystemException: /usr/share/elasticsearch/config/elasticsearch.keystore.tmp -> /usr/share/elasticsearch/config/elasticsearch.keystore: Device or resource busy
I cannot solve the problem even after going through your conversation with @antoineco.
@LeoNaveen10 You could inject some initialization logic to copy the keystore from a mounted directory using the following technique:
# docker-compose.yml
version: "3.7"
services:
  elasticsearch:
    volumes:
      # ...
      - ./keystore/my.keystore:/my/mounted/directory/my.keystore:ro,z
    entrypoint:
      - /bin/tini
      - --
      - bash
      - -c
      - >
        cp /my/mounted/directory/my.keystore /usr/share/elasticsearch/config/elasticsearch.keystore;
        exec /usr/local/bin/docker-entrypoint.sh;
    # ...
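For what it's worth, you would still create (and populate) the local keystore yourself before starting the stack. A rough sketch, reusing the throwaway-container commands from earlier in this thread (the version tag, the ./keystore/my.keystore path and the OIDC secret name are just examples taken from this discussion):

# create the keystore and add a secret from throwaway containers; the whole config directory is mounted, so the single-file inode issue does not apply here
$ docker run --rm -v $(pwd)/elasticsearch/config/:/usr/share/elasticsearch/config/ docker.elastic.co/elasticsearch/elasticsearch:7.11.2 elasticsearch-keystore create
$ docker run --rm -it -v $(pwd)/elasticsearch/config/:/usr/share/elasticsearch/config/ docker.elastic.co/elasticsearch/elasticsearch:7.11.2 elasticsearch-keystore add xpack.security.authc.realms.oidc.oidc1.rp.client_secret
# move it to the location mounted above and fix ownership/permissions as discussed elsewhere in this thread
$ mkdir -p keystore && sudo mv elasticsearch/config/elasticsearch.keystore keystore/my.keystore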
[disclaimer - I'm completely new to docker and ELK]
Hi there, I'm facing the same issue as LeoNaveen10 and jeansalama. Could anyone provide a working solution or starting point that would allow storing secrets in the Elasticsearch keystore?
To sum up: I've tried the official method of generating a keystore containing my secrets and then binding the file as a volume, but it won't work. Later on, I found that mounting the file directly to /usr/share/elasticsearch/config/elasticsearch.keystore causes issues and that it would be better to bind the whole config directory instead, but I can't manage to do that, as I would overwrite other files generated and placed in that directory, such as elasticsearch.yml. I'm a bit confused about how to proceed.
Thanks.
@antoineleguillou the message right above yours provides a ready-to-use solution ☝️ Just replace the file name in the example with the name of your local keystore file.
You still need to create the keystore yourself, but instead of mounting it inside Elasticsearch's config directory, you mount it in another location of your choice (anywhere), then copy it to Elasticsearch's config directory right before calling Elasticsearch's startup script. With this approach, the keystore isn't mounted as a single-file volume, so Elasticsearch can manipulate it without complaining.
Just make sure the keystore is group-writable before starting Elasticsearch (chmod g+wr my.keystore), otherwise Elasticsearch might not be able to overwrite it on start.
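Concretely, reusing the commands from the step-by-step procedure earlier in this thread (uid 1000 being the elasticsearch user inside the image), that could be something like:

$ sudo chown -v 1000 my.keystore
$ sudo chmod -v g+wr my.keystore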
@antoineco thanks a lot for your message, and I'm sorry for my misunderstanding. Could you enlighten me on the difference between copying a file and mounting a volume in that case? Thanks a lot.
In theory there isn't much difference, but upon startup Elasticsearch does something unusual: instead of updating the keystore file in place, it creates a temporary version of it called elasticsearch.keystore.tmp, then tries to move it to elasticsearch.keystore.
Translated to filesystem operations, this means that elasticsearch.keystore.tmp and elasticsearch.keystore are represented using different inodes, and when a single file is mounted, its inode cannot change.
Here are a series of commands you can execute anywhere on your Docker host to reproduce the behaviour:
Create a temporary file to be mounted inside a container:
$ touch inode_test
Display the inode of the mounted file inside a container:
$ docker container run --rm -v "$PWD"/inode_test:/inode_test alpine ls -i inode_test
5 inode_test
Try to move the mounted file from within a container:
$ docker container run --rm -v "$PWD"/inode_test:/inode_test alpine mv inode_test inode_test_new
mv: can't rename 'inode_test': Resource busy
From within a container, create a new file and try to overwrite the original mounted file with it:
$ docker container run --rm -v "$PWD"/inode_test:/inode_test alpine sh -c 'touch inode_test_new; ls -i inode_test*; mv inode_test_new inode_test'
5 inode_test
784 inode_test_new
mv: can't rename 'inode_test_new': Resource busy
Try to edit the mounted file in place from within the container:
$ docker container run --rm -v "$PWD"/inode_test:/inode_test alpine sh -c 'echo hello >inode_test'
(it works, in-place writes do not change the inode)
@antoineco is correct. To illustrate the problem further, here are some examples of block operations succeeding on the keystore prior to attempting the mv command.
These commands will be run in an ephemeral Docker container that does not have a running Elasticsearch process. Here's my setup leading into the examples:
# demonstrate that no containers are running
$ docker container ls
CONTAINER ID IMAGE COMMAND
# create a temporary container. copy the volume mounts from a production container that is stopped
$ docker run --rm --name=demo --volumes-from=my-prod-container -it --user 0 elasticsearch:8.5.3 /bin/bash
root@d742001961f2:/usr/share/elasticsearch# cd config
root@d742001961f2:/usr/share/elasticsearch/config#
The examples themselves:
# duplicate the keystore to a temporary file
root@d742001961f2:/usr/share/elasticsearch/config# cp elasticsearch.keystore elasticsearch.keystore.bak
# commands that use block copying to overwrite the original keystore will succeed
root@e63e906af8e0:/usr/share/elasticsearch/config# cp elasticsearch.keystore.bak elasticsearch.keystore
root@e63e906af8e0:/usr/share/elasticsearch/config# dd if=elasticsearch.keystore.bak of=elasticsearch.keystore
1+1 records in
1+1 records out
699 bytes copied, 9.7396e-05 s, 7.2 MB/s
# mv will fail because it doesn't attempt a block copy
root@d742001961f2:/usr/share/elasticsearch/config# mv elasticsearch.keystore.bak elasticsearch.keystore
mv: cannot move 'elasticsearch.keystore.bak' to 'elasticsearch.keystore': Device or resource busy
The solution isn't as simple as asking the Elasticsearch developers to always require a full block copy, though. mv is normally preferred in this scenario to avoid read operations being attempted before the writes complete. By relinking the file in place with one that is already present on the same filesystem, the cutover from the old data to the new is an instant operation. Read operations hit either the old inode or the new inode, and the filesystem doesn't actually free the blocks associated with the old inode until all the running programs release their file handles.
This is a very long way of saying that the developers can't use something other than mv without creating different, potentially much bigger problems. Even if the elasticsearch-keystore command were over-engineered to check whether the old file is currently in use (not always feasible), a read operation could still start in the middle of the block copy, resulting in seeks against a malformed/truncated file.
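For illustration, the update pattern being described is the classic write-then-rename, sketched generically below (this is not the actual Elasticsearch code; $new_keystore_contents is a stand-in for whatever the tool writes):

# 1. write the complete new contents to a temporary file on the same filesystem
$ printf '%s' "$new_keystore_contents" > elasticsearch.keystore.tmp
# 2. atomically swap it into place: readers see either the whole old file or the whole new one, never a partial write
$ mv elasticsearch.keystore.tmp elasticsearch.keystore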
That said, I think some more helpful verbiage in the exception that nudges people toward https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_elasticsearch_keystore_device_or_resource_busy would still help steer users to the actual problem faster.