jaeger-operator
Support TLS for spark dependencies
The Spark dependencies job uses a Java keystore for certificates. The Docker image allows configuring Java opts with the SSL configuration, i.e. the JAVA_OPTS=-Djavax.net.ssl.… system properties.
The certs can be mounted via volumes and volume mounts, which are part of JaegerCommonSpec. The issue is that these certs have to be imported into a Java keystore/truststore.
https://developers.redhat.com/blog/2017/11/22/dynamically-creating-java-keystores-openshift/ suggests using an init container for the job.
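For illustration, mounting the certificates via the common spec might look roughly like the sketch below. This is only an assumption of how it could be wired up; the secret name es-certs and the mount path /certs are hypothetical, and the certs would still need importing into a keystore afterwards:
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: my-jaeger
spec:
  # volumes/volumeMounts from JaegerCommonSpec
  volumeMounts:
    - name: es-certs        # hypothetical name
      mountPath: /certs     # hypothetical path
      readOnly: true
  volumes:
    - name: es-certs
      secret:
        secretName: es-certs  # hypothetical secret holding the PEM cert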
Todos:
- [ ] allow configuring JAVA_OPTS to point to the keystore/truststore
- [ ] import certs into the keystore/truststore
Dependent issue in spark-dependencies https://github.com/jaegertracing/jaeger-operator/issues/294
@jpkrohling this is not ES specific. The dependency job supports multiple backends.
Hi @pavolloffay are there any updates on this?
There isn't any news. If anybody has free cycles, feel free to take it.
I spent a bit of time digging through things and came across this page: https://hub.helm.sh/charts/jaegertracing/jaeger
The page mentions that in order to have a version of the Spark job working, one would need to get the certificates from the Elasticsearch cluster and do:
$ keytool -import -trustcacerts -keystore trust.store -storepass <some pass> -alias es-root -file <es.pem>
$ kubectl create configmap jaeger-tls --from-file=trust.store --from-file=<es.pem>
And then use the jaeger-tls configmap within the Spark job definition as such:
spec:
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: jaeger-spark
              args:
                - --java.opts=-Djavax.net.ssl.trustStore=/tls/trust.store -Djavax.net.ssl.trustStorePassword=<some pass>
              volumeMounts:
                - name: jaeger-tls
                  mountPath: /tls
                  subPath:
                  readOnly: true
          volumes:
            - name: jaeger-tls
              configMap:
                name: jaeger-tls
...
All of which sounds like it is in accordance with the issue description.
I think it may be a bit better to define jaeger-tls as a secret though, but other than that, is all that more or less correct?
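For instance, creating it as a secret instead would presumably look something like this (just a sketch, reusing the same file and names as above):
$ kubectl create secret generic jaeger-tls --from-file=trust.store --from-file=<es.pem>
and then in the job spec:
          volumes:
            - name: jaeger-tls
              secret:
                secretName: jaeger-tls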
Correct, the certs have to be imported into a Java keystore in the Spark image.
My idea was to implement a script in the Spark image that would do that if the certs are specified as env vars.
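A minimal sketch of what such an entrypoint snippet could look like, assuming a hypothetical ES_TLS_CA env var carrying the PEM certificate (none of these names exist in the image today):
# hypothetical wrapper around the existing entrypoint
if [ -n "$ES_TLS_CA" ]; then
  printf '%s' "$ES_TLS_CA" > /tmp/es-ca.pem
  # import the CA into a throwaway truststore
  keytool -import -trustcacerts -noprompt -alias es-ca \
    -file /tmp/es-ca.pem -keystore /tmp/trust.store -storepass changeit
  export JAVA_OPTS="$JAVA_OPTS -Djavax.net.ssl.trustStore=/tmp/trust.store -Djavax.net.ssl.trustStorePassword=changeit"
fi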
Hey guys, is there any workaround for this atm? I'm trying to follow the thread of TLS-related changes and options. Is there any skip-tls JAVA_OPT or something that can be used temporarily?
To maybe help others (the commands are sketched below):
- add the java options described above to the jaeger CR
- manually create a JKS trust store from the elasticsearch-es-http-certs-public secret
- downscale the operator replicaset to 0
- edit the cronjob container template to mount a volume from the secret
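A rough sketch of those steps; the ca.crt key inside the secret and the jaeger-operator deployment name are assumptions and may differ in your cluster:
$ kubectl get secret elasticsearch-es-http-certs-public -o jsonpath='{.data.ca\.crt}' | base64 -d > es-ca.pem
$ keytool -import -trustcacerts -noprompt -alias es-root -file es-ca.pem -keystore trust.store -storepass changeit
$ kubectl create secret generic jaeger-tls --from-file=trust.store
$ kubectl scale deployment jaeger-operator --replicas=0
After that, the cron job template can be edited to mount the jaeger-tls secret and point JAVA_OPTS at /tls/trust.store as in the earlier snippets.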
This would probably be a question to ask in the repository that holds the code for the spark dependencies processor: https://github.com/jaegertracing/spark-dependencies
Hi, maybe this is a stupid question, but I couldn't run a Spark job with the following configuration:
spark:
  enabled: true
  cmdlineParams:
    java.opts: "-Djavax.net.ssl.trustStore=/tls/trust.store -Djavax.net.ssl.trustStorePassword=changeit"
  extraConfigmapMounts:
    - name: jaeger-tls
      mountPath: /tls
      subPath: ""
      configMap: jaeger-tls
      readOnly: true
Here is the error:
/entrypoint.sh: 37: exec: --java.opts=-Djavax.net.ssl.trustStore=/tls/trust.store -Djavax.net.ssl.trustStorePassword=changeit: not found
Can someone show me what I'm doing wrong? Or maybe there is some workaround to get the Spark job working. Thanks.
@aleksandrovpa I don't have much time to look at this at the moment, but you may find what you need in these two items:
https://github.com/jaegertracing/jaeger-operator/issues/1332 https://github.com/jaegertracing/jaeger-operator/pull/1359
@aleksandrovpa we're just composing the cron job manually in order to bootstrap the trust store through an init container. You can check this out https://github.com/MS3Inc/tavros/blob/main/tests/integration/targets/playbooks/provision_playbook/example.com/platform/jaeger/default/cronjob-spark-dependencies.yaml#L39
Thanks @jorgex1 for your comment, it was very helpful :+1: :1st_place_medal:
Is it currently the only solution to write our own job? I see that the dependencies definition now allows mounting volumes, but I am not sure how to create the Java trustStore without an init container like the one in @jam01's link.
I achieved the desired result (given you have the PEM certificate as a secret) by adding a Java init container to the spark-dependencies pod, which creates a truststore that I can then mount into the Spark container and point the Java opts at:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: jaeger-spark
spec:
  schedule: "30 */12 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          initContainers:
            - name: create-jks-truststore
              image: openjdk:11
              volumeMounts:
                - name: truststore-dir
                  mountPath: "/target"
                - name: es-cert
                  mountPath: "/src"
                  readOnly: true
              command:
                - "/bin/sh"
                - "-c"
                - "rm -rf /target/* && keytool -import -file /src/cert.pem -storetype JKS -keystore /target/truststore.jks -storepass password -noprompt"
          containers:
            - name: jaeger-spark
              image: jaegertracing/spark-dependencies:latest
              env:
                - name: STORAGE
                  value: "elasticsearch"
                - name: ES_NODES
                  value: "https://elasticsearch.default:9200"
                - name: ES_NODES_WAN_ONLY
                  value: "false"
                - name: ES_USERNAME
                  value: user
                - name: ES_PASSWORD
                  value: password
                - name: JAVA_OPTS
                  value: "-Djavax.net.ssl.trustStore=/elasticsearch/truststore.jks -Djavax.net.ssl.trustStorePassword=password"
              volumeMounts:
                - name: truststore-dir
                  mountPath: "/elasticsearch"
                - name: temp-dir
                  mountPath: "/tmp"
          restartPolicy: OnFailure
          volumes:
            - name: truststore-dir
              emptyDir: {}
            - name: temp-dir
              emptyDir: {}
            - name: es-cert
              secret:
                secretName: my_secret_with_elasticsearch_certificate_pem
                items:
                  - key: elasticsearch_certificate
                    path: "cert.pem"
Hope this helps someone.
> Hi, maybe this is a stupid question, but I couldn't run a Spark job with the following configuration: [...] Can someone show me what I'm doing wrong?
It is much simpler: entrypoint.sh from the Docker image is a shell script, so it uses the JAVA_OPTS env variable to pass options to the Java process. You need to pass the JAVA_OPTS environment variable via the extraEnv param in values.yaml, like this:
spark:
  extraEnv:
    - name: "JAVA_OPTS"
      value: "-Djavax.net.ssl.trustStore=/tls/trust.store -Djavax.net.ssl.trustStorePassword=changeit"
The spark-dependencies container has a __cacert_entrypoint.sh script in the / root path which does the required work: it takes any certificates found in the /certificates path, merges them with the system CA chain, and then creates a JKS-compatible store in the JRE cacerts location.
The current entrypoint script for the spark-dependencies container just needs to wrap or embed that script, and the problem will be solved going forward. Then the ability to specify the dependencies in the storage section of the CRD can be fixed/corrected, and if you include the top-level volume mounts defined for loading the TLS certificate for Elasticsearch/OpenSearch, that will clean up the fix and make it consistent.
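If that script were wired into the entrypoint, the cron job side would presumably only need a mount like the following (a sketch only; the secret name is hypothetical and the /certificates convention is taken from the description above):
          containers:
            - name: jaeger-spark
              volumeMounts:
                - name: es-ca            # hypothetical name
                  mountPath: /certificates
                  readOnly: true
          volumes:
            - name: es-ca
              secret:
                secretName: jaeger-tls   # hypothetical secret with the PEM certificate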
The image tag I'm referring to is:
$ docker images --digests ghcr.io/jaegertracing/spark-dependencies/spark-dependencies
REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE
ghcr.io/jaegertracing/spark-dependencies/spark-dependencies latest sha256:a307e1baf5815682f7581f2d0f8734510e511c0ae4c326c8501bee2647f6c3dd 4a8f3ad09d74 5 days ago 442MB