Deal with internally created ConfigMap mounted onto SPARK_CONF_DIR

liyinan926 opened this issue 7 years ago • 46 comments

PR apache/spark#20669 introduced a ConfigMap carrying Spark configuration properties in a file for the driver. The environment variable SPARK_CONF_DIR is set to point to the ConfigMap's mount path /opt/spark/conf. This conflicts with what spec.sparkConfigMap is designed to do. We need to find a solution that works with the internally created and mounted ConfigMap.
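For illustration, a minimal sketch of the two competing mounts on the driver pod (all names here are illustrative; the generated ConfigMap name follows the per-application pattern shown later in this thread, and my-spark-conf stands in for a user ConfigMap referenced by spec.sparkConfigMap):

volumes:
  - name: spark-conf-volume              # created internally by spark-submit (PR apache/spark#20669)
    configMap:
      name: spark-pi-1234567890-driver-conf-map
  - name: spark-configmap-volume         # created by the operator for spec.sparkConfigMap
    configMap:
      name: my-spark-conf
containers:
  - name: spark-kubernetes-driver
    env:
      - name: SPARK_CONF_DIR             # both mechanisms want to own this directory
        value: /opt/spark/conf
    volumeMounts:
      - name: spark-conf-volume
        mountPath: /opt/spark/conf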

liyinan926 avatar Jul 16 '18 21:07 liyinan926

@liyinan926 I was reading up on this topic; it seems one approach would be to leverage an init container that has access to both conf dirs, copying/merging files to the final destination (SPARK_CONF_DIR).

Does that sound like a viable solution? Given that Spark recently removed support for init containers, is this something we could accomplish with the webhook?

If you're not in a hurry, I'd like to try this one out (maybe after spark 2.4.0 comes out?) as a ramp-up issue.

aditanase avatar Sep 24 '18 08:09 aditanase

@aditanase sorry for not responding sooner. I think that's a viable approach. And yes, the admission webhook can be used to inject an init-container to do that. Thanks for being willing to take this on. Looking forward to your implementation!

liyinan926 avatar Oct 03 '18 22:10 liyinan926

No worries. What are your thoughts on https://github.com/apache/spark/pull/22146? Together with support for client mode (https://github.com/apache/spark/commit/571a6f0574e50e53cea403624ec3795cd03aa204#diff-b5527f236b253e0d9f5db5164bdb43e9), it will be quite disruptive for this project. At best, it might negate the need for the webhook. At worst, it might make things like this issue very hard to implement. I'm planning to spend some time on that PR this week; you should probably check it out if you haven't already.

aditanase avatar Oct 08 '18 20:10 aditanase

The pod template support will make the webhook obsolete for users on Spark 3.0 (assuming that's the version the PR lands in). For users on Spark 2.4 and earlier, the webhook is still needed. So I wouldn't say it's disruptive, but it definitely requires some changes to the operator to make use of it.

liyinan926 avatar Oct 08 '18 20:10 liyinan926

@aditanase given the available timeline, I don't see it as disruptive - if the webhook won't be needed for some users, so much the better. But to support earlier versions, the operator would still need the webhook.

mrow4a avatar Oct 08 '18 21:10 mrow4a

Hey @aditanase @liyinan926 @mrow4a, wondering what the plan is to resolve this? One use case is to provide hive-site.xml in /opt/spark/conf so we can configure Hive behavior; right now it seems I cannot achieve that. Would love to hear your inputs.

wweic avatar Jan 12 '20 23:01 wweic

The approach proposed above, using an init-container to copy the files in sparkConfigMap to $SPARK_CONF_DIR of the main Spark container, seems reasonable. This can be set up automatically by the operator.

liyinan926 avatar Jan 13 '20 02:01 liyinan926

Thanks @liyinan926 ! I'll dig a bit and see if I can help.

wweic avatar Jan 13 '20 05:01 wweic

I set up an init-container and mounted both the Spark-generated volume (based on a ConfigMap) and the operator-generated volume (based on a ConfigMap) to it:

kind: Pod
apiVersion: v1
spec:
  volumes:
    - name: spark-local-dir-1
      emptyDir: {}
    - name: spark-conf-volume
      configMap:
        name: spark-pi3-1580169127969-driver-conf-map
        defaultMode: 420
    - name: spark-configmap-volume
      configMap:
        name: wweic-test-xml
        defaultMode: 420
  initContainers:
    - name: init-spark-conf-dir
      image: 'wweic-spark-operator:latest'
      command:
        - /usr/bin/setup-spark-configmap.sh
      args:
        - /opt/spark/src-conf
        - /opt/spark/conf
      resources: {}
      volumeMounts:
        - name: spark-configmap-volume
          mountPath: /opt/spark/src-conf
        - name: spark-conf-volume
          mountPath: /opt/spark/conf
      imagePullPolicy: IfNotPresent

Then I run cp to copy from the operator config directory to the Spark config directory:


#!/usr/bin/env bash
set -e
SCRIPT=`basename ${BASH_SOURCE[0]}`

echo "Setting up Spark Configmap for App"
SRC=$1
DST=$2
echo "Copy from ${SRC} to ${DST}"

echo "List src"
ls "${SRC}"

echo "List dst"
ls "${DST}"

touch "${SRC}"/helloworld
echo "List src"
ls "${SRC}"

cp "${SRC}"/* "${DST}"

It throws an error:

cp: cannot create regular file in  '/opt/spark/conf/test.xml': Read-only file system

This is by design in Kubernetes (https://github.com/kubernetes/kubernetes/issues/62099, https://github.com/kubernetes/kubernetes/pull/58720/files). A volume created from a ConfigMap is read-only, so continuing in this direction won't work.

Another idea is to patch the Spark-generated ConfigMap with our ConfigMap (I'll need to figure out how), or to use subPath to mount each file from both the Spark and the user ConfigMaps. What do you think? @liyinan926

wweic avatar Jan 28 '20 00:01 wweic

@liyinan926 To follow up on the solution, I propose we patch the driver/executor pods to mount the files defined in sparkConfigMap in SparkApplicationSpec or ScheduledSparkApplicationSpec. Since both the Spark-generated ConfigMap and the operator-generated ConfigMap will be mounted to /opt/spark/conf simultaneously, we can use subPath to work around the volume mount limitation. To select which keys in sparkConfigMap to mount, we need to add a new field, sparkConfigMapKeys, specifying which keys are available in the ConfigMap; patch.go can then add a volume mount for each key to the pod. Does this sound good? I can send a PR for review. A hypothetical spec sketch follows below.
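A hypothetical sketch of this proposal (sparkConfigMapKeys does not exist in the CRD; the field name and the application name below are only illustrative of the idea):

apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi
spec:
  sparkConfigMap: wweic-test-xml       # user-provided ConfigMap (same one used in the pod spec above)
  sparkConfigMapKeys:                  # proposed new field: keys to expose under /opt/spark/conf
    - hive-site.xml

patch.go would then add, for each listed key, a volume mount with mountPath /opt/spark/conf/<key> and subPath <key>, alongside the Spark-generated spark.properties mount.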

wweic avatar Feb 08 '20 09:02 wweic

@wweic how about patching the internally created ConfigMap to append data stored in the user-specified ConfigMap (defined in sparkConfigMap)? We can easily find the internally created ConfigMap because it has an ownerReference pointing to the driver pod.
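Conceptually, after such a patch the internally created ConfigMap might look like this (a sketch; the ownerReference fields shown and the data values are illustrative, not taken from a real cluster):

apiVersion: v1
kind: ConfigMap
metadata:
  name: spark-pi3-1580169127969-driver-conf-map   # internally created by spark-submit
  ownerReferences:
    - apiVersion: v1
      kind: Pod
      name: spark-pi3-driver                      # the driver pod that owns this ConfigMap
data:
  spark.properties: |                              # written by spark-submit
    spark.master=k8s://https://kubernetes.default.svc
  hive-site.xml: |                                 # appended from the ConfigMap named in sparkConfigMap
    <configuration>...</configuration>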

liyinan926 avatar Feb 10 '20 22:02 liyinan926

@liyinan926 Thanks for the reply! I'm open to this option as well. Let me double-check whether we have access to the full ConfigMap content in patch.go.

wweic avatar Feb 12 '20 03:02 wweic

@liyinan926 I looked at https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/3df703098970fbf7326ed4296470ea4c3688dec8/pkg/webhook/patch.go#L307-L326. We can patch the ConfigMap here.

But we need to call the apiserver to get the actual specs of the two ConfigMaps, for the following reasons:

  1. Spark-generated ConfigMap. We need the full spec to patch it with additional key-value pairs from the user-provided ConfigMap. We currently only have its name in the pod; when I printed the pod, only the ConfigMap name was there: &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:spark-pi2-1581507753326-driver-conf-map,}
  2. User-provided ConfigMap. We currently only have the name in app.Spec.SparkConfigMap. We need to know the keys in that ConfigMap to merge it with the Spark-generated ConfigMap.

I looked around in the codebase, and it seems we don't have a client object for the apiserver yet? Do you think we should add one? Alternatively, for the subPath approach, we don't need to retrieve the actual ConfigMap specs; we just need to patch VolumeMounts. Here is a quick patch I tried; I now have spark.properties and hive-site.xml in /opt/spark/conf.

Appreciate your advice to move forward.

wweic avatar Feb 12 '20 11:02 wweic

With the subPath approach, you are basically changing the mount path of the internal ConfigMap to be at a subPath named after the properties file name, right?

liyinan926 avatar Feb 12 '20 20:02 liyinan926

@liyinan926 Yes. Currently, when we mount a ConfigMap to a directory, every key in the ConfigMap automatically becomes a file in the directory, named after the key. Using subPath, we have to mount each key in the ConfigMap manually.

Example subPath spec:

volumeMounts:
  - name: spark-local-dir-1
    mountPath: /var/data/spark-2ce9a3df-c6c0-4b51-ae50-7c1ead092c44
  - name: spark-conf-volume
    mountPath: /opt/spark/conf/spark.properties
    subPath: spark.properties
  - name: spark-configmap-volume
    readOnly: true
    mountPath: /opt/spark/conf/hive-site.xml
    subPath: hive-site.xml

wweic avatar Feb 13 '20 02:02 wweic

@liyinan926 What do you think about the subPath approach? It is a lightweight solution to the problem, but do let me know if there are any design considerations I missed.

wweic avatar Feb 17 '20 23:02 wweic

Hi, from the user guide, SPARK_CONF_DIR is set to "/opt/spark/conf" by default and is changed to "/etc/spark/conf" when we specify spec.sparkConfigMap. Meanwhile, "--properties-file /opt/spark/conf/spark.properties" is added automatically. So it seems the internal Spark configs and the ones under /etc/spark/conf will both be applied, but the former may be overridden if there are duplicate parameters. This is also what I observed in my experiments; am I understanding correctly? Thanks! A sketch of how I picture the resulting driver container follows below.
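If that reading is right, the resulting driver container would look roughly like this (a sketch pieced together from the description above, not copied from a real pod):

containers:
  - name: spark-kubernetes-driver
    args:
      - driver
      - --properties-file
      - /opt/spark/conf/spark.properties   # added automatically by spark-submit
    env:
      - name: SPARK_CONF_DIR
        value: /etc/spark/conf             # set when spec.sparkConfigMap is specified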

Bole0331 avatar Apr 19 '20 23:04 Bole0331

We are running into an issue that may potentially be solved by the proposed subPath approach. We need a custom hive-site.xml in the config path to connect to the Glue Data Catalog.

Is this issue being worked on actively or planned to be resolved soon?

jaesong-mongo avatar Aug 05 '20 21:08 jaesong-mongo

@jaesong-mongo I have a quick patch here; pending @liyinan926's comment, I can send a more detailed RFC about it.

wweic avatar Aug 17 '20 03:08 wweic

Would love to know the status of this issue.

We build Spark docker images from scratch for internal use at my company and include an /etc/spark/conf/spark-defaults.conf file in the image to define company-wide defaults. Unfortunately, the mounted ConfigMap wipes out the directory, including the spark-defaults.conf file. Has this issue been resolved by upgrading to Spark 3.0.0 in the latest release?

dfarr avatar Sep 24 '20 16:09 dfarr

Hi all,

I would like to know the status of this. My use case is the same as #786, where I want my Spark application to talk to the AWS Glue metastore. Can someone please guide me on whether we can set the hive-site properties using "SparkConf" or any other way?

batCoder95 avatar Sep 29 '20 17:09 batCoder95

@dfarr @batCoder95 We have a simple implementation here that we use internally, based on the changes @wweic suggested, iterating over the configmap keys. We didn't add a PR as we weren't sure it met proper code style / architecture, but it seems to work well for our use case. I'll be happy to PR this if it can help. You can try the image we built here: https://hub.docker.com/r/bbenzikry/spark-eks-operator

bbenzikry avatar Oct 02 '20 19:10 bbenzikry

@batCoder95 Specifically for Glue, we use something like this:

apiVersion: v1
data:
  hive-site.xml: |
    <configuration>
        <property>
            <name>hive.imetastoreclient.factory.class</name>
            <value>com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory</value>
        </property>
        <property>
          <name>aws.region</name>
          <value>REGION</value>
        </property>
    </configuration>
kind: ConfigMap
metadata:
  namespace: JOB_NAMESPACE
  name: spark-custom-config-map
---
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: "whatever"
  namespace: JOB_NAMESPACE
spec:
  sparkConfigMap: spark-custom-config-map
  ...

bbenzikry avatar Oct 02 '20 19:10 bbenzikry

Hi @bbenzikry,

This is really really helpful of you. I'll try out the sample implementation that you have mentioned immediately and let you know if I face any issues :)

Thanks a ton for this guidance :)

batCoder95 avatar Oct 03 '20 14:10 batCoder95

@dfarr @batCoder95 We have a simple implementation here that we use internally, based on the changes @wweic suggested, iterating over the configmap keys. We didn't add a PR as we weren't sure it met proper code style / architecture, but it seems to work well for our use case. I'll be happy to PR this if it can help. You can try the image we built here: https://hub.docker.com/r/bbenzikry/spark-eks-operator

@bbenzikry I deployed this image, but it doesn't support Kerberos authentication. Is there any solution? Thanks!

joanjiao2016 avatar Jan 13 '21 14:01 joanjiao2016

Hi @joanjiao2016, the example image is based on the operator version available at that time - it hasn't been updated with any upstream changes (specifically for Kerberos, I saw there's still a PR open, so I really don't know the status).

@liyinan926 As this is getting some traction, do you want me to open a PR so it can be available upstream? Unfortunately, I have limited time to iterate on this, but the latest rebase I did is available here: https://github.com/bbenzikry/spark-on-k8s-operator/tree/hive-subpath-rebased - I'd appreciate you taking a look to see if it's suitable for a PR. Thanks

bbenzikry avatar Jan 13 '21 15:01 bbenzikry

Hi all, Is there any way to solve this problem? MountVolume.SetUp failed for volume "spark-conf-volume" : configmap "spark-45545489ca1f20-driver-conf-map" not found

sanazba avatar May 20 '21 13:05 sanazba

For what it's worth, we've open sourced a fully contained build and docker image for Spark 3.1.1 (with the kubernetes deps), Hadoop 3.2.0, Hive 2.3.7, and this glue client (largely building on @bbenzikry's work): https://github.com/viaduct-ai/docker-spark-k8s-aws

jpugliesi avatar May 26 '21 18:05 jpugliesi

Hi All,

I have been trying to make the Spark operator work with the Glue metastore for the last 2 days without success.

I am somewhat new to this domain. Can someone please provide more detailed steps for any solution or workaround to this problem?

I tried @jpugliesi's image for my SparkApplication, but my jobs are still failing with a MetaException:

Caused by: MetaException(message:Version information not found in metastore. )
	at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7810)
	at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7788)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
	at com.sun.proxy.$Proxy47.verifySchema(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:595)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:588)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:655)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:431)

nimahajan avatar Jul 19 '21 17:07 nimahajan

For what it's worth, we've open sourced a fully contained build and docker image for Spark 3.1.1 (with the kubernetes deps), Hadoop 3.2.0, Hive 2.3.7, and this glue client (largely building on @bbenzikry's work): https://github.com/viaduct-ai/docker-spark-k8s-aws

Hi @jpugliesi - just wanted to say I love this. I changed our own docker builds to use a slightly modified version of your solution, and it has been a great experience. I'll be changing my spark/eks repo to reflect this as my main method. Great job.

bbenzikry avatar Aug 18 '21 00:08 bbenzikry