
spec.driver.configMaps does not work


Hi @liyinan926, I tried to mount a ConfigMap into my SparkApplication following the user guide (https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#mounting-configmaps), but it does not work. I have enabled the webhook. Here is my sparkapplication.yaml:

```yaml
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  annotations:
    kubesphere.io/creator: admin
    meta.helm.sh/release-name: spark-test-apsi
    meta.helm.sh/release-namespace: psi-project
  labels:
    app.kubernetes.io/instance: spark-test-apsi
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version: "1.1"
    app.kubesphere.io/instance: spark-test-apsi
    helm.sh/chart: spark-datahandler-0.4.3
  name: spark-test-apsi
  namespace: psi-project
spec:
  arguments:
  - '{"input":{"type":"file","fileSystem":"s3a://mgy","path":"","filename":"60w_3id_AtypeSM3.csv"},"output":{"type":"file","fileSystem":"s3a://mgy","path":"/output/20220825213600"},"shardingKey":"idcard_SM3","shardingNum":16,"preprocess":true}'
  driver:
    # the ConfigMap mount that does not take effect (see note after the YAML):
    configMaps:
    - name: apsi-config
      path: /opt/spark/work-dir
    cores: 1
    env:
    - name: SPARK_LOCAL_DIRS
      value: /export
    memory: 1g
    memoryOverhead: 1g
    serviceAccount: spark-datahandler
    volumeMounts:
    - mountPath: /export
      name: spark-local-dir-tmp
  dynamicAllocation:
    enabled: true
    initialExecutors: 0
    maxExecutors: 4
    minExecutors: 0
  executor:
    cores: 2
    env:
    - name: SPARK_LOCAL_DIRS
      value: /export
    memory: 4g
    memoryOverhead: 1g
    volumeMounts:
    - mountPath: /export
      name: spark-local-dir-tmp
  hadoopConf:
    fs.ftp.data.connection.mode: PASSIVE_LOCAL_DATA_CONNECTION_MODE
    mapreduce.fileoutputcommitter.marksuccessfuljobs: "false"
  image: spark-datahandler:1.1.0
  imagePullPolicy: IfNotPresent
  mainApplicationFile: local:///opt/spark/work-dir/spark-datahandler.jar
  mainClass: cn.tongdun.sparkdatahandler.Sharding
  memoryOverheadFactor: "0.2"
  mode: cluster
  restartPolicy:
    type: Never
  sparkConf:
    spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version: "2"
    spark.hadoop.mapreduce.fileoutputcommitter.cleanup-failures.ignored: "true"
    spark.hadoop.parquet.enable.summary-metadata: "false"
    spark.speculation: "false"
    spark.sql.hive.metastorePartitionPruning: "true"
    spark.sql.parquet.filterPushdown: "true"
    spark.sql.parquet.mergeSchema: "false"
    spark.sql.parquet.output.committer.class: org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
    spark.sql.sources.commitProtocolClass: org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
  sparkVersion: 3.1.1
  timeToLiveSeconds: 86400
  type: Java
  volumes:
  - hostPath:
      path: /data/spark-local-dir
    name: spark-local-dir-tmp
  # the same ConfigMap is also declared as a plain volume, but it is not
  # referenced by any volumeMounts above
  - configMap:
      items:
      - key: apsi-config.json
        path: apsi-config.json
      name: apsi-config
    name: apsi-config
```
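For reference, with the webhook enabled, the mounting-configmaps section of the user guide implies the operator's mutating webhook should rewrite the driver pod to carry the ConfigMap as a volume plus a volume mount, roughly as sketched below (the `apsi-config-vol` volume name and the `spark-kubernetes-driver` container name are my assumptions, not verified output):

```yaml
# Sketch of the expected (but not observed) webhook injection into the
# driver pod spec. The volume name assumes an "<configMap name>-vol"
# naming convention; the container name is also an assumption.
volumes:
- name: apsi-config-vol
  configMap:
    name: apsi-config
containers:
- name: spark-kubernetes-driver   # assumed driver container name
  volumeMounts:
  - name: apsi-config-vol
    mountPath: /opt/spark/work-dir
```

Since spec.volumes already declares apsi-config, mounting it explicitly through spec.driver.volumeMounts (also webhook-dependent per the user guide) may be worth trying. Note that a ConfigMap mounted at /opt/spark/work-dir shadows the directory's existing contents, which would hide the spark-datahandler.jar referenced by mainApplicationFile, so a different path or a subPath mount may be needed.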

mgyboom commented on Aug 25, 2022