
Example fails with error /opt/entrypoint.sh: line 40: /tmp/java_opts.txt: Permission denied

Open · mgorbov opened this issue 4 years ago · 8 comments

Example: https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/examples/spark-pi.yaml
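
The part of that example that is likely relevant here is its volume section, which mounts a hostPath volume over /tmp in the driver and executor (excerpted below as a sketch; the linked file is authoritative):

# Sketch of the volume-related fields (under spec:) of examples/spark-pi.yaml;
# the hostPath volume is mounted at /tmp in the driver and executor pods.
  volumes:
    - name: "test-volume"
      hostPath:
        path: "/tmp"
        type: Directory
  driver:
    serviceAccount: spark
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
  executor:
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"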

Operator values

operatorVersion: v1beta2-1.2.1-3.0.0
imagePullSecrets:
  - name: harbor-credentials
rbac:
  create: true
serviceAccounts:
  spark:
    name: spark
controllerThread: 2
enableWebhook: true
webhookPort: 443
nodeSelector:
  destiny: spark
enableMetrics: true

Driver logs

++ id -u
+ myuid=185
++ id -g
+ mygid=0
+ set +e
++ getent passwd 185
+ uidentry=
+ set -e
+ '[' -z '' ']'
+ '[' -w /etc/passwd ']'
+ echo '185:x:185:0:anonymous uid:/opt/spark:/bin/false'
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
/opt/entrypoint.sh: line 40: /tmp/java_opts.txt: Permission denied

mgorbov avatar Nov 20 '20 08:11 mgorbov

Seems related to https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/issues/1056.
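
Both issues appear to involve a volume being mounted over /tmp: the spark-pi example mounts a hostPath at /tmp, which hides the image's writable /tmp, so the non-root spark user (uid 185 in the log above) cannot create /tmp/java_opts.txt when entrypoint.sh runs. As an untested workaround sketch (not an official fix), you can drop the /tmp volumeMounts from the example entirely, or switch the volume to an emptyDir, which the kubelet typically creates world-writable:

# Workaround sketch: replace the hostPath volume from spark-pi.yaml with an
# emptyDir so /tmp stays writable for the non-root spark uid (fields under spec:).
  volumes:
    - name: "test-volume"
      emptyDir: {}
  driver:
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
  executor:
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"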

liyinan926 avatar Dec 08 '20 06:12 liyinan926

Hi, I'm getting the same error message even though I haven't used any volume mounts to /tmp. A Spark application submitted with kubectl completes successfully when run in the default namespace, but fails with the error below when run in a non-default namespace. I do have an rbac.yml (ServiceAccount, ClusterRole, and ClusterRoleBinding) defined for the non-default namespace.
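
For reference, the rbac.yml is roughly along these lines (resource names and the namespace below are placeholders):

# Rough shape of the rbac.yml (ServiceAccount, ClusterRole, ClusterRoleBinding);
# names and namespace are placeholders, not the actual manifest.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spark
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: spark-clusterrole
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: spark-clusterrolebinding
subjects:
  - kind: ServiceAccount
    name: spark
    namespace: my-namespace
roleRef:
  kind: ClusterRole
  name: spark-clusterrole
  apiGroup: rbac.authorization.k8s.io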

I've pasted the error message below. I'd appreciate your advice on how to overcome it.

++ id -u
+ myuid=0
++ id -g
+ mygid=0
+ set +e
++ getent passwd 0
+ uidentry=root:x:0:0:root:/root:/bin/bash
+ set -e
+ '[' -z root:x:0:0:root:/root:/bin/bash ']'
+ SPARK_CLASSPATH=':/opt/spark/jars/*'
+ env
+ grep SPARK_JAVA_OPT_
+ sort -t_ -k4 -n
+ sed 's/[^=]*=\(.*\)/\1/g'
/opt/entrypoint.sh: line 40: /tmp/java_opts.txt: Permission denied

arun990 avatar Jun 19 '22 12:06 arun990

Hi, can someone provide an update on how to overcome this permission denied issue?

arun990 avatar Jun 20 '22 13:06 arun990

You can try the following lines if you are building your own spark image:

USER root
RUN echo "" > /tmp/java_opts.txt
RUN chmod 777 /tmp/java_opts.txt
RUN chown root:root /tmp/java_opts.txt

tanguanhong89 avatar Oct 11 '22 16:10 tanguanhong89

I ran into the same problem when I set webhook.enable=true, and worked around it by modifying Spark's official Dockerfile as follows:

Comment out these two lines:

#ARG spark_uid=185
#USER ${spark_uid}

chq3272991 avatar Mar 16 '23 08:03 chq3272991

Hi, any solution here?

khaledosman1 avatar Aug 15 '23 12:08 khaledosman1