spark-operator
Operator for managing Spark clusters on Kubernetes and OpenShift.
Hello, the operator was installed in our OpenShift cluster (organization). When the example Spark application (spark-examples_2.11-2.4.5.jar) was submitted with the help of the operator, the submitter pod and driver pod were getting...
Hello, I am unable to install the Spark operator in an OpenShift 4.7 cluster, and the operator image doesn't have a digest (SHA256 fingerprint). The image path shows as a tag image (quay.io/radanalyticsio/spark-operator:1.1.0). It throws an image pull back...
### Description: I deployed the radanalytics/spark-operator on OKD 4.6 (using OpenDataHub, you can find the full ODH manifests we are using here: https://github.com/MaastrichtU-IDS/odh-manifests) From this spark-operator I started a Spark...
Hi, is there any milestone for supporting Spark 3.1.2?
### Description: I am using this Spark operator in OpenShift to create a Spark cluster, which succeeds, and I am able to see all the workers connected to the...
### Description: I'm trying to access Ceph storage located locally on an OpenShift cluster. I'm using spark.hadoop.fs.s3a.path.style.access=true, but when the job is run I get: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:com.amazonaws.AmazonClientException: Unable to execute HTTP...
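Path-style access alone is usually not enough for S3A against an in-cluster Ceph RGW; the S3A endpoint and credentials typically need to be set as well. A minimal sketch of the relevant Hadoop S3A properties, where the endpoint service name and credential placeholders are assumptions, not values from this report:

```
# Hypothetical spark-defaults.conf fragment for S3A against an in-cluster Ceph RGW.
# Endpoint host and credentials are placeholders; adjust to your cluster.
spark.hadoop.fs.s3a.endpoint                http://rook-ceph-rgw.rook-ceph.svc:80
spark.hadoop.fs.s3a.path.style.access       true
spark.hadoop.fs.s3a.access.key              <ACCESS_KEY>
spark.hadoop.fs.s3a.secret.key              <SECRET_KEY>
spark.hadoop.fs.s3a.connection.ssl.enabled  false
```

If the endpoint is left unset, the AWS SDK falls back to the public AWS endpoints, which commonly surfaces as an AmazonClientException like the one quoted above.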
### Problem Description Kubernetes has been deprecating APIs that will be removed and are no longer available in 1.22. Operator projects using [these API](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22) versions will **not** work on Kubernetes...
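For CRDs, the relevant v1.22 removal is `apiextensions.k8s.io/v1beta1`. A minimal sketch of the migration, with illustrative field values that are not taken from this operator's manifests:

```
# Before (served until Kubernetes 1.21, removed in 1.22):
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
---
# After: apiextensions.k8s.io/v1 requires per-version declarations
# and a structural OpenAPI v3 schema.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
```

The same pattern applies to the other removed groups listed in the deprecation guide (e.g. `admissionregistration.k8s.io/v1beta1`, `rbac.authorization.k8s.io/v1beta1`).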
Currently it is not possible to set affinity/anti-affinity/nodeSelector/nodeName on the master and worker pods in a cluster. It would be nice if the spark-operator had a feature to choose the Kubernetes nodes where...
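For reference, these are the standard Kubernetes pod-spec scheduling fields the request refers to; whether and where the spark-operator's CRD would expose them is exactly what the issue asks about. The label keys and values below are illustrative:

```
# Standard pod-level scheduling constraints (sketch):
nodeSelector:
  disktype: ssd
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1
```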
Spark session creation error

## IPYNB Jupyter notebook

```
from pyspark import SparkContext
from pyspark.sql import SparkSession, HiveContext

#config = pyspark.SparkConf().setAll([('spark.executor.memory', '8g'), ('spark.executor.cores', '5'), ('spark.cores.max', '16'), ('spark.driver.memory','8g')])
spark = SparkSession...
```
### Description: Is there an option to add a volume or increase ephemeral storage for the Spark application?

#### Steps to reproduce:

```
apiVersion: radanalytics.io/v1
kind: SparkApplication
metadata:
  name: xxx
  namespace:...
```
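At the plain-Kubernetes level, ephemeral storage is controlled with container resource requests/limits. A sketch of those standard fields, shown only for reference; whether the radanalytics SparkApplication CRD passes them through to the driver/executor pods is the open question in this issue, and the sizes below are arbitrary:

```
# Standard Kubernetes container-level ephemeral-storage settings (sketch):
resources:
  requests:
    ephemeral-storage: "2Gi"
  limits:
    ephemeral-storage: "4Gi"
```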