spark-operator
Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.
Hi all, I am trying a sample Python application that deploys a PVC for every executor. The dynamically created PVC gets mounted to the executor pod using the Spark operator, but processing fails with the error...
We are experiencing the `The node was low on resource: ephemeral-storage. Container spark-kubernetes-driver was using 48136Ki, which exceeds its request of 0.` error and found that the pod was evicted. I have...
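One possible mitigation, offered only as a hedged sketch rather than a confirmed fix: keep Spark's scratch space off the container's writable layer by declaring a `spark-local-dir-` volume through `sparkConf`. The property names come from the Spark on Kubernetes documentation; the application name, image, mount path, and size limit below are placeholders.

```yaml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-scratch-on-emptydir        # placeholder name
spec:
  type: Python
  mode: cluster
  image: "gcr.io/spark-operator/spark-py:v3.1.1"
  mainApplicationFile: "local:///opt/spark/examples/src/main/python/pi.py"
  sparkVersion: "3.1.1"
  sparkConf:
    # Volumes whose name starts with "spark-local-dir-" are used by Spark as
    # local scratch directories instead of the container's writable layer.
    "spark.kubernetes.driver.volumes.emptyDir.spark-local-dir-scratch.mount.path": "/tmp/spark-scratch"
    "spark.kubernetes.driver.volumes.emptyDir.spark-local-dir-scratch.options.sizeLimit": "5Gi"
    "spark.kubernetes.executor.volumes.emptyDir.spark-local-dir-scratch.mount.path": "/tmp/spark-scratch"
    "spark.kubernetes.executor.volumes.emptyDir.spark-local-dir-scratch.options.sizeLimit": "5Gi"
  driver:
    cores: 1
    memory: "512m"
    serviceAccount: spark
  executor:
    instances: 1
    cores: 1
    memory: "512m"
```

A disk-backed emptyDir still counts against the node's ephemeral storage, so for heavy shuffle or spill workloads a PVC-backed `spark-local-dir-` volume is the safer variant.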
To deploy a PVC with every Spark executor, several configuration properties are required. From the Spark documentation, these are the needed configurations to...
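For reference, a hedged sketch of how those Spark properties can be carried in a SparkApplication's `sparkConf` so each executor gets its own dynamically provisioned PVC (`claimName: OnDemand`); the application name, image, storage class, size, and mount path are placeholders:

```yaml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: pyspark-pvc-per-executor         # placeholder name
spec:
  type: Python
  mode: cluster
  image: "gcr.io/spark-operator/spark-py:v3.1.1"
  mainApplicationFile: "local:///opt/spark/examples/src/main/python/pi.py"
  sparkVersion: "3.1.1"
  sparkConf:
    # "OnDemand" tells Spark to create a fresh PVC for every executor pod.
    "spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.claimName": "OnDemand"
    "spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.storageClass": "standard"
    "spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.sizeLimit": "10Gi"
    "spark.kubernetes.executor.volumes.persistentVolumeClaim.data.mount.path": "/data"
    "spark.kubernetes.executor.volumes.persistentVolumeClaim.data.mount.readOnly": "false"
  driver:
    cores: 1
    memory: "512m"
    serviceAccount: spark
  executor:
    instances: 2
    cores: 1
    memory: "512m"
```

Note that dynamic PVC creation generally also requires the driver's service account to be allowed to create and delete PersistentVolumeClaims in the namespace.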
As far as I can see, there is currently no way to define a Spark or Hadoop config property from a Secret. This gets rather critical when it is a property...
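As a hedged workaround sketch: the operator can mount a Secret or inject individual keys as environment variables on the driver and executor pods, and the application reads them at runtime. The Secret name, key, and mount path below are placeholders.

```yaml
# Excerpt of a SparkApplication spec
spec:
  driver:
    # Mount the whole Secret as files under /mnt/secrets.
    secrets:
      - name: spark-credentials          # placeholder Secret name
        path: /mnt/secrets
        secretType: Generic
    # Or expose individual keys as environment variables.
    envSecretKeyRefs:
      S3_ACCESS_KEY:
        name: spark-credentials
        key: access-key
```

Neither mechanism substitutes the value into `sparkConf` or `hadoopConf` itself, which is presumably the gap this issue is about.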
Hi all. I installed the operator and Volcano. Both were installed with Helm 3. The installation was successful. Right now I'm trying to deploy a Spark application taken from your...
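In case it helps narrow things down, a hedged sketch of the pieces usually involved: the operator chart needs batch scheduler support turned on at install time (recent chart versions expose a `batchScheduler.enable` value, if I recall correctly), and the application then opts in via `spec.batchScheduler`. The name and image below are placeholders.

```yaml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-pi-volcano                 # placeholder name
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: "gcr.io/spark-operator/spark:v3.1.1"
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar"
  sparkVersion: "3.1.1"
  # Hand pod scheduling for this application over to Volcano.
  batchScheduler: volcano
  driver:
    cores: 1
    memory: "512m"
    serviceAccount: spark
  executor:
    instances: 2
    cores: 1
    memory: "512m"
```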
Background: issue [#164]. Reasons: 1. The `Spark history server` has no stable chart available since the helm/charts repo was archived. 2. As a user of `spark-on-k8s-operator`, there is a very high probability...
I have installed the operator. When I run the example spark-py-pi.yaml, I find that the driver can be launched and is running, but the executor pod can't be found. The driver log shows: `++ id -u...`
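One frequent cause, offered as a guess rather than a diagnosis: the driver creates the executor pods itself, so its service account needs RBAC permission to create and delete pods in the namespace. A minimal excerpt with a placeholder service account name:

```yaml
# Excerpt of a SparkApplication spec
spec:
  driver:
    serviceAccount: spark    # placeholder; must be bound to a Role allowing pod create/delete
```

The full driver log (`kubectl logs <driver-pod>`) and pod events (`kubectl describe pod <driver-pod>`) usually surface a `Forbidden` error if RBAC is the culprit.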
https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/tree/master/pkg/apis/sparkoperator.k8s.io The group is reserved for K8s core CRDs, and with this KEP https://github.com/deads2k/enhancements/blob/7d8375fec4b9a3b48aad39dbc2cf4059e6ec67d6/keps/sig-api-machinery/20190612-crd-group-protection.md it requires approval to install such CRDs in the cluster. Thoughts on what it will take...
This functional requirement is very important. For example, if a cluster has both CPU nodes and GPU nodes, node-level scheduling is required!
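A hedged sketch of what per-role node scheduling can look like with node selectors on the driver and executor specs (the label keys and values are placeholders for whatever labels your nodes actually carry):

```yaml
# Excerpt of a SparkApplication spec
spec:
  driver:
    nodeSelector:
      nodepool: cpu          # placeholder label on CPU nodes
  executor:
    nodeSelector:
      nodepool: gpu          # placeholder label on GPU nodes
```

Spark also has `spark.kubernetes.node.selector.*` properties that can be set through `sparkConf`, though those apply to the driver and executors alike; richer rules (affinity, tolerations) are a separate feature.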
Due to the way my infrastructure is set up, I need to be able to run the Spark UI in [headless mode](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services). This requires being able to configure the UI...
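For context, a headless Service in Kubernetes is simply one with `clusterIP: None`, so DNS resolves to the pod IPs directly rather than to a virtual IP. A minimal sketch of what a headless UI Service targeting the driver could look like (the name and port are placeholders; `spark-role: driver` is the label Spark puts on driver pods):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-ui-headless          # placeholder name
spec:
  clusterIP: None                   # makes the Service headless
  selector:
    spark-role: driver              # in practice, add an app-specific label too
  ports:
    - name: spark-ui
      port: 4040                    # default Spark UI port
      targetPort: 4040
```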