spark-operator
Kubernetes operator for managing the lifecycle of Apache Spark applications on Kubernetes.
Would anyone like me to add support for the automountServiceAccountToken feature to the Spark operator?
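For context, a hedged sketch of how the field works in plain Kubernetes. The field names below are from the standard core/v1 Pod spec; surfacing them through the SparkApplication CRD's driver/executor specs is presumably what this request proposes, and the pod name and image are illustrative:

```yaml
# Plain Kubernetes: disable automatic mounting of the service account
# token at the pod level. This is a standard core/v1 field.
apiVersion: v1
kind: Pod
metadata:
  name: example          # illustrative name
spec:
  automountServiceAccountToken: false
  containers:
    - name: main
      image: busybox     # illustrative image
```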
It seems that the latest version of spark-operator is not compatible with Kubernetes 1.26, as spark-operator uses some APIs that have been deprecated.
Unable to run "sparkctl create" on Windows with spec.mainApplicationFile and/or spec.deps referencing local files. Sample below:
```yaml
spec:
  mainApplicationFile: "C://mypath\\myfile.py"
  ...
  deps:
    pyFiles:
      - "C://mypath\\mydep.py"
  ...
```
Output from "sparkctl...
When I try to set sparkConf in the SparkApplication manifest, spark.master does not appear to be set when I use the SparkSession. Example from the SparkApplication manifest YAML: sparkConf: spark.master:...
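For reference, sparkConf in a SparkApplication manifest is a plain string-to-string map; a minimal hedged sketch (the application name and the spark.ui.port value are illustrative, not from the original report):

```yaml
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-example      # illustrative name
spec:
  sparkConf:
    # Keys containing dots must be quoted so YAML does not nest them.
    "spark.ui.port": "4045"
```

One likely explanation for the symptom above: the operator itself passes --master to spark-submit based on the cluster it runs in, and spark-submit's --master flag takes precedence over a spark.master entry in the configuration, so setting spark.master via sparkConf is typically overridden rather than honored.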
This is the YAML of my Spark job:
```yaml
kind: SparkApplication
metadata:
  name: operatordc1
  namespace: spark
spec:
  type: Java
  mode: cluster
  image: "xiotxpcdevcr.azurecr.io/spark-custom:release-8.0"
  imagePullPolicy: Always
  imagePullSecrets:
    - mdsp-secret-spark
  mainClass: "org.springframework.boot.loader.JarLauncher"
  mainApplicationFile: "local:///opt/spark/examples/jars/operatordc1.jar"
  ...
```
## Purpose of this PR Close #1959 **Proposed changes:** - `hack/gencerts.sh` will no longer be used to generate certificates; the operator is now responsible for generating the CA certificate and the server certificate...
- [x] ✋ I have searched the open/closed issues and my issue is not listed. #### Please describe your question here I opened the project with GoLand and got some...
- [ ] ✋ I have searched the open/closed issues and my issue is not listed. #### I initiated a test task, but the driver pod showed that the configmap was...
## Purpose of this PR The Python example contains a confusing warning that Python support is experimental. Python support seems stable in current versions, correct? **Proposed changes:** Remove the outdated (?) warning...
### 🛑 Important: Please open an issue to discuss significant work before you start. We appreciate your contributions and don't want your efforts to go to waste! For guidelines on...