Spark Operator Roadmap 2024
Roadmap
Creating this roadmap issue to track work items that we will do in the future. If you have any ideas, please leave a comment.
Features
- [x] Pod template support (#2101)
- [x] #2502
- [ ] Improve controller performance
- [x] Spark Connect Support (#1801)
- [ ] Rest API for submitting jobs
- [x] Cert manager support (#1178)
Chores
- [ ] Doc improvement
- [x] Improve test coverage to increase confidence in releases, particularly with e2e tests
Some ideas:
- A new CR to support Spark Connect
- A HTTP API for job submission
- A web UI for visibility into currently running applications
- Deprecate the need for a mutating webhook by moving all functionality into the pod template (see the sketch below this list)
- Controller performance improvements and recommendations for large scale clusters
Chores:
- Improve test coverage to increase confidence in releases, particularly with e2e tests
- Doc improvements
Upgrade the default security posture: remove reliance on user ID 185 (it seems to be connected to the krb5.conf file leveraging domains and realms of institutions that may not need it).
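For the webhook-deprecation idea above, here is a minimal sketch (not the operator's actual code) of the pod-template route: render the customizations the webhook injects today into a template file and hand it to spark-submit through the standard `spark.kubernetes.driver.podTemplateFile` / `spark.kubernetes.executor.podTemplateFile` properties. The label and toleration values are placeholders, and `sigs.k8s.io/yaml` is assumed for serialization.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// writeDriverPodTemplate renders the customizations that the mutating webhook
// injects today (labels, tolerations, volumes, ...) into a pod template file
// that spark-submit can consume directly.
func writeDriverPodTemplate(dir string) (string, error) {
	tmpl := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			// Placeholder label; in practice this would come from the SparkApplication spec.
			Labels: map[string]string{"example.com/team": "data-platform"},
		},
		Spec: corev1.PodSpec{
			Tolerations: []corev1.Toleration{
				{Key: "spark", Operator: corev1.TolerationOpExists, Effect: corev1.TaintEffectNoSchedule},
			},
		},
	}
	data, err := yaml.Marshal(&tmpl)
	if err != nil {
		return "", err
	}
	path := filepath.Join(dir, "driver-pod-template.yaml")
	if err := os.WriteFile(path, data, 0o600); err != nil {
		return "", err
	}
	return path, nil
}

func main() {
	path, err := writeDriverPodTemplate(os.TempDir())
	if err != nil {
		panic(err)
	}
	// The template is handed to spark-submit through the standard Spark conf keys
	// instead of mutating the pods after the fact.
	fmt.Printf("--conf spark.kubernetes.driver.podTemplateFile=%s --conf spark.kubernetes.executor.podTemplateFile=%s\n", path, path)
}
```

The caveat, as discussed further down in this thread, is that Spark overwrites some pod template values during pod building, so not every webhook mutation can be expressed this way.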
@jacobsalway @ChenYi015 I think that "Deprecate the need for a mutating webhook by moving all functionality into the pod template" should be a top priority, especially with the upcoming release of Spark v4
@bnetzi, @vara-bonthu, regarding the point 'referring you to the discussion here, I think we just need to provide in general more options to configure the controller runtime, and that my PR is irrelevant':
Does it mean that 'one queue per app and one goroutine per app' (https://github.com/kubeflow/spark-operator/pull/1990) is not a solution for the performance issue faced?
Is https://github.com/kubeflow/spark-operator/pull/2186 a solution for the same issue?
Do we see an opportunity for performance improvement with the approach that we have tried? (https://github.com/kubeflow/spark-operator/issues/1574#issuecomment-1699668815) Summary of changes:
- Port spark-submit to Golang. This removes the JVM invocation and is therefore faster performance-wise.
- No dependency on Apache Spark (the frequency and quantity of changes to the driver pod are going to be minimal in future releases of Apache Spark).
We are happy to contribute our effort in this context to open source. The port of spark-submit to Golang is well tested in our setup. Please let us know.
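As a rough illustration of the 'port spark-submit to Golang' direction (this is not the code referenced above, which is not shown in this thread), here is a sketch that creates the driver pod directly with client-go instead of forking a JVM. The container name, image, and entrypoint arguments are simplified assumptions; a real port would have to reproduce the full spark.kubernetes.* property translation that spark-submit performs.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// submitDriverPod creates the driver pod directly with client-go instead of
// shelling out to spark-submit, avoiding a JVM start-up per submission.
// Only a handful of fields are shown here.
func submitDriverPod(ctx context.Context, client kubernetes.Interface, namespace, appName, image, mainClass, appJar string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   appName + "-driver",
			Labels: map[string]string{"spark-role": "driver", "spark-app-name": appName},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "spark-kubernetes-driver",
				Image: image,
				// Illustrative arguments only; the real entrypoint contract is more involved.
				Args: []string{"driver", "--class", mainClass, appJar},
			}},
		},
	}
	return client.CoreV1().Pods(namespace).Create(ctx, pod, metav1.CreateOptions{})
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if _, err := submitDriverPod(context.Background(), client, "default",
		"spark-pi", "spark:3.5.3", "org.apache.spark.examples.SparkPi",
		"local:///opt/spark/examples/jars/spark-examples.jar"); err != nil {
		panic(err)
	}
}
```

The trade-off raised below still applies: any change in how spark-submit builds the driver pod would have to be mirrored in such Go code.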
@gangahiremath - I think the two improvements aren't mutually exclusive. Given the testing done by @bnetzi and captured in this document, it seems that the one-mutex-per-queue approach does have performance benefits. I also think that using Go-based instead of Java-based submission can help reduce job submission latency. However, as pointed out by @bnetzi, using Go would require corresponding changes to the Spark operator whenever spark-submit changes and may also introduce functionality gaps. We can probably include both improvements in the roadmap if the performance hit from the JVM is significant enough.
It would be great if other users could share/comment on whether JVM spin-up times were indeed a contributor to job submission latency. Also, has anyone tweaked/optimized the JVM specifically to alleviate this pain point? Thanks.
@c-h-afzal, FYI: see the point 'So the way I see it - work queue per app might no longer be the solution' made by @bnetzi in the thread https://github.com/kubeflow/spark-operator/pull/1990#issuecomment-2412950198.
Some updates: we are still working on v2-compatible performance enhancements; we expect to share our results around mid-December.
As for the deprecation of the webhook, my concern is with changes that cannot be applied via pod templates. Quoting the Spark docs (https://spark.apache.org/docs/3.5.3/running-on-kubernetes.html#pod-template):
It is important to note that Spark is opinionated about certain pod configurations so there are values in the pod template that will always be overwritten by Spark. Therefore, users of this feature should note that specifying the pod template file only lets Spark start with a template pod instead of an empty pod during the pod-building process. For details, see the full list of pod template values that will be overwritten by spark.
For example, I personally think Spark is making a big mistake by preventing users from configuring different memory requests and memory limits; it is a valid configuration for many use cases. With the webhook we are able to override this behavior. I have an example that I intended to push as part of the performance PR (https://github.com/kubeflow/spark-operator/pull/1990/files); you can see the changes I made in patch.go.
In our environment, which has very high memory peaks but low memory use on average, this saved a huge amount of over-allocation as well as the overhead of configuring the memory request efficiently per Spark app.
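To make the memory example concrete, here is only a sketch of the idea (not the actual patch.go change from the PR): a webhook-style mutation that lowers the driver's memory request while keeping the limit that Spark computed, assuming the driver container uses Spark's default name spark-kubernetes-driver.

```go
package patch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// overrideDriverMemoryRequest lowers the memory *request* of the Spark driver
// container while leaving the memory *limit* untouched. Spark itself always
// sets request == limit, so today this kind of change can only be applied by
// a mutating webhook after the pod spec has been built.
func overrideDriverMemoryRequest(pod *corev1.Pod, request resource.Quantity) {
	for i := range pod.Spec.Containers {
		c := &pod.Spec.Containers[i]
		if c.Name != "spark-kubernetes-driver" {
			continue
		}
		if c.Resources.Requests == nil {
			c.Resources.Requests = corev1.ResourceList{}
		}
		// Never request more than the existing limit.
		if limit, ok := c.Resources.Limits[corev1.ResourceMemory]; ok && request.Cmp(limit) > 0 {
			request = limit
		}
		c.Resources.Requests[corev1.ResourceMemory] = request
	}
}
```

Called as overrideDriverMemoryRequest(pod, resource.MustParse("2Gi")), it leaves executor containers untouched and never raises the request above the existing limit.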
Unfortunately, I can't participate in the community meeting since the time slot is impossible for my time zone, so I hope my voice will be heard here.
@vara-bonthu This issue needs to be updated with the features that will be part of release 1.10.
Adding to @jacobsalway 's idea list:
- Support for spark-history-server
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Do you have a periodic release schedule?
Thank you for this great ROADMAP @ChenYi015!
@jacobsalway @ChenYi015 @yuchaoran2011 @vara-bonthu @nabuskey It would be great to convert the Spark Operator 2024/2025 ROADMAP into a https://github.com/kubeflow/spark-operator/blob/master/ROADMAP.md file so it would be easier for users to track the items.
Agree! We will add that doc.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
/lifecycle frozen