spark-operator
Support setting driver/executor `memory` and `memoryLimit` separately
Spark's Kubernetes executor uses the same value for memory request and memory limit, and the current operator API matches that: although we have both `cores` and `coreLimit`, there is only `memory`.
However, in some cases it can be useful to set a memory request that is lower than the memory limit, as a form of over-subscription: Spark tasks will not always use all of the requested memory, and allowing some overcommit can increase overall cluster memory utilization.
In extreme cases, this could also be seen as a counterpart to disabling memory enforcement on YARN clusters, although Kubernetes would give us the opportunity to tune this much more finely than a simple on/off switch.
My assumption is that the operator could support different request and limit values, even though Spark itself doesn't, by patching the pods through the mutating webhook.
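To illustrate the idea, here is a minimal sketch of the JSON patch such a mutating webhook could apply to an executor pod so that the memory request ends up lower than the memory limit. The paths follow the Kubernetes Pod spec; the helper name and the request/limit values are hypothetical, not part of the operator's actual code.

```python
import json


def make_memory_patch(container_index, request, limit):
    """Build a JSON Patch (RFC 6902) that sets the memory request and
    limit of the given container to different values. Hypothetical
    helper, for illustration only."""
    base = f"/spec/containers/{container_index}/resources"
    return [
        {"op": "replace", "path": f"{base}/requests/memory", "value": request},
        {"op": "replace", "path": f"{base}/limits/memory", "value": limit},
    ]


# Example: request 2Gi but allow the executor to burst up to a 4Gi limit.
patch = make_memory_patch(0, "2Gi", "4Gi")
print(json.dumps(patch))
```

Spark would still size the JVM heap from its own `spark.executor.memory` setting, so only the pod-level request would be relaxed, not what the JVM believes it has.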
+1
Any progress on this issue?
Also interested in any progress here.
+1
+1
+1