volcano
Is it possible to use task topology in Spark client mode?
I have a cluster where I create the driver pod as a task of a Volcano job in client mode. This driver then creates the executors as requested, and I have the following questions: Will these executors be scheduled as Volcano tasks, or will they be scheduled as plain pods by Kubernetes? Also, I want both the driver and the executors it creates to reside in the same AZ, so I was considering using task topology for that. However, I think task topology only works if both the driver and the executors are tasks. Is that correct? That is why I want to know whether, in this scenario, the executors created by the driver in client mode are tasks or not.
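For context, this is a sketch of the kind of setup I mean. It assumes the task-topology plugin is enabled in the scheduler config, and the annotation values and task names (`driver`, `executor`) here are just illustrative placeholders:

```yaml
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: spark-client-job          # hypothetical job name
  annotations:
    # Ask the task-topology plugin to co-locate driver and executor tasks
    volcano.sh/task-topology-affinity: "driver,executor"
spec:
  schedulerName: volcano
  minAvailable: 1
  tasks:
    - name: driver                 # the driver pod, created as a Volcano task
      replicas: 1
      template:
        spec:
          containers:
            - name: spark-driver
              image: my-spark-image   # placeholder image
    # In client mode the executors are NOT declared here: the driver
    # creates them directly through the Kubernetes API, which is exactly
    # why I'm unsure whether they count as tasks for the plugin.
```

The open question is whether executor pods created dynamically by the driver can be associated with this job (and so with the topology constraint), or whether they bypass Volcano entirely.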
cc @Yikun
Is there any update on this?