k8s-spark-scheduler
Executor pods stuck in scheduling despite sufficient resources
When I submit a batch of Spark jobs, they don't run as expected. Some executor pods get stuck in scheduling even though each node has enough resources to run them. I wonder if there is something I haven't considered. The batch submission looks roughly like the sketch below. P.S. I submit these Spark jobs like the example, and a single job runs fine on its own.
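For reference, this is roughly how the batch is launched. It is only a sketch: the paths, image, jar, master URL, and resource values are placeholders rather than my actual configuration, and scheduler-specific settings are omitted.

```python
"""Rough sketch of how the batch of Spark jobs is submitted.
All paths, images, and sizes below are placeholders."""
import subprocess

SPARK_SUBMIT = "/opt/spark/bin/spark-submit"      # placeholder path
K8S_MASTER = "k8s://https://my-cluster:6443"      # placeholder API server
IMAGE = "my-registry/spark:latest"                # placeholder image

def submit(app_name: str) -> subprocess.Popen:
    """Launch one Spark job in cluster mode; returns immediately."""
    return subprocess.Popen([
        SPARK_SUBMIT,
        "--master", K8S_MASTER,
        "--deploy-mode", "cluster",
        "--name", app_name,
        "--class", "org.apache.spark.examples.SparkPi",         # placeholder app
        "--conf", f"spark.kubernetes.container.image={IMAGE}",
        "--conf", "spark.executor.instances=4",                  # placeholder sizing
        "--conf", "spark.kubernetes.executor.request.cores=2",
        "--conf", "spark.executor.memory=4g",
        "local:///opt/spark/examples/jars/spark-examples.jar",   # placeholder jar
    ])

# All jobs are launched back to back, so their drivers and executors
# reach the scheduler at roughly the same time.
procs = [submit(f"job-{i}") for i in range(4)]
for p in procs:
    p.wait()
```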
@askeySnip can you describe the stuck driver pods and share the scheduling errors?
I encountered a similar issue.
There are two nodes in my cluster and two Spark jobs whose total resource requests exceed the cluster's capacity (i.e. the k8s cluster can't run both jobs concurrently). If I submit the second job after all executors of the first job are running, everything works well. However, some pods hang (see the following screenshot) if I submit the two jobs at the same time. I traced the logs, and it seems the scheduler runs its node predicates (assigns the resources) for both jobs at the same time, so some pods can't get enough resources.


Is this the expected behavior? Can the scheduler be configured to run predicates for the second job only after the first job has been scheduled successfully? Or should we simply NOT submit jobs at the same time?
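For now, the workaround I'm considering is to serialize the submissions on the client side and only submit the next job once the previous job's executors are all running. A minimal sketch, assuming the kubernetes Python client and Spark's default `spark-role=executor` / `spark-app-selector=<app id>` labels on executor pods; the namespace, app id, and executor count are placeholders:

```python
"""Sketch: block until one job's executors are Running before submitting
the next job. Assumes Spark's default executor labels; namespace, app id,
and expected count are placeholders."""
import time
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

def executors_running(namespace: str, app_id: str) -> int:
    """Count Running executor pods belonging to one Spark application."""
    pods = v1.list_namespaced_pod(
        namespace,
        label_selector=f"spark-role=executor,spark-app-selector={app_id}",
    )
    return sum(1 for p in pods.items if p.status.phase == "Running")

def wait_for_executors(namespace: str, app_id: str, expected: int,
                       timeout_s: int = 600) -> None:
    """Poll until `expected` executors are Running, or raise on timeout."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if executors_running(namespace, app_id) >= expected:
            return
        time.sleep(5)
    raise TimeoutError(f"executors of {app_id} not running after {timeout_s}s")

# Placeholder usage: submit job 1 (e.g. via spark-submit), then
#   wait_for_executors("spark", "<app id of job 1>", expected=4)
# before submitting job 2.
```

This avoids the concurrent predicate runs described above, but it obviously serializes the whole batch, so I'd still like to know whether the scheduler itself can handle concurrent submissions.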