
Executor pod scheduling stuck despite sufficient resources

Open askeySnip opened this issue 4 years ago • 2 comments

When I submit a batch of Spark jobs, they don't run as expected: some executor pods get stuck even though every node has enough resources for them to run. I wonder if there is something I haven't considered. P.S. I run these Spark jobs as in the example, and a single job runs fine.

askeySnip avatar Oct 28 '20 13:10 askeySnip

@askeySnip can you describe the stuck driver pods and share the scheduling errors?
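For anyone else hitting this, a sketch of how to collect that information with `kubectl` (pod, namespace, and deployment names below are placeholders, not from this repo):

```shell
# Full pod spec plus the Events section, which is where scheduling
# failures like "0/2 nodes are available: ..." show up.
kubectl describe pod <stuck-executor-pod> -n <spark-namespace>

# Scheduling-related events for the namespace, newest last.
kubectl get events -n <spark-namespace> --sort-by=.lastTimestamp

# Logs from the scheduler extender itself, assuming it runs as a
# deployment in your cluster (deployment name is an assumption).
kubectl logs deployment/spark-scheduler -n <scheduler-namespace>
```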

onursatici avatar Dec 10 '20 11:12 onursatici

I encountered a similar issue.

There are two nodes in my cluster, and two Spark jobs whose total resource requirements exceed the cluster's capacity (i.e. the cluster can't run both jobs concurrently). If I submit the second job after all executors of the first job are running, everything works. However, some pods hang (see the screenshots below) if I submit both jobs at the same time. I traced the logs, and it seems the scheduler runs predicates on the nodes (assigns resources) for both jobs at the same time, so some pods can't get enough resources.

(Screenshots taken 2021-07-10 at 7:57 PM and 8:00 PM)

Is this the expected behavior? Can the scheduler be configured to run predicates for the second job only after the first job has been scheduled successfully? Or should we simply not submit jobs at the same time?
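The interleaved-reservation behavior described above can be sketched with a toy model (plain Python, no Kubernetes API; the job names, sizes, and one-slot-per-executor assumption are made up for illustration): when two concurrently submitted jobs have executors placed round-robin, each ends up partially placed and neither can start, whereas placing jobs one at a time lets the first one run.

```python
def place_pods(capacity, jobs, interleave):
    """Greedily bind one-slot executor pods onto a cluster with
    `capacity` free slots. `jobs` maps job name -> executors required.
    `interleave=True` models two drivers requesting executors at the
    same time; `interleave=False` models sequential submission."""
    placed = {name: 0 for name in jobs}
    free = capacity
    if interleave:
        # Round-robin: one pod from each job per pass, like two jobs
        # being predicated against the nodes simultaneously.
        progress = True
        while progress and free > 0:
            progress = False
            for name, need in jobs.items():
                if placed[name] < need and free > 0:
                    placed[name] += 1
                    free -= 1
                    progress = True
    else:
        # Place jobs one at a time in submission order.
        for name, need in jobs.items():
            take = min(need, free)
            placed[name] += take
            free -= take
    # A job is runnable only once its full executor set is placed.
    runnable = [n for n, need in jobs.items() if placed[n] == need]
    return placed, runnable

jobs = {"job-a": 6, "job-b": 6}  # each needs 6 slots; cluster has 10
print(place_pods(10, jobs, interleave=True))   # both partial, none runnable
print(place_pods(10, jobs, interleave=False))  # job-a runnable, job-b waits
```

Under this model, concurrent submission leaves `job-a` and `job-b` with 5 slots each and zero free, so neither reaches its required 6 — the hang described above. It also suggests why gang-scheduling a whole application's resources atomically (or submitting sequentially) avoids the deadlock.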

chia7712 avatar Jul 10 '21 12:07 chia7712