
[Bug] Autoscaler sidecar crashes, bringing down head pod, if request exceeds max pod replicas

Open HarryCaveMan opened this issue 1 year ago • 5 comments

Search before asking

  • [X] I searched the issues and found no similar issues.

KubeRay Component

ray-operator

What happened + What you expected to happen

What Happened

The Ray cluster has worker max replicas set to 10, and each worker pod takes up an entire GPU instance with 1 GPU. Submit two jobs, where each one:

  • Creates a placement group with bundles [{"CPU":1,"GPU":1}]*10 and strategy STRICT_SPREAD, then calls pg.wait(timeout_seconds=600) before calling map_batches
  • Calls Dataset.map_batches with num_gpus set to 1 and the placement group scheduling strategy (a minimal sketch of this pattern follows the list)

Across the two jobs this results in 20 GPU nodes being requested. The autoscaler container fails and the entire head pod crashes, bringing down the whole cluster.
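
A minimal sketch of the call pattern described above, with placeholder data and transform (not the original job code), assuming one GPU per worker node:

import ray
from ray.util.placement_group import placement_group
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

# Ten STRICT_SPREAD bundles, one whole-GPU node each.
pg = placement_group([{"CPU": 1, "GPU": 1}] * 10, strategy="STRICT_SPREAD")
pg.wait(timeout_seconds=600)  # block until the placement group is scheduled (or time out)

ds = ray.data.from_items([{"col1": i} for i in range(1000)])  # placeholder dataset
ds.map_batches(
    lambda batch: batch,  # placeholder transform
    num_gpus=1,
    scheduling_strategy=PlacementGroupSchedulingStrategy(placement_group=pg),
).materialize()  # trigger execution so the GPU resources are actually requested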

What I expect to happen

The autoscaler fails to provision the requested resources but does not crash, so the placement group continues waiting until the resources become available on the cluster.
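
A hedged way to observe the expected behavior on a cluster that cannot satisfy the request (the bundle count below is illustrative; placement_group_table is the standard Ray utility for inspecting placement group state): the group should simply stay PENDING while the head pod stays up.

import time
from ray.util.placement_group import placement_group, placement_group_table

# Ask for more whole-GPU nodes than maxReplicas allows (illustrative count).
pg = placement_group([{"CPU": 1, "GPU": 1}] * 3, strategy="STRICT_SPREAD")

for _ in range(10):
    state = placement_group_table(pg)["state"]
    print("placement group state:", state)  # expected to remain "PENDING", not crash the autoscaler
    time.sleep(30)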

Reproduction script

Set up a KubeRay cluster with the worker group's maxReplicas=2, where each node has only one available GPU.

Run the following:

import ray
from ray.util.placement_group import (
    placement_group,
    remove_placement_group,
)
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy
# concurrency only used to simulate two jobs being submitted at once asynchronously
from concurrent.futures import ThreadPoolExecutor, wait


pool = ThreadPoolExecutor(max_workers=2)


ds1 = ray.data.from_items(
    [{"col1": i, "col2": i * 2} for i in range(1000)]
)

ds2 = ray.data.from_items(
    [{"col1": i, "col2": i * 3} for i in range(1000)]
)

def run_transform(batch):
    batch['col2'] *= 2
    return batch

pg_bundle_1 = [{"CPU": 1, "GPU": 1}] * 1  # one whole-GPU node
pg1 = placement_group(pg_bundle_1, strategy="STRICT_SPREAD")
scheduling_strategy_1 = PlacementGroupSchedulingStrategy(pg1)

jobs = []

jobs.append(pool.submit(
    ds1.map_batches,
    run_transform,
    batch_size=1000,
    num_cpus=1,
    num_gpus=1,
    concurrency=3,
    scheduling_strategy=scheduling_strategy_1
))

pg_bundle_2 = [{"CPU": 1, "GPU": 1}] * 2  # two whole-GPU nodes
pg2 = placement_group(pg_bundle_2, strategy="STRICT_SPREAD")
scheduling_strategy_2 = PlacementGroupSchedulingStrategy(pg2)

jobs.append(pool.submit(
    ds2.map_batches,
    run_transform,
    batch_size=500,
    num_cpus=1,
    num_gpus=1,
    concurrency=3,
    scheduling_strategy=scheduling_strategy_2
))

# Note: map_batches is lazy, so the resulting datasets may need to be
# consumed (e.g. materialized) before the GPU requests are actually issued.
wait(jobs)

The key is that each bundle item must use a whole node: no single placement group requests more nodes than maxReplicas, but the sum across all placement groups exceeds it. In the script above, each group fits on its own (1 and 2 nodes respectively), but together they need 3 nodes while maxReplicas is 2.

Anything else

I did notice that KubeRay does not seem to use the max_workers cluster config anywhere; perhaps it needs to be passed by the operator when launching the head pod and exposed in the manifest.

Are you willing to submit a PR?

  • [ ] Yes I am willing to submit a PR!

HarryCaveMan avatar Sep 16 '24 18:09 HarryCaveMan

Hello @HarryCaveMan, I'm trying to replicate this issue. Can you tell me which version of Ray you used? And how did you run the reproduction script? Did you use something like kubectl exec? Thanks a lot!

ysfess22 avatar Oct 21 '24 23:10 ysfess22

@kevin85421 FYI, do you have any insights into this issue? Thanks

Superskyyy avatar Oct 22 '24 22:10 Superskyyy

I'll try to reproduce this issue this week

andrewsykim avatar Oct 23 '24 00:10 andrewsykim

@HarryCaveMan can you share the Ray version you used when reproducing the issue?

andrewsykim avatar Oct 23 '24 16:10 andrewsykim

> Hello @HarryCaveMan, I'm trying to replicate this issue. Can you tell me which version of Ray you used? And how did you run the reproduction script? Did you use something like kubectl exec? Thanks a lot!

I will try to get you a minimal repro with a minimal manifest you can use kubectl apply to run.

HarryCaveMan avatar Oct 24 '24 01:10 HarryCaveMan

@HarryCaveMan is this related to https://github.com/ray-project/ray/pull/48924?

kevin85421 avatar Dec 18 '24 18:12 kevin85421

cc @rueian

kevin85421 avatar Feb 25 '25 02:02 kevin85421

I will take a look at this.

rueian avatar Feb 25 '25 02:02 rueian

Just reproduced the issue. It is the same as https://github.com/ray-project/ray/issues/39691

rueian avatar Feb 27 '25 00:02 rueian