
google-batch quota error does not trigger job failure

Open rivershah opened this issue 1 year ago • 6 comments

Using the google-batch provider, I notice that some Batch errors are not propagating to dsub: it continues waiting to run jobs when it should be aborting.

$ dstat --provider google-batch --project <PROJECT_ID> --location <REGION> --jobs '<JOB_ID>' --users '<USER>' --status '*' --format json
[
  {
    "job-name": "<JOB_NAME>",
    "task-id": "<TASK_ID>",
    "last-update": "2025-01-01 13:07:03.664000",
    "status-message": "VM in Managed Instance Group meets error: Batch Error: code - CODE_GCE_QUOTA_EXCEEDED, description - error count is 4, latest message example: Instance '<INSTANCE_ID>' creation failed: Quota 'GPUS_PER_GPU_FAMILY' exceeded.  Limit: 0.0 in region <REGION>."
  }
]

The process that launched it has retries=0, yet it still shows no failure and is patiently waiting:

Waiting for job to complete...
Monitoring for failed tasks to retry...
*** This dsub process must continue running to retry failed tasks.

rivershah avatar Jan 01 '25 13:01 rivershah

Hi @rivershah ,

This appears to be working as intended. The idea is that the quota issue is resolvable (either by resources becoming available or by the user allocating more quota), and then the job continues. For example, imagine submitting 100 jobs when we only have quota for 50. Once the first 50 finish, we'd want the next 50 to run.

Perhaps better documentation on this should be added.

wnojopra avatar Jan 07 '25 17:01 wnojopra

This risks starvation. What is a graceful way to trigger fast failure or a timeout, please? For example, we submit jobs on large GPU machines, which can go without availability for days.

rivershah avatar Jan 07 '25 20:01 rivershah

Ideally, you could make use of dsub's --timeout flag. It's implemented for the google-cls-v2 provider, but unfortunately not yet for the google-batch provider. The good news is the Batch API has support for a timeout, so it should be a simple passthrough for dsub.
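
In the meantime, the Batch API field that --timeout would map onto is TaskSpec.max_run_duration. Below is a minimal sketch (not dsub code, and modeled on the public google-cloud-batch Python samples) of setting it directly; PROJECT_ID, REGION, the image, the machine type, and the job id are placeholder values:

# Minimal sketch (not dsub code): set a task-level timeout directly with the
# Batch API. PROJECT_ID, REGION, the image, and the job id are placeholders.
from google.cloud import batch_v1

client = batch_v1.BatchServiceClient()

runnable = batch_v1.Runnable()
runnable.container = batch_v1.Runnable.Container(
    image_uri="ubuntu:22.04",
    entrypoint="/bin/sh",
    commands=["-c", "echo hello"],
)

task = batch_v1.TaskSpec()
task.runnables = [runnable]
# This is the field --timeout would pass through to: Batch fails the task
# once it has run longer than this duration.
task.max_run_duration = "3600s"

group = batch_v1.TaskGroup(task_spec=task, task_count=1)

allocation = batch_v1.AllocationPolicy()
allocation.instances = [
    batch_v1.AllocationPolicy.InstancePolicyOrTemplate(
        policy=batch_v1.AllocationPolicy.InstancePolicy(machine_type="e2-standard-2")
    )
]

job = batch_v1.Job()
job.task_groups = [group]
job.allocation_policy = allocation
job.logs_policy = batch_v1.LogsPolicy(
    destination=batch_v1.LogsPolicy.Destination.CLOUD_LOGGING
)

client.create_job(
    request=batch_v1.CreateJobRequest(
        parent="projects/PROJECT_ID/locations/REGION",
        job_id="timeout-demo",
        job=job,
    )
)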

wnojopra avatar Jan 08 '25 23:01 wnojopra

Excellent, requesting that we please implement this

rivershah avatar Feb 17 '25 09:02 rivershah

For some GPU machines I get events like this:

STATUS_CHANGED  2025-10-01T15:45:28.439698938Z
Job state is set from SCHEDULED to RUNNING for job projects/[PROJECT_NUMBER]/locations/us-central1/jobs/[JOB_ID].

OPERATIONAL_INFO  2025-10-01T15:37:50.018Z
VM in Managed Instance Group meets error: Batch Error: code - CODE_GCE_ZONE_RESOURCE_POOL_EXHAUSTED, description - error count is 5, latest message example: Instance '[INSTANCE_NAME]' creation failed: The zone 'projects/[PROJECT_ID]/zones/europe-west2-a' does not have enough resources available to fulfill the request. '(resource type:compute)'.

STATUS_CHANGED  2025-10-01T15:29:15.489292753Z
Job state is set from QUEUED to SCHEDULED for job projects/[PROJECT_NUMBER]/locations/us-central1/jobs/[JOB_ID].

Can you point me to where in the code the retry logic for this lives? It seems like this is a server-side retry, not a client-side one.

For example, the error count reached 5, and then a few minutes later, when such a machine became available, google-batch allocated it. How do I tell google-batch to keep trying until the timeout hits?

This error, CODE_GCE_ZONE_RESOURCE_POOL_EXHAUSTED, seems to cause the job to fail outright well before the timeout is hit.

rivershah avatar Oct 01 '25 16:10 rivershah

Hey @rivershah!

I agree, that does look like a server-side error and retry. Batch documentation indicates that retries are configurable:

https://cloud.google.com/batch/docs/automate-task-retries

You can configure automatic task retries for each task when you create a job. Specifically, for each task, you can use one of the following configuration options:

  • By default, each task is not retried when it fails.
  • Retry tasks for all failures: You can configure the maximum times to automatically retry failed tasks. You can specify between 0 (default) and 10 retries.
  • Retry tasks for some failures: You can configure different task actions—either automatic retry or fail without retry—for specific failures. The opposite action is taken for all unspecified failures. Specific failures can each be identified by an exit code that is defined by your application or Batch.

Interesting that the documented default retry count is zero, yet your example appears to have retried 5 times.

The place where the retry would be configured is in the TaskSpec, which dsub creates here:

https://github.com/DataBiosphere/dsub/blob/main/dsub/providers/google_batch_operations.py#L164
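
For illustration only (this is not dsub's code, and the exit code below is a placeholder), the Batch API expresses those documented retry options on the TaskSpec roughly like this:

# Illustrative only: how the Batch API models task retry behavior.
from google.cloud import batch_v1

task = batch_v1.TaskSpec()

# "Retry tasks for all failures": up to N automatic retries (0-10).
task.max_retry_count = 3

# "Retry tasks for some failures": per-exit-code actions. Here, an
# application-defined exit code of 42 (placeholder) fails immediately;
# unspecified exit codes get the opposite action (retry).
fail_fast = batch_v1.LifecyclePolicy(
    action=batch_v1.LifecyclePolicy.Action.FAIL_TASK,
    action_condition=batch_v1.LifecyclePolicy.ActionCondition(exit_codes=[42]),
)
task.lifecycle_policies = [fail_fast]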

That seems like the right place to address this. You should also be able to use dsub's client-side `--retries`:

https://github.com/DataBiosphere/dsub/blob/main/docs/retries.md
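
As a usage sketch (all values are placeholders), a submission along these lines would re-submit failed tasks up to 3 times; the dsub process has to stay running to do the retries, hence --wait:

dsub \
  --provider google-batch \
  --project <PROJECT_ID> \
  --location <REGION> \
  --logging gs://<BUCKET>/logs/ \
  --image ubuntu:22.04 \
  --command 'echo hello' \
  --retries 3 \
  --wait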

Let us know how it goes!

mbookman avatar Oct 01 '25 23:10 mbookman