terraform-provider-ibm
Cannot use "daemon" run_task option in ibm_code_engine_job due to schema validation defaults
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Terraform CLI and Terraform IBM Provider Version
Terraform v1.5, with ibm-cloud/ibm provider v1.58.0 installed, using Schematics workspaces on IBM Cloud
Affected Resource(s)
- ibm_code_engine_job
Terraform Configuration Files
Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
resource "ibm_code_engine_job" "ce_worker_job" {
  project_id         = data.ibm_code_engine_project.code_engine_project_instance.project_id
  name               = "ce-job"
  image_reference    = "kennethreitz/httpbin:latest"
  run_mode           = "daemon"
  scale_cpu_limit    = 1
  scale_memory_limit = "4G"
}
Debug Output
# ibm_code_engine_job.ce_worker_job will be created
+ resource "ibm_code_engine_job" "ce_worker_job" {
    + created_at                    = (known after apply)
    + entity_tag                    = (known after apply)
    + etag                          = (known after apply)
    + href                          = (known after apply)
    + id                            = (known after apply)
    + image_reference               = "kennethreitz/httpbin:latest"
    + job_id                        = (known after apply)
    + name                          = "ce-job"
    + project_id                    = "0d7ea407-f1f7-4555-a9b0-f83371a0c16a"
    + resource_type                 = (known after apply)
    + run_as_user                   = 0
    + run_mode                      = "daemon"
    + run_service_account           = "default"
    + scale_array_spec              = "0"
    + scale_cpu_limit               = "1"
    + scale_ephemeral_storage_limit = "400M"
    + scale_max_execution_time      = 7200
    + scale_memory_limit            = "4G"
    + scale_retry_limit             = 3
  }
Error: CreateJobWithContext failed Bad payload - The field 'scale_max_execution_time' is invalid. reason: INVALID
{
  "StatusCode": 400,
  "Headers": {
    "Cache-Control": [
      "no-cache, no-store"
    ],
    "Content-Length": [
      "284"
    ],
    "Content-Type": [
      "application/json; charset=UTF-8"
    ],
    "Date": [
      "Fri, 10 Nov 2023 23:03:37 GMT"
    ],
    "Strict-Transport-Security": [
      "max-age=31536000; includeSubDomains; preload"
    ],
    "X-Content-Type-Options": [
      "nosniff"
    ],
    "X-Global-Transaction-Id": [
      "codeengine-api-b677691e38e14c6f856062bf9c297056"
    ]
  },
  "Result": {
    "errors": [
      {
        "code": "field_invalid",
        "message": "Bad payload - The field 'scale_max_execution_time' is invalid. reason: INVALID",
        "target": {
          "name": "scale_max_execution_time",
          "reason": "INVALID",
          "type": "field"
        }
      }
    ],
    "status_code": 400,
    "trace": "codeengine-api-b677691e38e14c6f856062bf9c297056"
  },
  "RawResult": null
}
Panic Output
Expected Behavior
When using `run_mode = "daemon"`, the provider should not set any default values for `scale_max_execution_time` and `scale_retry_limit`.

See the documentation for the `run_mode` attribute of the `ibm_code_engine_job` resource.
Actual Behavior
The apply fails with a 400 `field_invalid` error: the provider sends the schema defaults for `scale_max_execution_time` and `scale_retry_limit` even though `run_mode = "daemon"`, and the Code Engine API rejects them.
Steps to Reproduce
- `terraform apply`
Important Factoids
None
References
It looks like `scale_max_execution_time`'s schema validation code just sets it as a default value, regardless of the value of `run_mode`. The same applies to `scale_retry_limit`'s schema validation code.
Hey @bobfang,
I'll try to get a fix in as soon as I can. Unfortunately, it might be a while before it lands. If I find a workaround in the meantime, I'll message you. Thanks for reporting.