prefect
File "&lt;frozen importlib._bootstrap_external&gt;", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpz0cmn1ddprefect/AOPS_SQL_Workflow_v1.py'
First check
- [X] I added a descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the Prefect documentation for this issue.
- [X] I checked that this issue is related to Prefect and not one of its dependencies.
Bug summary
When I create a deployment through the REST API with a schedule enabled (e.g., an RRULE), I get the following error in Prefect Orion 2.4; the same setup worked fine in 2.0. Even though I have a dedicated storage directory configured via PREFECT_LOCAL_STORAGE_PATH, flow runs for deployments created this way resolve the code in the default /tmp folder rather than that path. If I create the deployment from a .yml file instead, it uses the path specified in PREFECT_LOCAL_STORAGE_PATH and works fine. What am I missing in the payload?
Flow could not be retrieved from deployment.
Traceback (most recent call last):
  File "&lt;frozen importlib._bootstrap_external&gt;", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpz0cmn1ddprefect/AOPS_SQL_Workflow_v1.py'
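As a first sanity check, it may help to confirm the flow file actually exists under the configured storage path before the agent tries to load it. A minimal stdlib sketch (the path and file name are taken from this report; this is a debugging aid, not part of Prefect):

```python
import os

# Hypothetical debugging check: verify the entrypoint file is present under
# the storage path we expect the agent to use. Path and file name are taken
# from this report; adjust for your environment.
storage_path = os.environ.get(
    "PREFECT_LOCAL_STORAGE_PATH", "/localpart0/aop-shared/WAVES/workflows/"
)
flow_file = os.path.join(storage_path, "AOPS_SQL_Workflow_v1.py")

if not os.path.isfile(flow_file):
    print(f"flow file not found at {flow_file}; "
          "runs may be resolving code in a /tmp scratch directory instead")
```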
Payload used:
2022-09-24 16:40:54.170 DEBUG 18568 --- [http-nio-5080-exec-962] com.aops.waves.service.FlowService : jsonObject.toJSONString():
{
"parameter_openapi_schema": {
"type": "object",
"title": "Parameters",
"properties": {
"kwargs": {"type": "string", "title": "kwargs"}
},
"required": [
"kwargs"
]
},
"infrastructure_document_id": "736c6e6f-6c03-40fa-8ea4-71f946a05343",
"infra_overrides": {},
"description": "AOPS_SQL_Workflow_DS_12",
"version": "1",
"work_queue_name": "waves_q",
"tags": [
"waves_q"
],
"path": "/localpart0/aop-shared/WAVES/workflows/",
"schedule": {
"rrule": "DTSTART:20220924T124300\nRRULE:FREQ=HOURLY;INTERVAL=1;COUNT=1;UNTIL=20220924T124500",
"timezone": "US/Eastern"
},
"flow_id": "d7202f6b-b929-4139-9e4d-73e2636a3fe0",
"entrypoint": "AOPS_SQL_Workflow_v1.py:AOPS_SQL_Workflow",
"name": "AOPS_SQL_Workflow_DS_12",
"parameters": {
"kwargs": {
"sqltype": 1,
"date_range": "CURRENT_DATE - INTERVAL '1 months'",
"dbname": "gpprod",
"selection": "count()",
"flow_name": "AOPS_SQL_Workflow",
"rpt_flag": "1",
"tab1": "whse.dim_company",
"schd_run_name": "AOPS_SQL_Workflow_DS_12",
"sql": "SELECT {selection} FROM {tab1} WHERE show_in_report_flag = {rpt_flag} and (creation_date > ({date_range}));"
}
}
}
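One thing worth double-checking when this payload is built on the Java side: `parameter_openapi_schema.properties.kwargs` must serialize as a JSON object, not as a string containing JSON (log output can garble this, so it is hard to tell from the snippet which was actually sent). A minimal sketch with Python's json module showing the shape that round-trips cleanly (field names are taken from the payload above; this is an illustration, not Prefect's client):

```python
import json

# The schema fragment as it should serialize: "kwargs" maps to an object,
# not to the string '{"type": "string", "title":"kwargs"}'.
parameter_openapi_schema = {
    "type": "object",
    "title": "Parameters",
    "properties": {
        "kwargs": {"type": "string", "title": "kwargs"}  # object, not a quoted string
    },
    "required": ["kwargs"],
}

body = json.dumps({"parameter_openapi_schema": parameter_openapi_schema})
# Round-trips to the same nested structure.
parsed = json.loads(body)
```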
API Response:
{
"id": "ba109826-6846-4e0e-815b-ee905c593dab",
"created": "2022-09-23T19:49:43.563714+00:00",
"updated": "2022-09-24T16:40:54.208488+00:00",
"name": "AOPS_SQL_Workflow_DS_12",
"version": "1",
"description": "AOPS_SQL_Workflow_DS_12",
"flow_id": "d7202f6b-b929-4139-9e4d-73e2636a3fe0",
"schedule": {
"rrule": "DTSTART:20220924T124300\nRRULE:FREQ=HOURLY;INTERVAL=1;COUNT=1;UNTIL=20220924T124500",
"timezone": "US/Eastern"
},
"is_schedule_active": true,
"infra_overrides": {},
"parameters": {
"kwargs": {
"sql": "SELECT {selection} FROM {tab1} WHERE show_in_report_flag = {rpt_flag} and (creation_date > ({date_range}));",
"tab1": "whse.dim_company",
"dbname": "gpprod",
"sqltype": 1,
"rpt_flag": "1",
"flow_name": "AOPS_SQL_Workflow",
"selection": "count()",
"date_range": "CURRENT_DATE - INTERVAL '1 months'",
"schd_run_name": "AOPS_SQL_Workflow_DS_12"
}
},
"tags": [
"waves_q"
],
"work_queue_name": "waves_q",
"parameter_openapi_schema": {
"type": "object",
"title": "Parameters",
"required": [
"kwargs"
],
"properties": {
"kwargs": {"type": "string", "title": "kwargs"}
}
},
"path": "/localpart0/aop-shared/WAVES/workflows/",
"entrypoint": "AOPS_SQL_Workflow_v1.py:AOPS_SQL_Workflow",
"manifest_path": null,
"storage_document_id": null,
"infrastructure_document_id": "736c6e6f-6c03-40fa-8ea4-71f946a05343"
}
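Note that `storage_document_id` comes back as null. A hedged reading (an assumption on my part, not something confirmed in this thread): with no storage block attached to the deployment, the agent pulls flow code into a temporary directory instead of using the deployment `path`, which would match the /tmp location in the traceback. A stdlib sketch checking the relevant fields of the response above (abridged to those fields):

```python
import json

# Abridged copy of the API response fields relevant to code storage.
response = json.loads("""
{
  "path": "/localpart0/aop-shared/WAVES/workflows/",
  "entrypoint": "AOPS_SQL_Workflow_v1.py:AOPS_SQL_Workflow",
  "manifest_path": null,
  "storage_document_id": null
}
""")

if response["storage_document_id"] is None:
    # No storage block is attached; flow code may be resolved in a
    # scratch directory rather than under "path".
    print("storage_document_id is null for", response["entrypoint"])
```

If that reading is right, a possible workaround is to attach a filesystem/storage block to the deployment (so `storage_document_id` is populated), though I cannot confirm from this thread alone that this fixes the 2.4 behavior.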
If we run the workflow every minute for 5 minutes, it fails sporadically with the following error:
Flow could not be retrieved from deployment.
Traceback (most recent call last):
  File "&lt;frozen importlib._bootstrap_external&gt;", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpz0cmn1ddprefect/AOPS_SQL_Workflow_v1.py'
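The failing path `/tmp/tmpz0cmn1ddprefect/...` matches the pattern Python's `tempfile.mkdtemp(suffix="prefect")` produces, which suggests each run gets a fresh scratch directory and the flow file is never written into it (an inference from the path shape, not from Prefect's source). A stdlib sketch of that pattern:

```python
import os
import tempfile

# Create a scratch directory the same way the failing path suggests
# (assumption: names like /tmp/tmpz0cmn1ddprefect come from
# tempfile.mkdtemp(suffix="prefect") or an equivalent call).
scratch = tempfile.mkdtemp(suffix="prefect")
print(scratch)  # e.g. /tmp/tmpab12cd34prefect

# If nothing downloads AOPS_SQL_Workflow_v1.py into this directory before
# the run starts, loading the entrypoint raises FileNotFoundError, exactly
# as in the traceback above.
flow_file = os.path.join(scratch, "AOPS_SQL_Workflow_v1.py")
```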
Reproduction
{}
Error
Flow could not be retrieved from deployment.
Traceback (most recent call last):
  File "&lt;frozen importlib._bootstrap_external&gt;", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpz0cmn1ddprefect/AOPS_SQL_Workflow_v1.py'
Versions
Prefect Orion 2.4.2
Additional context
No response
Please let me know if there are any workarounds. This bug has taken our entire system down.
@xbabu how did you fix this? We are getting the same error currently