graphql-engine
Queries that return lots of rows cause an out-of-memory error
Queries that return lots of rows cause an out-of-memory error. The current workaround is to fetch the desired rows in smaller batches on the client and concatenate the results.
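A minimal sketch of that workaround, assuming the table and fields from the error below (`test`, `field1`, `field2`), limit/offset pagination, and a runtime with built-in `fetch` (Node 18+ or a browser); the endpoint, admin secret, and batch size are placeholders:

```typescript
// Sketch of the client-side batching workaround: page through the table with
// limit/offset and concatenate the pages. Table and field names are assumed.
type Row = { field1: string; field2: string };

async function fetchAllRows(
  endpoint: string,      // e.g. "https://my-hasura.example.com/v1/graphql"
  adminSecret: string,
  batchSize = 10_000
): Promise<Row[]> {
  const all: Row[] = [];
  for (let offset = 0; ; offset += batchSize) {
    const res = await fetch(endpoint, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "x-hasura-admin-secret": adminSecret,
      },
      body: JSON.stringify({
        query: `query Page($limit: Int!, $offset: Int!) {
          test(limit: $limit, offset: $offset, order_by: { field1: asc }) {
            field1
            field2
          }
        }`,
        variables: { limit: batchSize, offset },
      }),
    });
    const { data, errors } = await res.json();
    if (errors) throw new Error(JSON.stringify(errors));
    all.push(...data.test);
    if (data.test.length < batchSize) break; // last (short) page reached
  }
  return all;
}
```

Keyset pagination (filtering on the last seen key) scales better than offset for very large tables, but limit/offset keeps the sketch simple.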
Version Information
Server Version: 2.4.0
Please provide any traces or logs that could help here.
```json
{
  "errors": [
    {
      "extensions": {
        "internal": {
          "statement": "SELECT coalesce(json_agg(\"root\" ), '[]' ) AS \"root\" FROM (SELECT row_to_json((SELECT \"_1_e\" FROM (SELECT \"_0_root.base\".\"field1\" AS \"field1\", \"_0_root.base\".\"field2\" AS \"field2\" ) AS \"_1_e\" ) ) AS \"root\" FROM (SELECT * FROM \"public\".\"test\" WHERE ('true') ) AS \"_0_root.base\" ) AS \"_2_root\" ",
          "prepared": true,
          "error": {
            "exec_status": "FatalError",
            "hint": null,
            "message": "out of memory",
            "status_code": "54000",
            "description": "Cannot enlarge string buffer containing 1073741746 bytes by 92 more bytes."
          },
          "arguments": [
            "(Oid 114,Just (\"{\\\"x-hasura-role\\\":\\\"admin\\\"}\",Binary))"
          ]
        },
        "path": "$",
        "code": "unexpected"
      },
      "message": "database query error"
    }
  ]
}
```
This is a security issue.
It means anyone can DoS (Denial of Service) a Hasura service simply by crafting a specific query. I've been encountering the same problem, and I'd be okay with Hasura denying/rejecting such requests rather than running out of memory, which brings down the whole service.
This needs to be prioritized.
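Until request rejection is built in, one partial mitigation is the per-role row limit on select permissions, which caps how many rows a single query can return for non-admin roles. A hedged sketch using the v2 metadata API follows; the source name, table, role, column list, and limit value are all assumptions, and admin-role requests (like the one in the log above) bypass permissions entirely, so this does not cover that case:

```typescript
// Hedged sketch: cap rows per query for a non-admin role by creating a select
// permission with a row limit via Hasura's metadata API. All names below
// (source "default", table "public.test", role "user", limit) are assumptions.
async function capRowsForRole(
  metadataEndpoint: string, // e.g. "https://my-hasura.example.com/v1/metadata"
  adminSecret: string
): Promise<void> {
  const res = await fetch(metadataEndpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-hasura-admin-secret": adminSecret,
    },
    body: JSON.stringify({
      type: "pg_create_select_permission",
      args: {
        source: "default",
        table: { schema: "public", name: "test" },
        role: "user",
        permission: {
          columns: ["field1", "field2"],
          filter: {},
          limit: 10000, // max rows a single query can return for this role
        },
      },
    }),
  });
  if (!res.ok) {
    throw new Error(`metadata API call failed: ${await res.text()}`);
  }
}
```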