nakama
TypeScript: SQL 'value' output is unreadable
We're executing a SQL query to get all storage objects of a certain collection. We're using the SQL query to bypass the arbitrary hardcoded limit where Nakama returns an error if requesting a list of more than 100 objects. I think this limit should be removed; it's frankly the source of many of our issues, and it should be up to the developer to decide what list size is too big. I know we can use a cursor and grab everything, but doesn't that amount to many queries? My understanding is that one big query is cheaper than many small ones... but I digress.
We're getting a bizarre output for "value" which is unreadable. The rest seems fine:
const rows = nk.sqlQuery(query, parameters);
return JSON.stringify({ success: true, servers: rows });
{ "user_id": "00000000-0000-0000-0000-000000000000", "write": 0, "update_time": "2022-06-09T19:52:30.476506Z", "collection": "servers", "key": "whatever", "value": [ 123, 34, 71, 97, 109, 101, 77, 111, 100, 101, 34, 58, 32, 49, 44, 32, 34, 83, 101, 114, 118, 101, 114, 78, 97, 109, 101, 34, 58, 32, 34, 66, 114, 101, 110, 100, 101, 110, 34, 125 ], "version": "745f2988787925a473d1589c25934d14", "read": 1, "create_time": "2022-06-09T19:52:30.476506Z"
When I look up the storage item in nakama it reads like this:
{
"GameMode": 1,
"ServerName": "Brenden"
}
Any idea why the SQL query is returning gibberish? Are you guys doing anything fancy here? I looked at the source and didn't spot anything. The database schema also shows that 'value' is stored as JSONB, so I would have expected to get exact, parseable JSON here unless there's a bug going on.
Thanks, Brenden
JSONB columns store the underlying data as binary, so what you're seeing is a raw array of bytes. If you convert that array to a string you'll see it reads as {"GameMode": 1, "ServerName": "Brenden"}.
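If you do stay with the raw SQL route, a minimal sketch of decoding that byte array inside the RPC might look like the following (this assumes the stored JSON is plain ASCII and that each row's value comes back as a numeric array, as in your output; non-ASCII content would need a proper UTF-8 decoder):

// Sketch only: turn each row's byte-array "value" back into a string and parse it as JSON.
const rows = nk.sqlQuery(query, parameters);
const servers = rows.map(function (row) {
    const bytes = row.value as number[];                  // e.g. [123, 34, 71, ...]
    const json = String.fromCharCode.apply(null, bytes);  // '{"GameMode": 1, ...}'
    row.value = JSON.parse(json);                         // replace the bytes with the parsed object
    return row;
});
return JSON.stringify({ success: true, servers: servers });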
It's not quite as simple as fewer queries leading to better performance; it's a balancing act of one or more of: query complexity, the shape of the data on disk, other database activity, the table and row lock lifecycle, the amount of data returned, the length of time the connection is held by any one process, and so on.
All that aside, until you're in production and actively observing problems (or have measured problems in a production-like load-testing environment) I highly recommend using Nakama's built-in functions to do this, as sketched below. The pagination approach is tried and tested with large-scale production workloads, so I'm very interested to learn more about your use case and what issues it's creating for you. You say the pagination requirement for certain APIs is the source of many of your issues; can you go into a bit of detail about what those issues are?
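For reference, paging through a whole collection with the built-in listing function might look roughly like this (a sketch only; the exact storageList parameters can differ between Nakama versions, and the "servers" collection name and empty user ID used to list across all owners are placeholders for your own values):

// Sketch: accumulate all objects in a collection by following the cursor
// returned with each page instead of issuing one unbounded SQL query.
let servers: nkruntime.StorageObject[] = [];
let cursor: string | undefined = undefined;
do {
    const page = nk.storageList("", "servers", 100, cursor); // 100 objects per page
    servers = servers.concat(page.objects || []);
    cursor = page.cursor;
} while (cursor);
return JSON.stringify({ success: true, servers: servers });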
I think in the absence of any new information we can consider this issue resolved by the comment above.
To stress for any other developers finding this issue: we do not recommend writing custom SQL queries to interact with the database if at all possible.