Array serialization does not work with custom array types
Consider the following schema:
CREATE TABLE my_table (my_array text[]);
If you run:
sql.unsafe(`insert into my_table (my_array) values ($1)`, [['item']]);
...it works as expected.
Now instead of using a built-in type, change the array to be a custom array type:
CREATE DOMAIN my_custom_array_type AS text[];
CREATE TABLE my_table (my_array my_custom_array_type);
Running the same code:
sql.unsafe(`insert into my_table (my_array) values ($1)`, [['item']]);
...now fails with the following error:
Uncaught PostgresError: malformed array literal: "item"
There appears to be an issue with serializing arrays, but only for custom array types.
Why would you want to use such custom types? One possible reason is if you want to constrain the length of the array, e.g.
CREATE DOMAIN my_custom_array_type AS text[]
CONSTRAINT my_custom_array_type_check CHECK ((array_length(VALUE, 1) <= 3)) -- at most 3 elements
It appears to have something to do with how we're defining the serializers for these custom types: https://github.com/porsager/postgres/blob/364c3ebee57f3a7ce1fc36d5857b574ee72e507c/src/connection.js#L725-L736
Just merged a PR that should fix this https://github.com/porsager/postgres/pull/578
Can you check how that works?
They probably still have to make their own parser/serialiser for it.
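For reference, something along these lines might work (untested sketch) using the types option in postgres.js; the oid below is a placeholder you'd have to look up, and the literal handling is deliberately naive:
import postgres from 'postgres'

// Untested sketch. The oid is a placeholder: look up the real one with
//   select oid from pg_type where typname = 'my_custom_array_type'
const MY_CUSTOM_ARRAY_OID = 12345

const sql = postgres({
  types: {
    myCustomArray: {
      to: MY_CUSTOM_ARRAY_OID,      // serialize values sent as this type
      from: [MY_CUSTOM_ARRAY_OID],  // parse result columns with this oid
      // Naive text[] literal handling; parse does not cover commas/quotes inside elements
      serialize: xs => '{' + xs.map(x => JSON.stringify(String(x))).join(',') + '}',
      parse: s => s === '{}' ? [] : s.slice(1, -1).split(',').map(x => x.replace(/^"(.*)"$/, '$1'))
    }
  }
})

// ...and then something like:
// await sql`insert into my_table (my_array) values (${ sql.typed.myCustomArray(['item']) })`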
I still receive the same error with that change.
I might be misunderstanding, but this feels like an example where a custom parser/serializer shouldn't be necessary. In the example, my_custom_array_type is effectively an alias of text[], so it should already work with the built-in serializers.
Perhaps there's something about the query in fetchArrayTypes that isn't making that association work?
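For example, something like this (untested, assuming a connected sql instance) should show the domain resolving back to text[] in the catalog:
// The domain row in pg_type still points at its base type.
const [{ base }] = await sql`
  select format_type(typbasetype, typtypmod) as base
  from pg_type
  where typname = 'my_custom_array_type'
`
// base === 'text[]'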
Here's a test that reproduces the failure:
t('Insert array into custom domain array', async() => {
  await sql`create domain my_custom_array_type as text[]`
  await sql`create table test (x my_custom_array_type)`
  await sql`insert into test ${ sql({ x: [['a']] }) }`
})
I guess it would be possible, but I'm not sure how it should be implemented.
You can fetch the base type this way (there is probably an easier query; I just edited the current one):
SELECT DISTINCT ON (b.oid)
  b.oid,
  b.typarray,
  CASE WHEN b.typbasetype > 0 THEN c.oid ELSE NULL END AS baseoid,
  CASE WHEN b.typbasetype > 0 THEN c.typarray ELSE NULL END AS basetyparray
FROM pg_catalog.pg_type a
LEFT JOIN pg_catalog.pg_type b
  ON b.oid = a.typelem
JOIN pg_catalog.pg_type c
  ON c.typarray = b.typbasetype
  OR c.oid = b.typbasetype
WHERE a.typcategory = 'A'
GROUP BY b.oid, b.typarray, c.oid, c.typarray
ORDER BY b.oid
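And a rough (untested) idea of how the result might be used on the client side, assuming sql is a connected instance and baseTypeQuery holds the query above as a string: map the domain's oids back to the base type's oids so the existing text[] handling could be reused.
// Untested sketch; baseTypeQuery is a placeholder for the query above.
const rows = await sql.unsafe(baseTypeQuery)

const baseOidFor = {}
for (const row of rows) {
  if (row.baseoid !== null) {
    baseOidFor[row.oid] = row.baseoid            // domain element type -> base element type
    baseOidFor[row.typarray] = row.basetyparray  // domain array type -> base array type
  }
}
// A lookup like this could let the domain reuse the parser/serializer already
// registered for its base type instead of requiring a custom one.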
@porsager any ideas?
This query might help in case somebody wants to write a custom type serializer/parser:
WITH cols AS (
  SELECT
    pg_catalog.format_type(t.oid, null) AS tuple,
    t.oid AS oid,
    t.typarray AS array_oid,
    a.attname AS column_name,
    a.attnum AS column_num,
    a.atttypid AS column_oid
  FROM pg_catalog.pg_attribute a
  JOIN pg_catalog.pg_type t ON (a.attrelid = t.typrelid)
  JOIN pg_catalog.pg_namespace n ON (n.oid = t.typnamespace)
  WHERE n.nspname = current_schema() AND a.attnum > 0 AND NOT a.attisdropped
)
SELECT
  cols.tuple,
  cols.oid,
  cols.array_oid,
  array_agg(cols.column_name ORDER BY cols.column_num ASC) AS column_names,
  array_agg(cols.column_oid ORDER BY cols.column_num ASC) AS column_oids
FROM cols
GROUP BY 1, 2, 3;
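In case it's useful, a rough (untested) sketch of collecting that query's output into a lookup for a hand-written composite parser/serializer; compositeQuery stands in for the query above and sql is a connected instance:
// Untested sketch; compositeQuery is a placeholder for the query above.
const compositeTypes = await sql.unsafe(compositeQuery)

// oid -> { tuple, columnNames, columnOids }, e.g. to know which columns (and in
// which order) a composite value contains when writing parse/serialize by hand.
const compositeInfo = Object.fromEntries(
  compositeTypes.map(t => [t.oid, {
    tuple: t.tuple,
    columnNames: t.column_names,
    columnOids: t.column_oids
  }])
)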