A relational insert
Having a (simplified) model with a many-to-many relation like so:
| customers | |
|---|---|
| id | uuid |
| first_name | text |
| last_name | text |

| banks | |
|---|---|
| id | uuid |
| name | text |

| customer_bank_accounts | |
|---|---|
| account_number | bigint |
| customer_id | uuid |
| bank_id | uuid |
Now let's assume that in my application I have a form that lets you add a new customer. On that form, besides creating the new customer, I also want to immediately assign them to a known bank. Of course I could just (in pseudo code):
- `newCustomerId = POST Prefer: return=representation /customers?select=id { "first_name": "John", "last_name": "Doe" }`
- `POST /customer_bank_accounts { "account_number": 1234567889012345, "customer_id": ${newCustomerId}, "bank_id": "some_preexisting_id_of_a_bank_that_the_user_selected" }`
However, this of course is not transactional, and the problem is that if step 2 fails I would like to roll back and discard the creation of the customer. But boom tss, the customer has already been created!
Is there an API that would let me do this, like so?
```
POST http://localhost:3000/customers?select=*,customer_bank_accounts{account_number, bank_id}

{
  "first_name": "John",
  "last_name": "Doe",
  "customer_bank_accounts": [
    {
      "account_number": 5851859590151,
      "bank_id": "some_preexisting_id_of_a_bank_that_the_user_selected",
      "customer_id": "$parent_id????"
    }
  ]
}
```
From the docs I figure that the only way to achieve this would be to manually write a procedure and then call it via RPC. Would that be correct?
Yes, that is correct; for now, RPCs are the only* way to do that. It might also be possible to get the same effect using views and triggers. For example, if customers is a view with a json column "customer_bank_accounts", you could write an INSTEAD OF trigger that allows a POST to this view in the form you specified above. I have no specific example, but I remember someone implementing just this in one of the issues.
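For illustration, a rough, untested sketch of that view-and-trigger idea using the tables from the top of this issue (the view name and the assumption that `customers.id` has a default are mine):

```sql
-- Hypothetical sketch: a view exposing customers together with their
-- accounts as a json column...
create view customers_with_accounts as
select c.id, c.first_name, c.last_name,
       (select json_agg(a)
          from customer_bank_accounts a
         where a.customer_id = c.id) as customer_bank_accounts
from customers c;

-- ...and an INSTEAD OF trigger that splits an inserted row into the
-- two underlying inserts, which then run in one transaction.
create function insert_customer_with_accounts() returns trigger as $$
declare
  new_customer_id uuid;
begin
  -- assumes customers.id has a default, e.g. gen_random_uuid()
  insert into customers (first_name, last_name)
  values (new.first_name, new.last_name)
  returning id into new_customer_id;

  insert into customer_bank_accounts (account_number, customer_id, bank_id)
  select account_number, new_customer_id, bank_id
  from json_populate_recordset(null::customer_bank_accounts,
                               new.customer_bank_accounts);
  return new;
end
$$ language plpgsql;

create trigger customers_with_accounts_insert
  instead of insert on customers_with_accounts
  for each row execute procedure insert_customer_with_accounts();
```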
@ruslantalpa Eh, that's a bit of a bummer. Both solutions sound more complicated than they should be. There really needs to be a streamlined way of doing transactions, just as there is an easy and streamlined way of getting your data. But reading through other issues I see that there are already some ideas for implementing transactions. Guess I will have to wait for them before adopting PostgREST.
I have a proposal for this. Once #690 is solved, we could take advantage of the new query parameter (`columns`) to do this:
```sql
-- having these tables
CREATE TABLE items (
  id   serial PRIMARY KEY,
  name text NOT NULL
);

CREATE TABLE subitems (
  id      serial PRIMARY KEY,
  name    text NOT NULL,
  item_id integer REFERENCES items(id)
);
```

```
POST /items?columns=name,subitems(name)

{
  "name": "item 1",
  "subitems": [
    {"name": "subitem 1"},
    {"name": "subitem 2"}
  ]
}
```
This would generate the following SQL (using data-modifying CTEs):
```sql
WITH
payload AS (
  select '{"name": "item 1", "subitems": [{"name": "subitem 1"}, {"name": "subitem 2"}]}'::json as data),
ins1 AS (
  insert into items(name)
  select name
  from json_populate_record(null::items, (select data from payload)) _
  returning id AS item_id)
insert into subitems(name, item_id)
select
  name,
  (select item_id from ins1) as item_id
from json_populate_recordset(null::subitems, (select data from payload)->'subitems') _;
```
Since we know the tables' relationship, we can infer that `item_id` is the FK column for `subitems`. This would work in a similar way for a M2M relationship (it would have one more insert CTE), and I think the query could be arranged to allow inserting many parent/child rows at the same time (by using pg's `json_array_elements`).
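For the M2M case, a hedged sketch using the customers/banks model from the top of this issue (here the bank ids are assumed to reference pre-existing banks, so only the join-table insert is added; with new rows on both sides there would be one more insert CTE):

```sql
WITH
payload AS (
  select '{"first_name": "John", "last_name": "Doe",
           "customer_bank_accounts": [
             {"account_number": 1234, "bank_id": "a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11"}
           ]}'::json as data),
ins_customer AS (
  insert into customers(first_name, last_name)
  select first_name, last_name
  from json_populate_record(null::customers, (select data from payload)) _
  returning id AS customer_id)
insert into customer_bank_accounts(account_number, customer_id, bank_id)
select
  account_number,
  (select customer_id from ins_customer) as customer_id,
  bank_id
from json_populate_recordset(null::customer_bank_accounts,
                             (select data from payload)->'customer_bank_accounts') _;
```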
This would be a major feature and requires a fair amount of work, so I'll require sponsorship to implement it. Anyone interested can check PostgREST's Patreon page or contact me directly (email in profile).
The query gets more complicated when there are multiple parent records to insert along with their children.
But here's a query that works https://gist.github.com/steve-chavez/60473d1765b5175012f5cc15695ae0b1.
@ruslantalpa Perhaps you may have some feedback about that.
The direction is good. However: to know all the CTEs in this query, you need to know all the levels in the payload. This means that to generate this query as it is now, you need to inspect the payload in Haskell, and if that is the case and there is no way around it, then you might as well simplify the queries and have Haskell do some of the work.
But again, the direction is good. Maybe leave this part as it is and now focus on the following problem:
Given a single json payload, how could it be split into multiple CTEs like `payload_level_1`, `payload_level_2`, ...? And since we are splitting them, those CTE outputs might have some kind of virtual PKs (so that items from levels below can reference those from the level above, a kind of FK), and these virtual PKs can be used when inserting in stages, somehow eliminating the problem of order.
Maybe what I am saying is: it might be doable, but it's not trivial, so maybe first try to split the problem into a few smaller ones and solve each stage (you started from the last stage):
- Given a payload, how many levels does it have? (This has to be done in SQL; one could consider the primary keys and related tables as "known". See the sketch after this list.) Is there some info that Haskell knows about the schema (without looking at the payload) that it can feed to SQL to make this task easier?
- Knowing the number of levels, is there a way to recursively create a CTE for each level?
- Once one has CTEs 1/2/3, is it possible to have virtual PKs in them linking the items/rows between levels?
- The last stage: recursively (based on the number from level 1) generate an insert for each level that uses the CTEs from 2.
The above is "brainstorming" around this feature, trying to split the problem into stages.
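For the first stage, a hypothetical sketch of counting payload levels in SQL with a recursive CTE; it naively treats every json array value as a nested level, which only matches payload shapes like the ones in this thread:

```sql
-- Count nesting levels of a payload: descend into every array-valued key.
with recursive levels(level, doc) as (
  select 1, '{"name": "item 1",
              "subitems": [{"name": "subitem 1",
                            "pieces": [{"name": "piece 1"}]}]}'::jsonb
  union all
  select level + 1, child
  from levels,
       lateral (select value
                  from jsonb_each(doc)          -- assumes objects at every level
                 where jsonb_typeof(value) = 'array') arrays,
       lateral jsonb_array_elements(arrays.value) child
)
select max(level) as levels from levels;  -- returns 3 for this payload
```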
> to know all the CTEs in this query, you need to know all the levels in the payload, this means to generate this query as it is now, you need to inspect the payload in Haskell
Isn't that information in the query string?
@ruslantalpa With the new `?columns` query arg we'd have what we need to know the levels without looking at the payload.
For example, for inserting items + subitems + pieces, the querystring would be:

```
POST /items?columns=name,subitems(name,pieces(name))

[
  {"name": "item 1", "subitems": [
    {"name": "subitem 1", "pieces": [{"name": "piece 1"}, {"name": "piece 2"}, {"name": "piece 3"}]},
    {"name": "subitem 2", "pieces": [{"name": "piece 4"}, {"name": "piece 5"}, {"name": "piece 6"}]},
    {"name": "subitem 3", "pieces": [{"name": "piece 7"}, {"name": "piece 8"}]}]},
  {"name": "item 2", "subitems": [
    {"name": "subitem 4", "pieces": [{"name": "piece 9"}, {"name": "piece 10"}]},
    {"name": "subitem 5", "pieces": [{"name": "piece 11"}]}]}
]
```
Besides that, looking at the schema cache is necessary to complete parts like:

```sql
json_to_recordset((select data from payload)) as (name text, subitems json)
```

Because of `name text`, we need to know the type of the column.
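As a standalone illustration (not the gist's exact query) of why those types must come from the schema cache, `json_to_recordset` will not run without an explicit column definition list:

```sql
-- Without "as (name text, subitems json)" Postgres rejects the query,
-- so the caller has to know the column types up front.
select name, subitems
from json_to_recordset(
  '[{"name": "item 1", "subitems": [{"name": "subitem 1"}]}]'::json
) as (name text, subitems json);
```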
Edit: added some comments in the gist.
Ok, this kind of takes care of 1, 2 and 4. So is it possible, in 3, to generate some kind of virtual PKs so as to not rely on the returned order?
That's what I do with `ordinality` in the gist; those are the virtual PKs. Though relying on the order is still needed for the `INSERT .. RETURNING` parts.
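A minimal sketch of that idea (not the gist's exact query, and assuming the items/subitems tables from above): each parent element gets an ordinality number acting as a virtual PK, and the ids from `INSERT .. RETURNING` are zipped back with those numbers by row order, which is exactly the order-dependence mentioned:

```sql
WITH
payload AS (
  select '[{"name": "item 1", "subitems": [{"name": "a"}]},
           {"name": "item 2", "subitems": [{"name": "b"}]}]'::json as data),
parents AS (
  -- ord is the virtual PK for each parent element
  select ord, elem
  from json_array_elements((select data from payload))
       with ordinality as _(elem, ord)),
ins AS (
  insert into items(name)
  select elem->>'name' from parents order by ord
  returning id),
ins_ord AS (
  -- pairing RETURNING rows with ord relies on the returned row order
  select row_number() over () as ord, id from ins)
insert into subitems(name, item_id)
select sub->>'name', ins_ord.id
from parents
join ins_ord using (ord),
     lateral json_array_elements(parents.elem->'subitems') sub;
```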
For an example of what...

> Yes, that is correct, for now, RPCs are the only* way to do that.

...from this comment looks like, check out the repo I made with a demo of how to build an RPC:
https://gitlab.com/tomsaleeba/postgrest-rpc-complex-insert-demo
Be warned, there was a lot of learning PL/pgSQL as I went to get that to work, so it's probably not pretty.
Hello, is there any update on this feature?
@tomsaleeba I'll be using your example solution until this feature is implemented, but I can't help but think that there is a faster way of inserting than using a FOREACH loop.
Actually, I spent some time and rewrote the body of your function to avoid the FOREACH loop:
```sql
CREATE OR REPLACE FUNCTION chicken_aio_rt2(coop coop, chickens chicken[])
RETURNS chicken_aio_type AS $$
DECLARE
  new_coop_id int;
  new_chicken_ids int[];
BEGIN
  RAISE WARNING 'coop %, chickens %', coop, chickens;

  -- insert the parent row and capture its generated id
  INSERT INTO coop (colour, built)
  VALUES (coop.colour, coop.built)
  RETURNING coop_id INTO new_coop_id;

  -- insert all chickens in a single statement, linking them to the new coop
  WITH created_chicken_ids AS (
    INSERT INTO chicken (name, is_laying, coop)
    SELECT name, is_laying, new_coop_id
    FROM unnest(chickens)
    RETURNING chicken_id
  )
  SELECT array_agg(chicken_id) INTO new_chicken_ids
  FROM created_chicken_ids;

  RETURN (new_coop_id, new_chicken_ids);
END
$$ LANGUAGE plpgsql;
```
I have not benchmarked or measured the difference between the two implementations yet; however, this seems to me a little bit nicer.
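For context, calling such a function through PostgREST would look roughly like this (a sketch with made-up field values, assuming PostgREST maps the JSON object and array onto the composite and array arguments):

```
POST /rpc/chicken_aio_rt2

{
  "coop": { "colour": "red", "built": "2020-01-01" },
  "chickens": [
    { "name": "Henrietta", "is_laying": true },
    { "name": "Clucky", "is_laying": false }
  ]
}
```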
Some higher-level thoughts on this issue. The "relational insert" relates to a "regular insert" much like "embedded resources" relate to a "simple resource".
In #2144 we are discussing the extension of embedding through computed/virtual columns implemented as functions. Those can provide the same functionality as the auto-detected embedding.
What if we turned that around and provided a way to create functions that support inserting into a separate table from one request?
Something roughly along the lines of:

```sql
create table clients (
  client int primary key generated by default as identity,
  name text
);

create table projects (
  project int primary key generated by default as identity,
  name text,
  client int references clients
);

-- this would currently be auto-detected anyway and is just for demonstration
create function client(projects)
returns setof clients
rows 1
language sql as $$
  select * from clients where client = $1.client
$$;

create function client(projects, clients)
returns projects
language sql as $$
  insert into clients (name)
  select $2.name
  returning $1.project, $1.name, clients.client
$$;
```
which would then allow doing something like:

```
POST /projects?columns=name,client(name) HTTP/1.1

{ "name": "New Project", "client": { "name": "New Client" } }
```
PostgREST would parse the `client(name)` in the columns parameter. It would then look for a function called `client` with the signature `(projects, <any composite>) -> projects` and use that in a query roughly like the one proposed in https://github.com/PostgREST/postgrest/issues/818#issuecomment-409981816.
I'm not sure whether a function signature like that will actually work nicely, but I guess it's close.
Those functions would not have to be used for "relational" inserts, however. We can think of them more generally as "computed setters" vs. "computed getters" (computed/virtual columns).
Tangentially related: it's very common in CRUD applications to want to update M2M relations where the foreign table records already exist (so only the join table needs to be updated). Is this already being considered? (If so, dare I ask, is some solution to this being worked on currently?)
Example:

```
POST /projects?columns=name,clients(id) HTTP/1.1

{ "name": "New Project", "clients": [1, 2, 3] }
```
So you would need to think about nested updates as well as inserts, although your CASCADE settings should take care of most of this. Here is how Hasura does it:
https://hasura.io/docs/latest/mutations/postgres/insert/#pg-nested-inserts
Any updates on this?
@wolfgangwalther - How would this work with a `PUT` or update?
> @wolfgangwalther - How would this work with a `PUT` or update?
I think that would just be a matter of adding some `ON CONFLICT DO UPDATE` and `DELETE` statements to `client(projects, clients)` to make it capable of doing all kinds of upserts.
> @wolfgangwalther - How would this work with a `PUT` or update?
>
> I think that would just be a matter of adding some `ON CONFLICT DO UPDATE` and `DELETE` statements to `client(projects, clients)` to make it capable of doing all kinds of upserts.
Isn't there already a `client(projects, clients)` function though for the insert?
> Isn't there already a `client(projects, clients)` function though for the insert?
I don't think there is a difference between update and insert for the relational part.
The main entity needs to be either created, or it already exists. But the related entities must always be "made to match". This is simple in the insert case, because there are certainly no related entities to be deleted or updated. But the more complex query for the update case won't hurt in the insert case either.
So I think it should be possible to use the same `client(projects, clients)` for both cases, if it is properly written.
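A rough, untested sketch of what that might look like, assuming (purely for illustration) a unique constraint on `clients(name)` as the conflict target:

```sql
create or replace function client(projects, clients)
returns projects
language sql as $$
  insert into clients (name)
  values ($2.name)
  on conflict (name) do update
    set name = excluded.name  -- no-op update so RETURNING yields the row
  returning $1.project, $1.name, clients.client
$$;
```

Handling deletion of a no-longer-referenced client would need an extra statement, which `language sql` functions allow.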
@steve-chavez Is the design for this feature completed or is it still in discussion?
So I think creating the virtual column mutations would definitely be the first step here, and could potentially be a workaround for transactions.
For a trivial example with tags:

```
posts
- post_id
...

tags
- tag_id
- name
...

post_tag
- post_id
- tag_id
...
- pk (post_id, tag_id)
```
How do you add a post with tags?
```
{
  'title': 'my post',
  'content': 'blah, blah, blah...',
  'post_tag': [
    tags: [{
      name: 'hiking',
    }, {
      name: 'swimming'
    }]
  ]
}
```
If you can even visualize this correctly, this gets complicated real quickly. In Dgraph, Prisma, Hasura, or any other ORM, you don't actually think about the junction table (or the relationship properties). So it would be:
```
{
  'title': 'my post',
  'content': 'blah, blah, blah...',
  'tags': [{
    name: 'hiking',
  }, {
    name: 'swimming'
  }]
}
```
Which makes more sense conceptually.
However, we also have different problems to consider:
- If I add tags to a post, is it a set or an array?
- Should I delete all `post_tag` records first with that `post_id`, then add the new ones?
- I may not want to overwrite `tags`, just link to them
- I may want to link to them, and create new ones (upsert)
- There will be different `on conflict` needs, with different RLS policies
- I will be updating posts (with tags, and therefore post_tag), not just inserting new ones
This is just a sample of the problems to think about. So, for now, until PostgREST figures out a good mental model, virtual mutation columns would definitely be a life saver. They would allow me to choose the nested `update` or `insert` methods, work with transactions, and pass along the newly created `id` of the record (or the existing `id` in the case of `update`).
Then I could just do:

```
{
  'title': 'my post',
  'content': 'blah, blah, blah...',
  'tags': ['swimming', 'hiking']
}
```
With a computed column like:

```sql
create function tags(posts, text[])
returns setof post_tag
language sql as $$
  -- delete the existing post_tag links for this post
  delete from post_tag
  where post_id = $1.post_id;

  -- upsert tags, then link the post to all of them
  with inserted_tags as (
    insert into tags (name)
    select unnest($2)
    on conflict (name) do nothing
    returning tag_id
  ),
  all_tags as (
    -- "do nothing" returns no rows for pre-existing tags,
    -- so pick those up by name as well
    select tag_id from inserted_tags
    union
    select tag_id from tags where name = any($2)
  )
  insert into post_tag (post_id, tag_id)
  select $1.post_id, tag_id
  from all_tags
  returning *;
$$;
```
This is what I really want. While it seems like a lot of work, it is so much LESS work than creating an `rpc` function that does this AND inserts the post(s) records at the same time, in bulk as well.
I don't want this as a final solution, but this would definitely help simplify my code for now.
Thanks,
J