Bulk operations
This RFC aims to add new top-level mutations for each entity, allowing multiple top-level entities to be mutated using a single mutation field.
This RFC covers only top-level mutations; batch operations in nested mutations (on relations) are out of scope.
We distinguish two kinds of bulk operations: either you modify a known number of entities ("multi" operations), or you modify entities matched by a filter ("batch" operations).
Multi operations
Multi operations are simpler and more straightforward than batch operations, because they usually just wrap the arguments of the individual sub-mutations into an array.
Currently, you can execute equivalent operations in a single mutation using multiple aliased sub-mutations, e.g.:
mutation {
  article1update: updateArticle(by: {id: ...}, data: ...) { ok }
  article2update: updateArticle(by: {id: ...}, data: ...) { ok }
}
The multi operation API will allow you to pass all operations in a single array argument:
mutation {
  multiUpdateArticle(data: [
    {by: {id: ...}, data: ...},
    {by: {id: ...}, data: ...},
  ]) {
    ok
  }
}
Considered types of multi operations:
- create
- update
- delete
- upsert
All nested (relation-level) mutations supported in simple operations are also supported in multi operations.
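For illustration, a multi create combining nested mutations could look as follows (the field name multiCreateArticle, the entity fields, and the input shape are assumptions for this sketch, not a finalized design):

```graphql
mutation {
  multiCreateArticle(data: [
    {title: "First", tags: [{connect: {id: ...}}]},
    {title: "Second", tags: [{create: {name: "news"}}]},
  ]) {
    ok
    nodes {
      id
    }
  }
}
```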
Batch operations
Batch operations allow modifying many entities by providing a filter. It is currently not possible to perform an equivalent operation within a single mutation: you must first fetch the matched entities and execute the mutation afterwards.
Only batch update and batch delete make sense as batch operations. Also, the set of available nested mutations is limited, either because some operations make no sense or because their implementation is constrained by the currently supported set of nested mutations.
Batch deletes
This operation will accept a filter and all matching entities will be deleted.
mutation {
  batchDeleteArticle(filter: {archivedAt: {isNull: false}}) {
    ok
    nodes {
      id
    }
  }
}
We should consider a safety check to avoid deleting all entities by mistake via an empty filter.
Batch updates
This operation accepts a filter and data describing the modification to apply to all matched entities.
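A batch update could look like this (the field name batchUpdateArticle and the example fields are illustrative, mirroring the batch delete shape above):

```graphql
mutation {
  batchUpdateArticle(
    filter: {archivedAt: {isNull: false}},
    data: {state: hidden}
  ) {
    ok
    nodes {
      id
    }
  }
}
```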
Supported nested mutations by relation type:
One-has-one owning and one-has-one inverse:
- ✓ delete
- ✓ disconnect
- ✓ update
- ✓ create
- ✗ connect - an entity is connected using a unique identifier, so you cannot connect one unique entity to multiple places
One-has-many:
- ✓ create
- ✗ delete - requires unique identifier
- ✗ disconnect - requires unique identifier
- ✗ update - requires unique identifier
- ✗ connect - requires unique identifier
Many-has-one:
- ✓ delete
- ✓ disconnect
- ✓ update
- ✓ create
- ✓ connect
Many-has-many (both sides):
- ✓ disconnect
- ✓ connect
- ✗ delete - requires unique identifier
- ✗ update - requires unique identifier
- ? create - possible, but does it make sense?
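To illustrate the many-has-many case, a batch update could connect a tag to every matched article (the field, relation, and filter names here are hypothetical):

```graphql
mutation {
  batchUpdateArticle(
    filter: {state: {eq: published}},
    data: {tags: [{connect: {slug: "featured"}}]}
  ) {
    ok
  }
}
```

Disconnect works symmetrically; delete and update of a junction partner would require a unique identifier, which a filter-based operation cannot provide, hence the limitations above.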
Result object
Similar to the node field in simple mutations, all result objects contain a nodes field with the mutated entities. In multi operations, entities are sorted to match the order of the input. In batch operations, the order of entities is undefined.
Error paths in response are prefixed with a single _IndexPathFragment.
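For example, a validation error on the second item of a multi update might be reported with a path whose leading element is the item's index (the surrounding error shape here is assumed for illustration; only the leading index fragment is the point):

```json
{
  "errors": [
    {
      "paths": [[{"index": 1}, {"field": "data"}, {"field": "title"}]],
      "message": "Field 'title' is required"
    }
  ]
}
```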
Transactions
Every sub-mutation is executed in a transaction.
Backward incompatible changes
Introducing many new GraphQL types may cause name collisions (similar to the known Meta suffix collision). The implementation should account for this and try to avoid possible collisions.
Other Contember features impact
Bulk operations should work with other Contember features, including validations, ACL, the event log, and actions (#315).
Future scope
- we might consider supporting more nested operations. This would require batch-like operations at the nested level
Discussion
- the proposed terms are "multi" and "batch". Do you have a better idea? Aren't they confusing?
- are there any top-level bulk operations missing here?
- what do you think about the "delete all" safety check mentioned above?