Jan Novotný
Currently the `inScope` constraint is not supported in the `extraResults` part of the GraphQL / REST API, even though base evitaQL allows it. This means we're not able to specify a query for...
This issue is part of the larger issue #503. It introduces a default conflict-resolution strategy on the entity level, so that no two concurrent transactions may upsert/remove the same entity. Also no...
The problem can be seen in the test `io.evitadb.api.EntityFetchingFunctionalTest#shouldEagerlyDeepFetchReferenceEntityBodiesFilteredAndOrderedViaGetEntity`, when the following query is used:

```java
final SealedEntity product = session.getEntity(
	Entities.PRODUCT,
	productsWithLotsOfStores.keySet().iterator().next(),
	referenceContent(
		Entities.STORE,
		filterBy(
			entityPrimaryKeyInSet(filteredStores),
			entityLocaleEquals(LOCALE_CZECH),
			entityHaving(
				and(
					entityPrimaryKeyInSet(filteredStores),
					entityLocaleEquals(LOCALE_CZECH)...
```
The viewstamped replication protocol requires commits to be acknowledged by a majority of nodes. The problem with evitaDB is that it might be quite memory hungry, and keeping at least three full replicas...
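For reference, the majority quorum required by viewstamped replication for `n` nodes is `⌊n/2⌋ + 1`. A minimal sketch (illustrative helper, not evitaDB API):

```java
// Illustrative sketch: majority quorum size for a viewstamped
// replication cluster of the given node count.
public class Quorum {
    public static int majority(int nodeCount) {
        if (nodeCount < 1) {
            throw new IllegalArgumentException("at least one node required");
        }
        // integer division rounds down, so this is floor(n/2) + 1
        return nodeCount / 2 + 1;
    }
}
```

Note that a three-node cluster needs two acknowledgements, which is why "at least three full replicas" is the practical minimum for tolerating a single node failure.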
The current transaction engine must support a new mode: the internal WAL must handle transactions in two states, PREPARED and COMMITTED. First it appends to the WAL in PREPARED state and waits until it...
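The two-state WAL described above could be sketched as follows; the `TwoPhaseWal` class, record layout, and method names are hypothetical illustrations, not the actual evitaDB internals:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a WAL whose records move from PREPARED to
// COMMITTED once the cluster acknowledges the transaction.
public class TwoPhaseWal {
    public enum State { PREPARED, COMMITTED }

    public static final class Record {
        final long txId;
        State state = State.PREPARED;
        Record(long txId) { this.txId = txId; }
    }

    private final List<Record> records = new ArrayList<>();

    // step 1: append the transaction in PREPARED state; the caller then
    // waits for acknowledgement from a majority of nodes
    public Record prepare(long txId) {
        final Record record = new Record(txId);
        records.add(record);
        return record;
    }

    // step 2: once the quorum acknowledged, flip the record to COMMITTED
    public void commit(Record record) {
        if (record.state != State.PREPARED) {
            throw new IllegalStateException("record is not in PREPARED state");
        }
        record.state = State.COMMITTED;
    }
}
```

On recovery, records still in PREPARED state would need to be resolved (rolled forward or discarded) depending on the cluster consensus, which is the part the issue leaves open.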
The client needs to be able to use different endpoints for read-only and for read-write sessions / the management API. When connection establishment fails, it should execute internal retry logic...
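The internal retry logic mentioned above might look like the following sketch; the `withRetry` helper and the exponential-backoff parameters are assumptions for illustration, not documented evitaDB client behavior:

```java
import java.util.function.Supplier;

// Illustrative retry helper: retries a failing connection attempt with
// exponential backoff until it succeeds or the attempt budget runs out.
public class ConnectionRetry {
    public static <T> T withRetry(Supplier<T> connect, int maxAttempts, long initialBackoffMs)
            throws InterruptedException {
        long backoff = initialBackoffMs;
        RuntimeException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return connect.get();
            } catch (RuntimeException e) {
                lastFailure = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(backoff);
                    backoff *= 2; // double the wait between attempts
                }
            }
        }
        throw lastFailure;
    }
}
```

A real client would additionally distinguish retryable failures (connection refused, timeout) from non-retryable ones (authentication errors) before looping.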
When a new replica starts, it needs to get up to date. We expect that the exporting facility of evitaDB in a cluster environment will be configured to an S3 endpoint...
Currently all backups are "manual", i.e. they require external API calls. This feature should allow configuring scheduled backups at regular intervals or after a certain amount of "work". These rules are...
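The two triggering rules described above (regular interval, or amount of "work" performed) could be combined as in this sketch; the class, its thresholds, and the interpretation of "work" as bytes written are assumptions for the example:

```java
// Hypothetical sketch of the scheduling rule: a backup becomes due either
// after a fixed interval has elapsed, or after a configured amount of
// "work" (here: bytes written) has accumulated since the last backup.
public class BackupSchedule {
    private final long intervalMillis;
    private final long workThresholdBytes;

    public BackupSchedule(long intervalMillis, long workThresholdBytes) {
        this.intervalMillis = intervalMillis;
        this.workThresholdBytes = workThresholdBytes;
    }

    public boolean isBackupDue(long millisSinceLastBackup, long bytesWrittenSinceLastBackup) {
        return millisSinceLastBackup >= intervalMillis
            || bytesWrittenSinceLastBackup >= workThresholdBytes;
    }
}
```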
Since we're planning to support a single-writer, multiple-readers setup, we need to handle the situation when a client wants to open a read-write session and there are multiple pods, of which only...
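Routing a read-write session in such a setup might look like this sketch; the `Pod` type, its `writer` flag, and the router are hypothetical names invented for the example:

```java
import java.util.List;
import java.util.Optional;

// Illustrative sketch: in a single-writer / multiple-readers deployment,
// read-write sessions may only target the (at most one) writer pod,
// while read-only sessions can go to any pod.
public class SessionRouter {
    public record Pod(String host, boolean writer) {}

    public static Optional<Pod> selectForReadWrite(List<Pod> pods) {
        return pods.stream().filter(Pod::writer).findFirst();
    }
}
```

When `selectForReadWrite` returns an empty result, the client has to either fail fast or retry until a writer pod becomes available, which is exactly the situation this issue is about.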
Currently we support exporting files (backups and all other files) to the local filesystem. When running in a cluster environment, it's generally recommended to store data on a network operating...