akka-persistence-postgres
[#155] New table: journal_persistence_ids
This PR introduces a new metadata table to hold journal-specific information, as described in issue #155.
Changelog:
- New table `journal_persistence_ids` and a trigger on `journal` inserts that auto-populates it. The new table name and its column names are configurable, like the other tables (see the sketch after this list).
- Use of the new table in some queries, both in `JournalQueries` and `ReadJournalQueries`. Some cases pointed out in 49a552f are not yet addressed.
- Migration scripts that take care of creating the new table and the associated triggers/functions.
- Migration script that populates the `journal_persistence_ids` table with existing `journal` data.
- Updates to the schemas used in the tests, both in the `core` and `migration` artifacts.
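For readers skimming the changelog, here is a minimal Slick-style sketch of what the new metadata table could look like. The column set is an assumption (the PR only states that the trigger compares incoming sequence numbers against a stored maximum), and both the table and column names are configurable, so treat the names below as placeholders:

```scala
import slick.jdbc.PostgresProfile.api._

// Hypothetical row shape: persistence id plus the highest sequence number seen so far.
// The real column set and names may differ; they are configurable in the PR.
final case class JournalPersistenceIdRow(persistenceId: String, maxSequenceNumber: Long)

class JournalPersistenceIds(tag: Tag)
    extends Table[JournalPersistenceIdRow](tag, "journal_persistence_ids") {
  def persistenceId = column[String]("persistence_id", O.PrimaryKey)
  def maxSequenceNumber = column[Long]("max_sequence_number")

  def * = (persistenceId, maxSequenceNumber) <> (JournalPersistenceIdRow.tupled, JournalPersistenceIdRow.unapply)
}
```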
Build is failing due to the following:
- tests defined in the `PartitionedJournalSpecTestCases` trait (PostgresJournalSpec.scala)
- tests defined in the `JournalSequenceActorTest` abstract class (JournalSequenceActorTest.scala)
What these tests have in common is concurrent inserts into the `journal` table with generated sequence numbers. Because the inserts are not sequential, an event with a smaller sequence_number can arrive after one with a bigger sequence_number, which triggers the new check introduced in this PR: it prevents inserts into the journal when the sequence_number is smaller than the maximum already stored.
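For illustration, the invariant enforced by the new check boils down to something like the snippet below. This is a conceptual Scala rendering only; in the PR the check lives in a Postgres trigger function, and the stored maximum comes from `journal_persistence_ids`:

```scala
// Conceptual sketch only: the real check is implemented as a Postgres trigger, not application code.
// `storedMax` stands in for the per-persistence-id maximum kept in journal_persistence_ids.
def insertRejected(storedMax: Map[String, Long], persistenceId: String, sequenceNr: Long): Boolean =
  sequenceNr < storedMax.getOrElse(persistenceId, 0L)

// With concurrent batches the inserts can interleave, so a row with sequence_nr 5 may reach
// the database after one with sequence_nr 21 for the same persistence id and be rejected.
```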
Not entirely sure how to solve these tests while keeping what they were originally trying to test. For instance, the failing tests defined by `PartitionedJournalSpecTestCases` are easily fixed if we do this:
```scala
// replace this
Future
  .sequence {
    senders.map { case (sender, idx) =>
      Future {
        writeMessages((idx * batchSize) + 1, (idx + 1) * batchSize, perId, sender.ref, writerUuid)
      }
    }
  }.futureValue(Timeout(Span(1, Minute)))

// with this
Future {
  senders.map { case (sender, idx) =>
    writeMessages((idx * batchSize) + 1, (idx + 1) * batchSize, perId, sender.ref, writerUuid)
  }
}.futureValue(Timeout(Span(1, Minute)))
```
However, it kind of defeats the purpose of the test.
> Build is failing due to the following:
>
> 1. tests defined in the `PartitionedJournalSpecTestCases` trait ([PostgresJournalSpec.scala](https://github.com/SwissBorg/akka-persistence-postgres/blob/master/core/src/test/scala/akka/persistence/postgres/journal/PostgresJournalSpec.scala))
> 2. tests defined in the `JournalSequenceActorTest` abstract class ([JournalSequenceActorTest.scala](https://github.com/SwissBorg/akka-persistence-postgres/blob/master/core/src/test/scala/akka/persistence/postgres/query/JournalSequenceActorTest.scala))
The name of the test is "store events concurrently without any gaps or duplicates among ordering (offset) values". From what I understand, this suite only tests the offset/ordering value, that is, its uniqueness.
Without the added trigger check, the constraint was weaker: it only verified that the pair `(persistence_id, sequence_nr)` was unique, whereas the new check also verifies the order of insertion.
The test should be updated to account for that: writes that are concurrent across persistence ids but sequential within each one. In the current test there is only one (`val perId = "perId-1"`); adding a second one would be interesting so that the case is not trivial.
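As a rough sketch of that shape, reusing `writeMessages`, `senders`, `batchSize` and `writerUuid` from the snippet above and adding a second, purely illustrative persistence id, the writes could stay concurrent across persistence ids but strictly sequential within each one:

```scala
// Sketch only: concurrent across persistence ids, sequential within each one.
// "perId-2" is hypothetical; the fixtures are those of the existing test.
val persistenceIds = List("perId-1", "perId-2")

Future
  .sequence {
    persistenceIds.map { perId =>
      Future {
        // a single future writes every batch of this persistence id in order
        senders.foreach { case (sender, idx) =>
          writeMessages((idx * batchSize) + 1, (idx + 1) * batchSize, perId, sender.ref, writerUuid)
        }
      }
    }
  }.futureValue(Timeout(Span(1, Minute)))
```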
@mkubala What do you think?
@lomigmegard and @mkubala 👋
Commit 9adf419 solves the test failures and makes the build pass. Not sure whether the tests' original objective still holds, though 👐