cr-sqlite
Is this project dead?
The creator of the project has a full-time job; that's why releases are slow.
Also, look at Elm: there have been no releases since 2019, yet the creator is still working on it and showed progress recently.
I'm full time on https://zero.rocicorp.dev/ these days.
When that project becomes more mature I'll have more bandwidth again but I don't see that happening for at least 1-2 years (https://zero.rocicorp.dev/docs/roadmap).
Fly uses it to share state across thousands of machines. There are probably more production systems that use cr-sqlite as a daily driver.
It's a software project. Software is not alive or dead; it either works or it doesn't. A lack of recent commits doesn't mean the code stops working.
We do still use this, yes. Our fork currently has a significant change: we repurposed db_version to increment monotonically and serially, by 1, per site_id. That means we can determine what "gaps" we have in our crsql_changes without an additional table mapping db_version to the original version.
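To illustrate why serial numbering makes gap detection trivial, here is a minimal sketch (hypothetical code, not the fork's actual implementation): when each site's db_version runs 1, 2, 3, ... with no skips, any hole in the received versions is itself the list of missing changes.

```python
# Hypothetical sketch: with db_version incrementing serially by 1 per site,
# gaps in received changes fall out of the version numbers directly.

def find_gaps(seen_versions):
    """Return inclusive (start, end) ranges of db_versions missing from a
    stream that should run 1, 2, 3, ... max(seen_versions)."""
    gaps = []
    expected = 1
    for v in sorted(set(seen_versions)):
        if v > expected:
            gaps.append((expected, v - 1))
        expected = v + 1
    return gaps

# A node that has received versions 1-3 and 6-7 from some site_id knows it
# is missing 4-5, with no extra mapping table required.
print(find_gaps([1, 2, 3, 6, 7]))  # [(4, 5)]
```

With the stock db_version (which can jump when a node ingests remote changes), the same trick wouldn't work, which is what the extra mapping table would otherwise be for.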
We share 7.5 million rows globally. The cluster handles over 300K operations per second, in aggregate, with a p99 "time to replication" of ~1 second (that's how long it takes for a commit on one node to be durably replicated to the other ~1000 nodes). It's tested with Antithesis.
This is all via our corrosion project. I originally created it, but it's now mostly maintained by @somtochiama.
Thanks for sharing your experience; it's great to see cr-sqlite getting some serious use!
I'm still not quite sure what the purpose of these db_version changes in your fork is. Is it a bug fix or an optimisation that would be useful in cr-sqlite generally, or is it specific to your use case?
I currently sync by managing the tracked_peers table mentioned in the docs, and haven’t noticed any gaps or issues. Does your fork eliminate the need for this table?
It is optimised for our use case but might be useful to others. When syncing a large number of changes with several nodes, you might want to track and request just the changes that you've missed from a particular node. With the current db_version, you may be requesting changes you've already seen, as the db_version increments for every change the node receives, especially if you are syncing between different nodes.
The flip side is that it makes tracking a little more complex. In corrosion, we have a separate table that stores the ranges of versions that we are missing from each node (and the max db_version from each node in crsql_db_versions table).
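The bookkeeping described above can be sketched as follows (a hypothetical illustration, not corrosion's actual code): per peer, keep the max db_version seen and the ranges still missing, so a later backfill can shrink those ranges.

```python
# Hypothetical sketch of per-peer tracking: max version seen (analogous to
# the crsql_db_versions table mentioned above) plus missing-version ranges.

class PeerTracker:
    def __init__(self):
        self.max_version = 0   # highest db_version seen from this peer
        self.missing = []      # list of inclusive (start, end) gaps

    def record(self, version):
        """Note that `version` was received from this peer."""
        if version > self.max_version:
            # everything between the old max and this version is now a gap
            if version > self.max_version + 1:
                self.missing.append((self.max_version + 1, version - 1))
            self.max_version = version
        else:
            # a backfilled version: split/shrink any range containing it
            updated = []
            for start, end in self.missing:
                if start <= version <= end:
                    if start < version:
                        updated.append((start, version - 1))
                    if version < end:
                        updated.append((version + 1, end))
                else:
                    updated.append((start, end))
            self.missing = updated

t = PeerTracker()
for v in [1, 2, 5, 3]:
    t.record(v)
print(t.max_version, t.missing)  # 5 [(4, 4)]
```

The `missing` list is exactly what a node would request from a peer during sync, rather than re-requesting everything since some watermark.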
It's specific to a use case where any node can exchange changes with any other node. When a change is created, Corrosion disseminates it to 90%+ of nodes by broadcasting it and rebroadcasting it over its mesh topology network. There's also a sync fallback process that runs quite often and is more deterministic (a node tells another node every db_version it is missing from all the nodes it knows about, as well as the latest version it knows from every node).
This removes a single point of failure. As long as a node has been able to share a changeset with one other node, the change will eventually be available to the whole cluster (likely within a couple of seconds at most).
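The sync-fallback exchange described above might look like the following (a hypothetical sketch; the message shapes and function names are invented for illustration): node A reports, per site, the latest version it holds and the ranges it is missing; node B replies with whatever it can fill in.

```python
# Hypothetical sketch of the deterministic sync fallback: the requester
# declares what it has and what it's missing for every site it knows about.

def build_sync_request(state):
    """state: {site_id: {"max": int, "missing": [(start, end), ...]}}"""
    return {
        site: {"have_up_to": s["max"], "need": list(s["missing"])}
        for site, s in state.items()
    }

def answer_sync_request(request, local_changes):
    """local_changes: {site_id: {db_version: change}}. Return only the
    changes the requester is missing and that we actually hold."""
    reply = {}
    for site, want in request.items():
        held = local_changes.get(site, {})
        versions = []
        for start, end in want["need"]:
            versions.extend(v for v in range(start, end + 1) if v in held)
        # also send anything newer than the requester's high-water mark
        versions.extend(v for v in held if v > want["have_up_to"])
        if versions:
            reply[site] = {v: held[v] for v in sorted(set(versions))}
    return reply

# Node A is missing versions 3-4 from site-1 and has seen up to 5;
# node B holds 3, 4 and a newer change 6, so it sends all three back.
a_state = {"site-1": {"max": 5, "missing": [(3, 4)]}}
b_changes = {"site-1": {3: "c3", 4: "c4", 6: "c6"}}
print(answer_sync_request(build_sync_request(a_state), b_changes))
```

Because any peer can answer the request, the change propagates even if the originating node is gone, which is the "no single point of failure" property.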
I would like to point out our sqlite-sync project: https://github.com/sqliteai/sqlite-sync