cannotModifyImmutableAttributeError should give me document id
Is your feature request related to a problem? Please describe.
Using the JavaScript client, I create a transaction and push a lot of documents with `createOrReplace()`. Sometimes I get a `Cannot modify immutable attribute "_type"` error from `.commit()`, and while I understand why it happens, the error does not carry enough detail to tell me which of the many documents caused the problem. Could this error object be extended a bit to add a `documentId` property or similar?
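For context, a minimal sketch of the pattern described, assuming the `@sanity/client` package (whose `transaction()` / `createOrReplace()` / `commit()` API matches the calls above); the project config and documents are placeholders:

```js
const {createClient} = require('@sanity/client')

const client = createClient({
  projectId: 'my-project', // placeholder
  dataset: 'production',
  apiVersion: '2023-01-01',
  token: process.env.SANITY_TOKEN,
  useCdn: false,
})

const manyDocs = [
  {_id: 'doc-1', _type: 'article', title: 'one'},
  {_id: 'doc-2', _type: 'post', title: 'two'}, // suppose doc-2 already exists with a different _type
  // ...hundreds more
]

const tx = client.transaction()
for (const doc of manyDocs) tx.createOrReplace(doc)

tx.commit().catch((err) => {
  // Fails with: Cannot modify immutable attribute "_type"
  // ...but nothing in the error says WHICH document caused it.
  console.error(err.message)
})
```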
Describe the solution you'd like
"error": {
"attribute": "_type",
"description": "Cannot modify immutable attribute \"_type\"",
"type": "cannotModifyImmutableAttributeError",
"documentId": "fooo_id"
}
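With that field, the caller could map the failure back to the offending document. A hypothetical consumer sketch follows; note that `documentId` is the proposal, not the current API, and the `err.response.body` path is an assumption about how the client surfaces the server's error body:

```js
tx.commit().catch((err) => {
  // Hypothetical: relies on the server returning `documentId` in the
  // error body as proposed above.
  const offender = err.response?.body?.error?.documentId
  console.error(`commit failed on document ${offender ?? '<unknown>'}: ${err.message}`)
})
```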
I did a simple test to compare genjidb and sqlite; here are the results:
write & read performance

```
genjidb total_write_size: 1301.977 MB, cost: 2.286s
genjidb total_read_size:  1301.977 MB, cost: 474.471211ms
sqlite  total_write_size: 1301.977 MB, cost: 7.697s
sqlite  total_read_size:  1301.977 MB, cost: 955.562619ms

# sqlite with batch commit when writing data
sqlite  total_write_size: 1301.977 MB, cost: 3.824s
sqlite  total_read_size:  1301.977 MB, cost: 1.115740125s
```
genjidb is a little bit faster than sqlite.
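For reference, a minimal sketch of what the batch-commit sqlite variant looks like in Go, assuming the `mattn/go-sqlite3` driver and a placeholder payload (this is not the exact harness behind the numbers above). Wrapping all inserts in a single transaction avoids a commit, and thus an fsync, per row, which is presumably where the 7.697s vs 3.824s write gap comes from.

```go
package main

import (
	"database/sql"
	"fmt"
	"time"

	_ "github.com/mattn/go-sqlite3" // assumed driver; registers "sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "foo.db")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS profile (id INTEGER PRIMARY KEY, data BLOB)`); err != nil {
		panic(err)
	}

	payload := make([]byte, 1<<20) // 1 MB placeholder per row
	start := time.Now()

	// Batch commit: one explicit transaction around all inserts instead of
	// one implicit transaction (and one fsync) per INSERT statement.
	tx, err := db.Begin()
	if err != nil {
		panic(err)
	}
	for i := 0; i < 1024; i++ {
		if _, err := tx.Exec(`INSERT INTO profile(data) VALUES (?)`, payload); err != nil {
			panic(err)
		}
	}
	if err := tx.Commit(); err != nil {
		panic(err)
	}

	fmt.Printf("wrote %d MB in %s\n", 1024, time.Since(start))
}
```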
data compression
```
▶ du -h profile-data-file   # original profile data directory
1.3G    profile-data-file
▶ du -h /tmp/badger         # genjidb (badger) data directory
1.3G    /tmp/badger
▶ du -h foo.db              # sqlite data directory
1.3G    foo.db
```
@crazycs520 Doing a benchmark is a good attempt; however, it does not help answer the problem I raised:
- The workload you are testing with is not a real-world workload. Conprof never continuously writes bulk profiling data: it only writes at a one-minute interval, so it makes no difference whether a write takes 2s or 20s.
- Even with a real-world workload, the duration metric alone is trivial when other important aspects are not considered. For example, I could implement a simple db that beats both genjidb and sqlite with a 0.0001s write latency simply by performing the write to a memory buffer. The following questions should at least be checked:
  - What are the crash guarantees of genjidb? What happens when a write has been performed and then the power is lost? Will there be data loss or corruption?
  - What is the memory consumption? (a minimal measurement sketch follows this list)
  - How do they perform when the process has been running for a long time?
  - How stable is it?
- There are also other aspects that need to be evaluated when comparing different solutions. To name a few:
  - Code quality
  - Feature sets
  - Behavior under our possible future workloads
  - etc.
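To make the memory question above concrete, here is a minimal Go sketch that samples process memory while a write loop runs. The `writeOnce` stub and the compressed interval are placeholders; a real check would swap in actual genjidb or sqlite writes and run for hours, watching whether heap and total memory stay flat.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// writeOnce stands in for one Conprof-style write of a profile batch.
// Replace the body with real genjidb or sqlite calls when measuring.
func writeOnce() {
	_ = make([]byte, 1<<20) // 1 MB placeholder payload
}

func main() {
	var m runtime.MemStats
	for i := 0; i < 60; i++ { // simulate 60 write intervals
		writeOnce()
		runtime.ReadMemStats(&m)
		fmt.Printf("write %2d: heap=%3d MB, sys=%3d MB, numGC=%d\n",
			i, m.HeapAlloc>>20, m.Sys>>20, m.NumGC)
		time.Sleep(100 * time.Millisecond) // compressed stand-in for the 1-minute interval
	}
}
```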