[SUGGESTION] How about adding a Key-Value Store collection with targeted performance optimization?
Is your feature request related to a problem? Please describe.
I need high-performance query, insert, and update over roughly 100 billion key-value records. I tested LiteDB v5 (the current version): insert throughput was about 100-200 ops/s with the disk at 100% utilization. My PC is an i5-6300HQ (4 cores / 4 threads, 2.3 GHz) with 20 GB RAM and a 120 GB SSD. I used a single-threaded for loop to insert:
public class CodeTask {
    public CodeTask(string code, int taskId) {
        Code = code;
        TaskId = taskId;
    }

    // LiteDB's default mapper serializes public properties (not fields),
    // so these are properties; Code doubles as the document _id.
    [LiteDB.BsonId]
    public string Code { get; set; }

    public int TaskId { get; set; }
}
public static void TestEntry() {
    var sw = Stopwatch.StartNew();   // needs using System.Diagnostics;
    var cs = new CodeStore();        // my wrapper around a LiteDB collection
    var lastProcessedCount = 0;

    for (int i = 1; i <= 100_000_000; i++) {
        cs.SaveCodeBindTaskId($"1234567890abcd{100_000_000 - i}", i);

        // Every ~2 seconds, report throughput since the last report.
        if (sw.Elapsed.TotalSeconds >= 2) {
            var cnt = i - lastProcessedCount;
            Console.WriteLine($"i={i} cnt={cnt} avg={cnt / 2} ops/s");
            lastProcessedCount = i;
            sw.Restart();
        }
    }
    Console.WriteLine("done");
}
Describe the solution you'd like
Add a special collection type with targeted performance optimizations for key-value store workloads.
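To make the request concrete, here is a purely hypothetical sketch of what such a collection's API could look like. None of these names (IKeyValueStore, GetKeyValueStore) exist in LiteDB today; they only illustrate the shape of the feature.

// Hypothetical API sketch only -- LiteDB has no key-value store type today.
// The idea: skip the full BSON document layer and store raw key/value
// pairs behind a single clustered index on the key.
public interface IKeyValueStore<TKey, TValue> {
    void Set(TKey key, TValue value);          // insert or update (upsert)
    bool TryGet(TKey key, out TValue value);   // point lookup
    bool Delete(TKey key);
}

// Imagined usage, if LiteDatabase ever exposed such a store:
// using var db = new LiteDB.LiteDatabase("Filename=TestDB.ldb");
// var store = db.GetKeyValueStore<string, int>("codes");  // hypothetical
// store.Set("1234567890abcd42", 42);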
Describe alternatives you've considered
I considered Microsoft's FASTER, but it is almost purely in-memory, which makes it largely impractical here: even storing simple key-value pairs requires a large amount of memory.
I found the problem: when using Connection=shared, inserts are very slow, and the same happens when creating var db = new LiteDatabase("Filename=TestDB.ldb;") for every operation.
But even after fixing that, 2,000 ops/s is still not fast enough.
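For reference, a minimal sketch of the faster variant implied above: open one direct-mode LiteDatabase and reuse it for every insert (the collection name "codetasks" is an assumption; CodeTask is the class from the test code):

using LiteDB;

public static class SingleConnectionDemo {
    public static void Run() {
        // Open ONE direct-mode connection and keep it for the whole run.
        // Creating a new LiteDatabase per insert (or using Connection=shared)
        // pays file-open and locking costs on every single operation.
        using var db = new LiteDatabase("Filename=TestDB.ldb;Connection=direct;");
        var col = db.GetCollection<CodeTask>("codetasks");

        for (int i = 1; i <= 1_000_000; i++) {
            col.Insert(new CodeTask($"1234567890abcd{i}", i));
        }
    }
}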
You can drastically speed it up by doing bulk inserts as well. Break the data into groups of 1,000 or 10,000 records and insert each group in one call: col.InsertBulk(bulkRecords), where bulkRecords is the List of records for that group. See the sketch below.
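A minimal sketch of that batching approach, assuming the CodeTask class from above (InsertBulk is an existing LiteDB collection method; the batch size of 10,000 is just the suggestion from this comment):

using System.Collections.Generic;
using LiteDB;

public static class BulkInsertDemo {
    public static void Run() {
        using var db = new LiteDatabase("Filename=TestDB.ldb;Connection=direct;");
        var col = db.GetCollection<CodeTask>("codetasks");

        const int batchSize = 10_000;
        var bulkRecords = new List<CodeTask>(batchSize);

        for (int i = 1; i <= 100_000_000; i++) {
            bulkRecords.Add(new CodeTask($"1234567890abcd{100_000_000 - i}", i));

            if (bulkRecords.Count == batchSize) {
                col.InsertBulk(bulkRecords);   // insert the whole batch at once
                bulkRecords.Clear();
            }
        }
        if (bulkRecords.Count > 0) col.InsertBulk(bulkRecords);  // final partial batch
    }
}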
Yes, I am already using that approach, but LiteDB is not stable, and it takes a lot of disk space. The data is just the two fields declared above,

{ Code = $"1234567890abcd{100000000 - i}", TaskId = i }

where Code is the [LiteDB.BsonId] string and TaskId an int, yet each record takes about 150 bytes. At that rate, 10 billion records would need roughly 1.5 TB before counting indexes.
If space is an issue, you might try using SharpZipLib or something similar before putting a record in the db: compress the data, then add the compressed bytes to the LiteDB record. When retrieving, you would obviously need to decompress it again. This will hurt performance, though, and is probably only really beneficial when adding heavily compressible data/BSON objects.
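To illustrate the compress-before-store idea, here is a sketch using the built-in System.IO.Compression instead of SharpZipLib (the CompressedRecord shape is made up for this example):

using System.IO;
using System.IO.Compression;

// Illustrative document shape: the key stays queryable as _id,
// only the payload bytes are gzip-compressed.
public class CompressedRecord {
    [LiteDB.BsonId]
    public string Code { get; set; }
    public byte[] Payload { get; set; }   // compressed value bytes
}

public static class Gzip {
    public static byte[] Compress(byte[] data) {
        using var output = new MemoryStream();
        using (var gz = new GZipStream(output, CompressionLevel.Optimal))
            gz.Write(data, 0, data.Length);
        return output.ToArray();
    }

    public static byte[] Decompress(byte[] data) {
        using var input = new GZipStream(new MemoryStream(data), CompressionMode.Decompress);
        using var output = new MemoryStream();
        input.CopyTo(output);
        return output.ToArray();
    }
}

Note that for a record this small the gzip container overhead (about 18 bytes of header and trailer) would actually make it bigger, which matches the caveat above.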
The data is only

{ Code = $"1234567890abcd{100000000 - i}", TaskId = i }

so the problem is LiteDB's index or storage logic: it takes too much space and cannot be compressed. I want to store very large data sets, 10 billion+ records.