
High memory usage

Open mardy opened this issue 6 years ago • 6 comments

Hi there! I'm running a performance comparison between several databases, using the ioarena benchmarking tool. I'm running the tool with the following parameters:

valgrind --tool=massif ioarena -m sync -D unqlite -v 2048 -B set -n 10000

What I found surprising is that memory usage grows linearly, up to 83.6 MB, which seems like a lot compared with the other DB engines in the same benchmark (upscaledb: 4.3 MB, sqlite: 4.0 MB, rocksdb: 27.8 MB). I wrote the unqlite driver for ioarena myself, so it's possible that the problem is in the driver; however, running valgrind with the leak-check tool doesn't report any leaks (it appears that all the RAM is properly freed when the DB is closed).

Given that the FAQ states that the DB should also be usable in embedded devices, I wonder if such a high memory usage could be due to some bug.

mardy avatar Nov 23 '18 07:11 mardy

This is a cache-related issue: nothing reaches the disk until the handle is closed or a transaction is committed. You can purge the cache by manually committing the transaction via unqlite_commit() each time you reach a certain threshold (e.g. every 10K insertions).
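For reference, the suggested workaround can be sketched like this (a minimal sketch assuming the standard UnQLite KV-store API; the file name and the 10K threshold are arbitrary choices, not values from this thread):

```c
#include <stdio.h>
#include <string.h>
#include "unqlite.h"

int main(void)
{
    unqlite *pDb;
    char key[32], val[32];
    int i, rc;

    rc = unqlite_open(&pDb, "test.db", UNQLITE_OPEN_CREATE);
    if (rc != UNQLITE_OK) return 1;

    for (i = 0; i < 100000; i++) {
        snprintf(key, sizeof(key), "key-%d", i);
        snprintf(val, sizeof(val), "value-%d", i);
        /* nKeyLen == -1 tells UnQLite to take strlen(key) itself */
        rc = unqlite_kv_store(pDb, key, -1, val, (unqlite_int64)strlen(val));
        if (rc != UNQLITE_OK) break;
        /* Flush dirty pages to disk every 10K insertions so the
         * in-memory page cache does not grow unboundedly. */
        if ((i + 1) % 10000 == 0) {
            unqlite_commit(pDb);
        }
    }

    unqlite_close(pDb); /* releases all remaining allocated memory */
    return rc == UNQLITE_OK ? 0 : 1;
}
```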

symisc avatar Nov 23 '18 23:11 symisc

That doesn't seem to work: even if I run

valgrind --tool=massif ./src/ioarena -m sync -D unqlite -v 2048 -B batch -n 10000

which periodically calls unqlite_commit(), the memory usage does not decrease.

mardy avatar Nov 26 '18 06:11 mardy

@mardy It sounds like you are seeing the same behavior I reported in issue #70. I ended up having to call unqlite_close() periodically to work around this.

kmvanbrunt avatar Nov 26 '18 06:11 kmvanbrunt

I wonder whether insertions made after calling unqlite_commit() will increase memory usage further, or whether the process will stay within the 83.6 MB already allocated in the test.

If it stays within the previously allocated memory, then it's not a big deal; we would just have to call unqlite_commit() more frequently.

DisableAsync avatar Sep 12 '19 10:09 DisableAsync

Yes. You have to understand that UnQLite keeps some of the freed memory in an internal pool before releasing it to the OS. We do this so that successive read and write operations do not request new memory blocks from the underlying OS again, which is very costly. To answer your question: unqlite_commit() will not increase memory usage; in fact, it should definitively release the memory of dirty pages synced to disk and keep the rest in the pool. So yes, you should call unqlite_commit() periodically, say every 5 minutes or every 100K insertions. unqlite_close(), on the other hand, will free all the allocated memory (you can use Valgrind to confirm this).

symisc avatar Sep 13 '19 02:09 symisc

There is an error in the reference-count handling that frees the page object in the page_unref function. The original code uses a post-decrement, nRef = pPage->nRef--;, so nRef receives the old reference count. The count therefore never tests as 0, the page object is never freed, and memory consumption keeps growing. Changing it to a pre-decrement, nRef = --pPage->nRef;, should solve the excessive memory consumption.

hekaikai avatar Mar 25 '20 09:03 hekaikai