leveldown performance analysis
Here is a comparison of leveldown's runtime (total wall clock time) on its benchmarks and tests, between the V8 API implementation and the NAPI implementation.
This is using x64 release builds of node.js and leveldown, running on
- Windows 10 10586
- Intel Xeon E5-1620 @ 3.60 GHz
- 16 GB 1600 MHz DDR3 RAM
- Kingston SHPM2280P2 240 GB SSD
Node.js and leveldown are built from these commits:
- node.js https://github.com/ianwjhalliday/node/commit/0d92cf596289f5b4603d764e3e4ae6626aca76e0
- leveldown-napi https://github.com/ianwjhalliday/leveldown/commit/1c4060028dfcd634326a32e759d3d7c2e64c88bb
- leveldown-v8 https://github.com/ianwjhalliday/leveldown/commit/ef05005754f28c9cbaa26155b362c00c337e5b57
Each test was run three times. Raw data here https://gist.github.com/ianwjhalliday/236bdb53448a372536793580c0882197
Averaged results:
| Perf Test | leveldown-v8 | leveldown-napi | Delta |
|---|---|---|---|
| db-bench.js | 61 sec | 62 sec | 0% |
| write-random.js | 170 sec | 170 sec | 0% |
| write-sorted.js | 95 sec | 100 sec | 5% |
| tests | 30 sec | 66 sec | 120% |
db-bench.js and write-random.js appear to perform equally well, while write-sorted.js appears to have become slightly slower. The test suite is taking significantly longer, over twice as long.
These are interesting results that support our belief that performance is hindered only by very frequent calls from JavaScript code into native module code. I have not verified this, but I suspect the benchmarks exercise LevelDOWN's internals and LevelDB itself, rather than LevelDOWN's API layer.
So in the case of the benchmarks, the overhead of NAPI appears insignificant relative to the workload LevelDB is handling, whereas in the case of the tests the NAPI overhead is significant, presumably because the tests are focused on exercising the API that LevelDOWN exposes.
We currently know of two areas where our NAPI prototype has room to improve:
- creating constructor functions (e.g. Database, Iterator, Batch) currently does not take advantage of v8's FunctionTemplate optimization
- throwing an exception with a simple text message is a chatty operation requiring three NAPI calls (napi_create_string, napi_create_error, napi_throw)
The next thing I will do is whip up an API for creating a constructor with methods that uses a v8::FunctionTemplate properly, and see how that changes these numbers. I expect this will make a large difference. Next I'll add an API to create and throw a new error from a text message in one API call. I expect this to have a minor to nil effect on performance, but I'll try it since it will be easy and quick. Finally, if there is still a gap after that, I will do profiling to see where time is being spent.
I will also get timing numbers for x86 release builds sometime this week.
Getting the API to use v8::FunctionTemplate improved the numbers significantly for the test suite. Benchmarks remained the same, including the 5% slowdown in write-sorted.js.
Updated gist with raw numbers https://gist.github.com/ianwjhalliday/236bdb53448a372536793580c0882197#file-leveldown-perf-results-raw-with-create-constructor-improvement
Average test suite time is now 37 seconds, bringing the NAPI version down to only 23% slower than the V8 version.
Relevant node.js and leveldown commits that include the new API for this improvement are:
- node.js https://github.com/ianwjhalliday/node/commit/8b8c86cab391457c333b97c813f5157392b06e69
- leveldown-napi https://github.com/ianwjhalliday/leveldown/commit/ae53bd6a8cebf0972c6d096ac87e7ff6201d1203