cxx-http
Explanation in README for performance vs node
I'm super curious about the details of why exactly the simple example in the README outperforms node.js, beyond the high-level "it doesn't have to do as much."
I'm probably not the only one wondering this after seeing the current version of the README, so I was thinking a short explanation could be added to the README, or linked from it, for those who are curious.
I know the performance section says "Without any real statistical significance", but I'm not sure what to make of that. Is there a test we could run that we could stand behind and call "statistically significant"?
If an inline explanation isn't warranted in the README, maybe we could add a wiki page and link to it from the README?
If anybody has pointers or links that shed light on this, I could take a crack at it, but I don't have any experience with libuv.
Right, at a high level, it doesn't need to do as much. That is really the main thing. Marshaling objects between JS and C++ means determining types, creating intermediate representations, garbage collection, etc. The fewer abstractions there are between you and the socket, the faster things are going to be. So we should write everything in ASM. haha J/K.
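To make the marshaling part concrete, here's a rough sketch of what a native addon function that just echoes a string has to do every time a value crosses the JS/C++ boundary. It's written against node's stable N-API purely for illustration; it is not code from node or this project, and the older direct-V8 bindings look different but pay the same kinds of costs:

```cpp
#include <node_api.h>
#include <vector>

// Hypothetical addon method: echoes a JS string back to JS. The point is how
// much work a single boundary crossing does, not the functionality itself.
static napi_value Echo(napi_env env, napi_callback_info info) {
  size_t argc = 1;
  napi_value argv[1];
  napi_get_cb_info(env, info, &argc, argv, nullptr, nullptr);

  // Determine the length, then copy the string out of the V8 heap into C++.
  size_t len = 0;
  napi_get_value_string_utf8(env, argv[0], nullptr, 0, &len);
  std::vector<char> buf(len + 1);
  napi_get_value_string_utf8(env, argv[0], buf.data(), buf.size(), &len);

  // Copy it back in as a brand-new, GC-managed JS string.
  napi_value result;
  napi_create_string_utf8(env, buf.data(), len, &result);
  return result;
}

static napi_value Init(napi_env env, napi_value exports) {
  napi_value fn;
  napi_create_function(env, "echo", NAPI_AUTO_LENGTH, Echo, nullptr, &fn);
  napi_set_named_property(env, exports, "echo", fn);
  return exports;
}

NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
```

Every call type-checks the argument, copies the string out of the V8 heap, and allocates a new GC-managed string on the way back. A server that never leaves C++ simply skips all of that.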
I think C++14/17 (c++1y, c++1z) are really interesting; they converge on the functional affordances I like about lisp and javascript. So I think I'm going to start writing more software in C++ (again).
As for statistical significance, the tests I ran were with Apache ab, which gives you the mean; for instance, I ran `ab -c 10 -n 1000 0.0.0.0:8000/foobar` against the server. You really need to run this test many times, take the mean of those runs over a period of time on several machines, and specify their hardware. There are lots of big-ass load-testing frameworks for this, but you could just as easily write something meaningful in bash.
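If someone wants to take a crack at it, something along these lines would do: shell out to ab many times, scrape the reported requests/sec, and report the mean and standard deviation. This is only a sketch (the URL and run count are placeholders, and a bash script would work just as well); it's in C++ only to match the rest of the thread:

```cpp
#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

int main() {
  // Placeholder URL and run count; ab prints a line like:
  //   Requests per second:    9000.12 [#/sec] (mean)
  const std::string cmd = "ab -c 10 -n 1000 http://0.0.0.0:8000/foobar 2>/dev/null";
  const int runs = 30;
  std::vector<double> rps;

  for (int i = 0; i < runs; ++i) {
    FILE* pipe = popen(cmd.c_str(), "r");  // POSIX
    if (!pipe) return 1;
    char line[512];
    while (fgets(line, sizeof(line), pipe)) {
      double value = 0.0;
      if (sscanf(line, "Requests per second: %lf", &value) == 1) rps.push_back(value);
    }
    pclose(pipe);
  }
  if (rps.empty()) return 1;

  double sum = 0.0;
  for (double v : rps) sum += v;
  const double mean = sum / rps.size();

  double var = 0.0;
  for (double v : rps) var += (v - mean) * (v - mean);
  const double stddev = std::sqrt(var / rps.size());

  printf("runs: %zu  mean: %.2f req/s  stddev: %.2f\n", rps.size(), mean, stddev);
  return 0;
}
```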
The docs around libuv are improving like crazy! IMO libuv is a great lib: it's sparse, it's portable, it's intuitive. You can read the header files and they will tell you most of what you need to know about the interfaces it exposes, but there is also this. Even better is using nodejs itself as a reference for how to use libuv well.
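For a feel of how little sits between you and the socket, here's a minimal sketch of a libuv TCP server that writes a canned HTTP response to every connection and then closes it. This is not the code from this project; the port, the response, and the complete lack of request parsing are all simplifications:

```cpp
#include <uv.h>
#include <cstdlib>

static const char RESPONSE[] =
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/plain\r\n"
    "Content-Length: 6\r\n"
    "Connection: close\r\n"
    "\r\n"
    "hello\n";

static void on_close(uv_handle_t* handle) { free(handle); }

static void on_write(uv_write_t* req, int status) {
  // Response sent (or failed); either way, close the connection.
  uv_close(reinterpret_cast<uv_handle_t*>(req->handle), on_close);
  free(req);
}

static void on_alloc(uv_handle_t*, size_t suggested, uv_buf_t* buf) {
  buf->base = static_cast<char*>(malloc(suggested));
  buf->len = suggested;
}

static void on_read(uv_stream_t* client, ssize_t nread, const uv_buf_t* buf) {
  if (nread > 0) {
    // No parsing: answer any request bytes with the canned response.
    uv_read_stop(client);
    uv_write_t* req = static_cast<uv_write_t*>(malloc(sizeof(uv_write_t)));
    uv_buf_t out = uv_buf_init(const_cast<char*>(RESPONSE), sizeof(RESPONSE) - 1);
    uv_write(req, client, &out, 1, on_write);
  } else if (nread < 0) {
    uv_close(reinterpret_cast<uv_handle_t*>(client), on_close);
  }
  free(buf->base);
}

static void on_connection(uv_stream_t* server, int status) {
  if (status < 0) return;
  uv_tcp_t* client = static_cast<uv_tcp_t*>(malloc(sizeof(uv_tcp_t)));
  uv_tcp_init(server->loop, client);
  if (uv_accept(server, reinterpret_cast<uv_stream_t*>(client)) == 0) {
    uv_read_start(reinterpret_cast<uv_stream_t*>(client), on_alloc, on_read);
  } else {
    uv_close(reinterpret_cast<uv_handle_t*>(client), on_close);
  }
}

int main() {
  uv_loop_t* loop = uv_default_loop();
  uv_tcp_t server;
  uv_tcp_init(loop, &server);

  struct sockaddr_in addr;
  uv_ip4_addr("0.0.0.0", 8000, &addr);
  uv_tcp_bind(&server, reinterpret_cast<const struct sockaddr*>(&addr), 0);
  uv_listen(reinterpret_cast<uv_stream_t*>(&server), 128, on_connection);

  return uv_run(loop, UV_RUN_DEFAULT);
}
```

Compare that with what node does for the same request: parse it in C, hand the result up to your JS handler through V8, run the handler, and marshal the response back down.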
Thanks for the response. I'd like to dig into the performance increase a little more, and I think others would be curious as well. This gets 5x more requests/sec, and there might be takeaways from this that people could learn from... maybe build an alternative, leaner JS server on top of libuv (not me, though).
So we have 1) marshaling objects and 2) garbage collection as the two main reasons you mentioned for the performance increase. I wonder if it would be possible to isolate just the garbage collection (especially since that's a product of the language, and harder to change). Maybe turn it off on a high-mem machine? I wouldn't know whether that's possible.
The main thing I'm wondering right now is how much of this speedup is because of JS -> C++ and how much is because of the specific implementation of node.
Rather, in this case there is no marshaling of objects and no garbage collection, since there is no Javascript in this project :) Having a leaner Javascript server would have diminishing returns. Javascript buys you an awesome amount of convenience, and that is what you are paying for. In most cases node is awesome. Ultimately everything has a cost; usually it's a trade-off between performance and convenience.
Ok, it sounds like your conclusion is that most if not all of the speedup is because of the language change, and that there probably isn't any particular aspect of node's layer on top of libuv that is causing significant performance differences. Does that sound right?
Well, it's like this: libuv is written in C. Libuv sits underneath Nodejs. Nodejs is a little bit of C++ and mostly Javascript on top. So the language hasn't really changed; it's just that we're removing a major layer of abstraction :+1:
@hij1nx You can use the same benchmarks as Haywire does. Though I don't really see the point of convincing people, because those who are looking for these kinds of "frameworks" know the performance advantages they provide and often have a very specific need :dancer: