BunSpreader

go server on channels

Open · lem8r opened this issue 2 years ago · 11 comments

Look ma, no mutex

lem8r avatar Jul 26 '22 14:07 lem8r
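For readers following along: a "server on channels" queue usually means a single goroutine owns the data and all access is serialized through channels, so no lock appears in user code. A minimal sketch under that assumption (names are illustrative, not taken from the repo):

```go
package main

import "fmt"

// queueServer owns the slice exclusively; every push and pop goes
// through a channel, so user code never touches a mutex.
func queueServer(enqueue <-chan string, dequeue chan<- string, done <-chan struct{}) {
	var items []string
	for {
		// Only offer a dequeue when there is something to hand out:
		// sending on a nil channel blocks forever, disabling that case.
		var out chan<- string
		var head string
		if len(items) > 0 {
			out = dequeue
			head = items[0]
		}
		select {
		case v := <-enqueue:
			items = append(items, v)
		case out <- head:
			items = items[1:]
		case <-done:
			return
		}
	}
}

func main() {
	enq := make(chan string)
	deq := make(chan string)
	done := make(chan struct{})
	go queueServer(enq, deq, done)

	enq <- "a"
	enq <- "b"
	fmt.Println(<-deq, <-deq) // → a b
	close(done)
}
```

The nil-channel trick in the select is the standard way to disable the dequeue case while the queue is empty.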

btw, I was able to hang my machine by setting the count arg of the rust client to a high value. It ate all 16 GB of RAM.

lem8r avatar Jul 26 '22 14:07 lem8r

OH RLY, let me review

ThePrimeagen avatar Jul 26 '22 15:07 ThePrimeagen

Also... I thought channels have Mutices below

ThePrimeagen avatar Jul 26 '22 15:07 ThePrimeagen

@lem8r what is your discord handle?

ThePrimeagen avatar Jul 26 '22 15:07 ThePrimeagen

> Also... I thought channels have Mutices below

Indeed, there must be some kind of locking. I was wondering how the idiomatic Go approach would affect performance, and it turns out to be almost identical to the queue with a mutex.
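The commenters are right about locking underneath: Go channels are themselves protected by a runtime lock (the runtime's channel struct embeds a mutex), so "no mutex" mostly relocates the locking rather than removing it. For comparison, the mutex version being discussed is conceptually just this (an illustrative sketch, not the repo's actual code):

```go
package main

import (
	"fmt"
	"sync"
)

// mutexQueue is the conventional alternative: a slice guarded by a lock.
type mutexQueue struct {
	mu    sync.Mutex
	items []string
}

func (q *mutexQueue) Push(v string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.items = append(q.items, v)
}

// Pop returns the front item, or false when the queue is empty.
func (q *mutexQueue) Pop() (string, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.items) == 0 {
		return "", false
	}
	v := q.items[0]
	q.items = q.items[1:]
	return v, true
}

func main() {
	q := &mutexQueue{}
	q.Push("a")
	q.Push("b")
	v, _ := q.Pop()
	fmt.Println(v) // → a
}
```

Both versions serialize access to the slice; they differ mainly in where the contention shows up, which matches the near-identical benchmark numbers reported here.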

> OH RLY, let me review

count = 1M gives 1 GB of RAM for the client process. I think it spawns too many Tokio tasks that sit waiting on the connection limit.

> @lem8r what is your discord handle?

I'm sorry I don't have one

lem8r avatar Jul 26 '22 17:07 lem8r

How about this: take server.go (the original) and put it in its own package (so we can have 2 mains), alongside your implementation. That way I can potentially play around with it either on stream or in a YT video.

ThePrimeagen avatar Jul 26 '22 19:07 ThePrimeagen

I did it. Now there are two server packages and Dockerfiles. My tests showed that the channel version is slightly slower. I also tried disabling GC (the GOGC=off env var), with no impact.

lem8r avatar Jul 26 '22 21:07 lem8r

I wondered: what if the mutex isn't the bottleneck, but the HTTP stack is? So I made a separate commit with server.go on the fasthttp stack, still using the mutex-guarded queue. And it spreads wider.

lem8r avatar Jul 27 '22 07:07 lem8r

This is awesome

Let me check this all out.

ThePrimeagen avatar Jul 27 '22 16:07 ThePrimeagen

omg, fasthttp version is almost as fast as rust

lem8r avatar Jul 28 '22 19:07 lem8r

It is worth noting that fasthttp provides Request and Response pools as a way to avoid expensive memory allocations by re-using old buffers. When you release requests and responses back to their pools, those buffers can be re-acquired by a later request, so they only need to be allocated once and perhaps occasionally resized. To avoid data races you must be sure not to use a response buffer after it has been released back to its pool, but in exchange the HTTP server puts far less pressure on the garbage collector.
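The acquire/release discipline described above (fasthttp exposes it as AcquireRequest/ReleaseRequest and AcquireResponse/ReleaseResponse) is built on the same pattern as the standard library's sync.Pool. A minimal sketch of the reuse rules, using a plain buffer pool rather than fasthttp's own types:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable byte buffers so a hot path doesn't
// allocate a fresh one per request.
var bufPool = sync.Pool{
	New: func() any { return new(bytes.Buffer) },
}

func handle(payload string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset() // a pooled buffer may still hold a previous request's data
	defer bufPool.Put(buf)

	buf.WriteString("response: ")
	buf.WriteString(payload)
	// Copy the bytes out before Put runs: touching buf after release
	// is exactly the data race the comment above warns about.
	return buf.String()
}

func main() {
	fmt.Println(handle("hello")) // → response: hello
}
```

The two easy-to-forget rules are visible here: reset on acquire (stale contents) and never touch the buffer after release.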

fasthttp can be amazing under the right circumstances, but it can be frustrating that some things which net/http handles for you automatically must be managed explicitly. Using a framework like atreugo or fiber can smooth over some of fasthttp's caveats in more complex situations.

dasper avatar Jul 28 '22 23:07 dasper