Make tests wait for the connected/clientReady event.
We assumed the client connection was ready when writing MQTT packets; these tests are strengthened so they do not have race conditions.
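For illustration, here is a minimal sketch (not the PR's actual test code) of waiting for the broker before publishing, so the test cannot race the CONNECT flow. The clientReady event name comes from this thread; the port and topic are arbitrary.

```js
const aedes = require('aedes')()
const net = require('net')
const mqtt = require('mqtt')

const server = net.createServer(aedes.handle)
server.listen(1883, () => {
  const client = mqtt.connect('mqtt://localhost:1883')

  // Only start writing packets once the broker says the client is ready,
  // instead of assuming the connection is usable right after connect().
  aedes.once('clientReady', () => {
    client.publish('hello/world', 'hi', () => {
      client.end()
      server.close()
    })
  })
})
```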
I think this depends on what we decide to do with #362
@robertsLando Yes, it's inspired by #362
@mcollina I have added those tests in my PR.
Is this still required even if we have decided to queue incoming packets?
I think we should wait to land #362
@mcollina So what do we do with this now? Is it unnecessary?
This currently conflicts.
I'm not opposed to this landing.
@mcollina CI fixed
Please hold a bit, I’m going to make a big change. This PR may be closed after that if my work is successful. Btw, I prefer to use rebase rather than merge
@gnought :+1:
@gnought Any news about this?
@robertsLando Will keep you posted. My upcoming changes will:
- replace the packet queue before the connected event
- redesign the _nextBatch
The upcoming changes are ready in my local working copy. I will submit a PR after the following PRs have been merged into the main branch: https://github.com/moscajs/aedes/pull/409 https://github.com/moscajs/aedes/pull/408 https://github.com/moscajs/aedes/pull/416
The PR will be big and will simplify a lot. After I submit it, this PR can be closed, and I will need your help figuring out how to deal with the protocol decoder.
@robertsLando Here is an early preview (https://github.com/gnought/aedes/commits/next) of what my next PR will be after migrating #408, #416 and #418 into the main branch. The main change is introducing a mqtt-stream-parser to replace _nextBatch and _queue. However, we still need a better way to handle the PROXY decoding.
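For context, here is a rough sketch, under my own assumptions, of how a stream-based parser can replace the manual _nextBatch/_queue bookkeeping: the connection is piped into a Writable that feeds mqtt-packet's parser, so Node's stream machinery does the buffering. This is not the actual mqtt-stream-parser from the next branch, just an illustration of the idea; handlePacket is a hypothetical callback.

```js
const { Writable } = require('stream')
const mqttPacket = require('mqtt-packet')

function createMqttStreamParser (handlePacket) {
  const parser = mqttPacket.parser()
  parser.on('packet', handlePacket)
  parser.on('error', err => { /* surface malformed packets, e.g. destroy the connection */ })

  return new Writable({
    write (chunk, encoding, done) {
      // A PROXY protocol hook would have to inspect/strip chunk here,
      // before the bytes reach parser.parse (see the discussion below).
      parser.parse(chunk)
      done()
    }
  })
}

// Usage: conn.pipe(createMqttStreamParser(packet => { /* dispatch to the client */ }))
```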
@gnought Did you do any performance testing?
How do you handle packets while the client is connecting, since you removed the queueLimit? I saw you paused the client in the connect handler.
About PROXY decoding, the hook should be added here: https://github.com/gnought/aedes/blob/next/lib/mqtt-stream-parser.js#L19, like we were doing in nextBatch before.
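To make the hook location concrete, here is a hedged sketch of stripping a PROXY protocol v1 header from the first chunk before it reaches parser.parse. It assumes the whole textual header arrives in the first chunk and ignores the binary v2 format; the _proxyChecked flag and proxyHeader field are illustrative, not aedes' actual API.

```js
function stripProxyV1Header (chunk, client) {
  if (!client._proxyChecked) {
    client._proxyChecked = true
    // PROXY v1 header, e.g. "PROXY TCP4 192.168.0.1 10.0.0.1 56324 1883\r\n"
    if (chunk.slice(0, 6).toString() === 'PROXY ') {
      const end = chunk.indexOf('\r\n')
      if (end !== -1) {
        client.proxyHeader = chunk.slice(0, end).toString()
        return chunk.slice(end + 2) // hand only the MQTT bytes to the parser
      }
    }
  }
  return chunk
}
```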
I also suggest adding some more comments to the code to make it clearer why we do certain operations. As I said about the client pause, it could be difficult for new developers to understand how the code works.
@robertsLando When we pause the stream, Node.js does the buffering itself (https://nodejs.org/api/stream.html#stream_buffering), and we still get that buffered data after resume. Building our own queue would just reinvent the wheel.
We can add comments on the code side after we polish it. :)
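A small sketch of the behaviour described above: while a Readable is explicitly paused, Node buffers incoming data internally, and the data is delivered once the stream is resumed, so no custom queue is needed.

```js
const { PassThrough } = require('stream')

const conn = new PassThrough()
conn.pause()                  // e.g. while the CONNECT handshake is in progress
conn.write(Buffer.from('a'))  // buffered inside the stream itself
conn.write(Buffer.from('b'))

conn.on('data', chunk => console.log('got', chunk.toString()))
conn.resume()                 // the buffered 'a' and 'b' are delivered now
```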
@gnought Interesting :) I didn't know about that. How could we handle an overflow then? Could we set a limit somehow?
I'll answer myself: the highWaterMark option is used for that. Should this become an aedes option?
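As a minimal sketch, assuming the limit would simply be forwarded to the underlying stream: highWaterMark caps how much Node buffers before writes start reporting backpressure. Whether aedes exposes it as an option is an open question; the option below is plain Node.

```js
const { PassThrough } = require('stream')

const conn = new PassThrough({ highWaterMark: 16 * 1024 }) // 16 KiB internal buffer
console.log(conn.writableHighWaterMark) // 16384
```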
Do we handle the case where the stream write returns false (i.e. the stream buffer limit is reached)?
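For reference, the usual way to honour that signal is to stop writing when write() returns false and continue on 'drain'; a hedged sketch, with writePacket as a hypothetical helper:

```js
function writePacket (conn, buf, next) {
  if (conn.write(buf)) {
    process.nextTick(next)   // buffer still below highWaterMark, keep going
  } else {
    conn.once('drain', next) // buffer is full, wait for it to flush
  }
}
```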
If the chunk content in the process function is the same as buf in nextBatch, then, like @robertsLando, I also suppose it should take place before the parser.parse method.
@gnought Have you tried your aedes branch behind a proxy yet?
@gnought What about your next branch? Would you mind submitting a PR?
Will do. Let's see if I can break it into smaller PRs, and how the PROXY architecture should look.
@robertsLando What I understand is that pipe() will do the overflow handling in a smart way: https://github.com/nodejs/node/blob/1afec971304e5c6294bbf61bddb39b55e76b4672/lib/_stream_readable.js#L702-L720
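Roughly what the linked code does, as a sketch: pipe() pauses the source whenever dest.write() returns false and resumes it on 'drain', so buffering is bounded by the destination's highWaterMark instead of growing without limit.

```js
const fs = require('fs')

const source = fs.createReadStream('big.bin')
const dest = fs.createWriteStream('copy.bin', { highWaterMark: 16 * 1024 })

source.pipe(dest) // no manual pause/resume or drain handling needed
```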
@gnought I don't understand, the code you linked doesn't seem to handle the overflow... We should handle it.