Implement socket options struct and builder for socket with options
Currently the socket doesn't provide any means to customize its behaviour. Some options are required by the ZMQ spec (for example, the high water mark for message buffers); others would just be nice to have (for example, the number of retries on reconnect before returning an error). A rough sketch of the kind of options struct and builder I have in mind is below.
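Purely illustrative sketch (names, fields and defaults are placeholders, not a proposal for the final API):

```rust
// Hypothetical sketch only - not existing code in the crate.
#[derive(Debug, Clone)]
pub struct SocketOptions {
    /// High water mark for message buffers (ZMQ_SNDHWM / ZMQ_RCVHWM).
    pub hwm: usize,
    /// Number of reconnect retries before returning an error.
    pub reconnect_retries: usize,
    /// Explicit peer identity instead of a generated uuid.
    pub identity: Option<Vec<u8>>,
}

impl Default for SocketOptions {
    fn default() -> Self {
        Self { hwm: 1000, reconnect_retries: 3, identity: None }
    }
}

/// Builder so callers only set the options they care about.
#[derive(Default)]
pub struct SocketOptionsBuilder(SocketOptions);

impl SocketOptionsBuilder {
    pub fn hwm(mut self, hwm: usize) -> Self { self.0.hwm = hwm; self }
    pub fn reconnect_retries(mut self, n: usize) -> Self { self.0.reconnect_retries = n; self }
    pub fn identity(mut self, id: impl Into<Vec<u8>>) -> Self { self.0.identity = Some(id.into()); self }
    pub fn build(self) -> SocketOptions { self.0 }
}
```

A socket constructor (or a hypothetical `with_options` variant) could then take a `SocketOptions`, with `SocketOptions::default()` as the no-surprise default.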
Similar: https://github.com/zeromq/zmq.rs/issues/131
Was about to create a new issue for the ability to specify a socket's identity - right now PeerIdentities are by uuid only. But I guess this falls under the builder issue.
Yeah. I'm pretty sure it is. I'll try to work on it on a weekend if I have time..
I've added a prototype implementation. It looks a bit bulky in terms of small changes here and there, but I don't know a better way of doing it right now.
Looks pretty functional to me - I can perform userspace routing now. Thanks!
@CorinJG cool. Merged it to master. Let me know if there are other minor changes that might help you. I don't have much time to work on the library, but I'm trying to help people with specific requests.
That's very kind of you, thanks a lot:)
I could be wrong, but I can't see that TCP Nagle is disabled anywhere in the lib (e.g. simply at socket creation via set_nodelay) - tokio::net::TcpStream sockets have Nagle enabled by default. It should be disabled for the ZMQ protocol, since batching is performed at the ZMQ level, so Nagle can only hurt throughput. If it is disabled and I just couldn't see it, sorry!
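For illustration, on the tokio side it would be a one-liner right after the connection is established (untested sketch, not pointing at an actual spot in the lib):

```rust
use tokio::net::TcpStream;

// Hypothetical helper: connect and immediately disable Nagle.
async fn connect_no_delay(endpoint: &str) -> std::io::Result<TcpStream> {
    let stream = TcpStream::connect(endpoint).await?;
    // ZMQ batches frames itself, so Nagle's algorithm only adds latency here.
    stream.set_nodelay(true)?;
    Ok(stream)
}
```

The same call would be needed on the accept path for bound sockets.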
@CorinJG it seems you are right. However I think this should be tracked as a separate issue.
@Alexei-Kornienko Yes, that's sensible - it looks like it's part of the ZeroMQ protocol, so it shouldn't be a socket option.
@Alexei-Kornienko as #133 added only the peer identity, no other options are implemented at the moment? We are interested in replacing our dependency on the from-source FFI version (which we currently need to vendor for cross-compilation), but would need send/receive timeouts (ZMQ_SNDTIMEO, ZMQ_RCVTIMEO) to be supported. Is there a way to achieve this already through zmq.rs or would they need to be added (and implemented) here?
@domenicquirl I'm afraid they're not supported - like several other options.
@poyea do you have an estimate of how much effort it would be to add timeouts / where they would need to be handled (besides adding the option itself)? Could I help with a limited part of my work-time without being familiar with the codebase?
It's hard to tell, but I'd say maybe a medium-sized change for ZMQ_{SND,RCV}TIMEO. I can imagine that, at the Socket level, we need a clock. Then we handle failures separately according to the option. Revisiting https://github.com/zeromq/libzmq/blob/master/src/socket_base.cpp#L1378 may also help.
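One way I could imagine the receive side looking (very rough sketch; the helper, the error mapping and the `Option<Duration>` option field are all hypothetical, not existing code):

```rust
use std::{future::Future, time::Duration};
use tokio::time::timeout;

/// Hypothetical helper: wrap a receive future with an optional ZMQ_RCVTIMEO-style deadline.
/// `timed_out` stands in for whatever error variant the library would add for this case.
async fn with_rcvtimeo<T, E, F>(rcvtimeo: Option<Duration>, recv: F, timed_out: E) -> Result<T, E>
where
    F: Future<Output = Result<T, E>>,
{
    match rcvtimeo {
        // No option set: keep the current behaviour and wait indefinitely.
        None => recv.await,
        // Option set: an elapsed timer maps to a dedicated error (similar to EAGAIN in libzmq).
        Some(dur) => match timeout(dur, recv).await {
            Ok(res) => res,
            Err(_elapsed) => Err(timed_out),
        },
    }
}
```

The send path could get an analogous wrapper for ZMQ_SNDTIMEO.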
Following up on this after looking through the stack over the weekend. I am still unsure where the correct place for a timeout would be here, in particular because of the buffering that happens as part of Framed{Read,Write}. If there is buffer space available, poll_ready will always return Ready, independently of the state of the socket. Only if the buffer is full will poll_ready attempt to write it out. Here, as in poll_flush, we then go down to the runtime, and what happens depends on it - e.g., tokio offers readable and try_read (and similar methods for writing) on UnixStream, whereas async_std only implements Async{Read,Write} (looking at the IPC case here). start_send, meanwhile, always just encodes into the buffer.
All of this makes some sense for an async context, but since all Pending responses translate to BufferFull (because, well, they only happen when the buffer is full), any messages here would be lost? Hence my remaining confusion as to how to add a sensible timeout: for poll_ready, either there is buffer space and we don't block but also don't actually send yet, or there is no space and, if we can't flush, the message is dropped.
For receiving, the SUB socket only awaits its queue. Could a timeout be added to this operation, i.e. is the queue cancel-safe? That would already help out a lot - the main problem for us is still that we cannot have the task that receives updates block unboundedly. A try_send/try_recv would be a different angle on the problem, but probably leads to the same design considerations.
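If the queue await turns out to be cancel-safe, a caller-side version of this would already be possible today (sketch, assuming the tokio runtime and the crate's SubSocket/SocketRecv/ZmqMessage names):

```rust
use std::time::Duration;
use tokio::time::timeout;
use zeromq::{SocketRecv, SubSocket, ZmqMessage};

// Caller-side workaround sketch; whether dropping the recv future on timeout
// can lose a message is exactly the cancel-safety question above.
async fn recv_or_timeout(sub: &mut SubSocket, dur: Duration) -> Option<ZmqMessage> {
    match timeout(dur, sub.recv()).await {
        Ok(Ok(msg)) => Some(msg),
        // Socket error or timer elapsed; a real caller would want to distinguish the two.
        _ => None,
    }
}
```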