
Prevent corrupting connections when cancelling futures

Open · pimeys opened this issue on Aug 29, 2020 · 1 comment

This has not bubbled up yet, but I strongly suspect we're going to hit problems when cancelling futures that are reading or writing data with Tiberius.

If we `select` between a Tiberius future and some other future, such as a timer, and the Tiberius side is in the middle of reading or writing data on the wire, the operation is dropped as soon as the timer fires. At that point the wire is in a dysfunctional state: either partial data from the previous request is still waiting to be read, or we wrote only part of the data we meant to send.

So if we implement a service that, for example, runs a search query from an input field and cancels the in-flight query whenever the user types more characters to replace it with a new one, the connection will probably end up in a state where we must reconnect for it to work again.
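
To make the failure mode concrete, here is a minimal sketch of that race, assuming a connected `tiberius::Client` on a tokio runtime. The table, query text, and parameter are invented, and the exact `Client::query` / `QueryStream` method names may vary between tiberius versions:

```rust
use std::time::Duration;

use futures::{AsyncRead, AsyncWrite};
use tiberius::Client;
use tokio::time::sleep;

// Whichever branch loses the select! is dropped, including a query that is
// halfway through writing its request or reading its response.
async fn racy_search<S>(client: &mut Client<S>, term: &str) -> Result<(), tiberius::error::Error>
where
    S: AsyncRead + AsyncWrite + Unpin + Send,
{
    tokio::select! {
        // If the timer wins, this future is dropped mid-read or mid-write.
        res = client.query("SELECT id FROM docs WHERE title LIKE @P1", &[&term]) => {
            let _rows = res?.into_first_result().await?;
        }
        _ = sleep(Duration::from_millis(50)) => {
            // The connection may now hold a partially written request or an
            // unread response; the next query on this client sees stale bytes.
        }
    }
    Ok(())
}
```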

Some ways of fixing this:

  • Mark the connection as dirty until all data has been read and written (see the sketch after this list)
  • Implement the streams so they only peek at the data until we know the headers, and read the full packet only after that
  • Before every new query, check the wire status and, if it is dirty, find a way to clean it before triggering the new query
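
As a rough illustration of the first and third points, a wrapper could track the dirty state and refuse to send a new query until the wire has been cleaned. Everything here is hypothetical: `GuardedClient` and `drain_pending` are not part of tiberius, and `drain_pending` stands in for whatever packet-level resynchronisation we end up writing:

```rust
use futures::{AsyncRead, AsyncWrite};
use tiberius::{error::Error, Client};

// Hypothetical wrapper sketching the "dirty connection" idea.
pub struct GuardedClient<S> {
    client: Client<S>,
    dirty: bool,
}

impl<S: AsyncRead + AsyncWrite + Unpin + Send> GuardedClient<S> {
    pub async fn simple_query(&mut self, sql: &str) -> Result<(), Error> {
        if self.dirty {
            // A previous query was cancelled before its response was fully
            // read; clean the wire before sending anything new.
            self.drain_pending().await?;
            self.dirty = false;
        }

        // Assume the worst: if this future is dropped before the end of the
        // function, the flag stays set for the next caller.
        self.dirty = true;
        let stream = self.client.simple_query(sql).await?;
        stream.into_results().await?; // consume the full response
        self.dirty = false;
        Ok(())
    }

    async fn drain_pending(&mut self) -> Result<(), Error> {
        // Placeholder: peek packet headers and discard payloads until we hit
        // an end-of-message boundary.
        Ok(())
    }
}
```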

pimeys · Aug 29 '20 12:08

Talked about this in person with @pimeys today. This is crucial for the driver to work correctly. So far my hunch is that we need to move the parsing layer from the future into the Client.

That way, if a future is cancelled midway through, the client still knows which task was in progress, and the next query can run the previous task to completion before starting a new one.
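
A very rough sketch of that shape, with all types invented for illustration (the real `Client` wraps a TDS codec rather than this stub `Transport`):

```rust
use std::io;

// Stand-ins for the internal transport and result types.
struct Transport;
struct Rows;

impl Transport {
    async fn read_to_end_of_message(&mut self) -> io::Result<()> { Ok(()) }
    async fn send_query(&mut self, _sql: &str) -> io::Result<()> { Ok(()) }
    async fn read_rows(&mut self) -> io::Result<Rows> { Ok(Rows) }
}

/// Whether a previously started request still has unread data on the wire.
enum Pending {
    None,
    AwaitingResponse,
}

struct Client {
    transport: Transport,
    pending: Pending,
}

impl Client {
    async fn query(&mut self, sql: &str) -> io::Result<Rows> {
        // Because the in-flight state lives on the client rather than in the
        // dropped future, the next call can run the old task to completion.
        if let Pending::AwaitingResponse = self.pending {
            self.transport.read_to_end_of_message().await?;
            self.pending = Pending::None;
        }

        self.transport.send_query(sql).await?;
        self.pending = Pending::AwaitingResponse;
        let rows = self.transport.read_rows().await?;
        self.pending = Pending::None;
        Ok(rows)
    }
}
```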


Another thing we discussed is TCP pooling: we need to signal that if the client errors out, the connection must be dropped and reopened. This prevents a TCP connection in an invalid state from being reused.
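
And a sketch of that pooling rule, again with invented `Pool` and `Conn` types and the async plumbing of a real pool elided; the only point it demonstrates is that an errored connection is never returned to the idle set:

```rust
struct Conn {
    broken: bool,
    // ... the underlying tiberius::Client would live here
}

struct Pool {
    idle: Vec<Conn>,
}

impl Pool {
    fn with_conn<T, E>(&mut self, f: impl FnOnce(&mut Conn) -> Result<T, E>) -> Result<T, E> {
        let mut conn = self.checkout();
        let result = f(&mut conn);
        if result.is_err() {
            // Don't try to salvage the TCP stream after an error; mark it so
            // check-in closes it and a later checkout dials a fresh one.
            conn.broken = true;
        }
        self.check_in(conn);
        result
    }

    fn checkout(&mut self) -> Conn {
        self.idle.pop().unwrap_or(Conn { broken: false })
    }

    fn check_in(&mut self, conn: Conn) {
        if !conn.broken {
            self.idle.push(conn);
        }
        // Broken connections are simply dropped here, closing the socket.
    }
}
```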

yoshuawuyts · Sep 15 '20 15:09