Lucas Kent
The current design made sense when I first wrote it, as it let us avoid duplicating connections when the connection supported awaiting on schemas. But we ended up needing...
We already cleared the node pool of dead connections, but we forgot to recreate the control connection when it goes down. The insert statements were moved outside of test_connection_handles_node_down to...
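A rough sketch of the reconnect step this fix implies. Everything here (`ControlConnection`, `Node`, `ensure_control_connection`) is a hypothetical stand-in for shotover's internals, not its real API:

```rust
// Hypothetical stand-ins for shotover's internals.
struct ControlConnection {
    closed: bool,
}

impl ControlConnection {
    fn connect(address: &str) -> Result<Self, String> {
        println!("recreating control connection to {address}");
        Ok(ControlConnection { closed: false })
    }

    fn is_closed(&self) -> bool {
        self.closed
    }
}

struct Node {
    address: String,
    is_up: bool,
}

/// If the control connection has died, rebuild it against any live node
/// instead of leaving it down for the rest of the session.
fn ensure_control_connection(
    control: &mut Option<ControlConnection>,
    nodes: &[Node],
) -> Result<(), String> {
    if control.as_ref().map_or(true, ControlConnection::is_closed) {
        let node = nodes
            .iter()
            .find(|node| node.is_up)
            .ok_or_else(|| "no live nodes to reconnect to".to_string())?;
        *control = Some(ControlConnection::connect(&node.address)?);
    }
    Ok(())
}
```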
Added the missing docs for Cassandra TLS, with both commented out by default to match the transform docs.
When I originally named it init_handshake_connection, I hadn't come across the term "control connection" in other drivers yet. Since then everyone on the team has taken to calling it a...
This documents what shotover's error handling for CassandraSinkCluster should look like. Currently we don't handle errors in the way described here, but I wanted us to agree on an error handling...
We currently handle:
* The topology task's control connection going down
* Connections created before shotover detects a node went down
* Connections created after shotover detects a node went...
I expect Message cloning to be expensive when we have lots of allocations in the Cassandra AST. If we wrapped the Message in an Arc then we could use [.make_mut()](https://doc.rust-lang.org/std/sync/struct.Arc.html#method.make_mut)...
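A minimal sketch of the copy-on-write pattern `Arc::make_mut` enables. The `Message` struct below is a simplified stand-in for shotover's real type, which holds the parsed Cassandra AST:

```rust
use std::sync::Arc;

// Simplified stand-in for shotover's Message type.
#[derive(Clone)]
struct Message {
    query: String,
}

fn main() {
    let original = Arc::new(Message {
        query: "SELECT * FROM foo".into(),
    });

    // Cheap: only the Arc's reference count is bumped, the Message is shared.
    let mut forwarded = Arc::clone(&original);

    // make_mut deep-clones the Message only because another Arc still points
    // at it; if this Arc were the sole owner, no clone would happen at all.
    Arc::make_mut(&mut forwarded).query.push_str(" LIMIT 1");

    assert_eq!(original.query, "SELECT * FROM foo");
    assert_eq!(forwarded.query, "SELECT * FROM foo LIMIT 1");
}
```

So the expensive clone is deferred until a mutation is actually needed, and skipped entirely for messages that pass through unmodified.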
Waiting on: https://github.com/shotover/shotover-proxy/pull/869. Don't bother reviewing this; it still needs a lot of thought and experimentation.
I noticed this docker-compose `build` field, which seems like a much better approach than having to manually build each image. I carefully tested it and observed:
* no regression in...
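For reference, a minimal example of the `build` field in a compose file; the service name and Dockerfile path here are illustrative, not the ones this change actually uses:

```yaml
services:
  cassandra:
    # Build the image from a local Dockerfile instead of pulling a prebuilt
    # one, so `docker-compose up` handles the image builds itself.
    build: ./docker/cassandra
    ports:
      - "9042:9042"
```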
When checking the prepared responses returned by the server:
* Every response should be taken into account; currently the response sent back to the client is not checked.
* The...
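A hedged sketch of what taking every response into account could look like. The `PreparedResponse` type and the id comparison are illustrative assumptions, not shotover's actual code:

```rust
// Illustrative stand-in: each node's PREPARED response carries a prepared id.
struct PreparedResponse {
    node: String,
    id: Vec<u8>,
}

/// Check every node's response, including the one destined for the client,
/// rather than trusting the first response alone.
fn check_prepared_responses(
    responses: &[PreparedResponse],
) -> Result<&PreparedResponse, String> {
    let (first, rest) = responses
        .split_first()
        .ok_or_else(|| "no responses received".to_string())?;
    for response in rest {
        if response.id != first.id {
            return Err(format!(
                "node {} returned prepared id {:?} but expected {:?}",
                response.node, response.id, first.id
            ));
        }
    }
    Ok(first)
}
```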