redis-async-rs
Memoize `connect` (or at least `paired_connect`)
It makes sense to me that the default behavior of cloning a connection is to re-use the underlying PairedConnectionInner and thus the underlying RespConnection. This is efficient, and it's probably what the user wants anyway.
Would it make sense to memoize `connect` or `paired_connect`, so that passing in the same address re-uses the same connection? Of course, I can build that on top of this library, but I feel like it would make this library easier to use with no drawbacks.
I do think pubsub connections have to be separate from paired connections and from each other, but of course I could be wrong about that. If that's the case, it doesn't make sense to memoize `connect`, but I think it still makes sense to memoize `paired_connect`.
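To make the proposal concrete, here is a minimal synchronous sketch of the memoization idea. The names (`ConnectionCache`, the stand-in `PairedConnection`) are illustrative, not the library's actual API, and the real version would dial asynchronously:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Stand-in for the library's paired connection type; the real
// PairedConnection wraps a shared PairedConnectionInner.
#[derive(Clone, Debug)]
struct PairedConnection {
    addr: String,
}

// Hypothetical cache: connecting to the same address twice returns a
// clone of the same cached connection instead of dialing again.
#[derive(Default)]
struct ConnectionCache {
    inner: Mutex<HashMap<String, PairedConnection>>,
}

impl ConnectionCache {
    fn paired_connect(&self, addr: &str) -> PairedConnection {
        let mut map = self.inner.lock().unwrap();
        map.entry(addr.to_string())
            .or_insert_with(|| {
                // In the real library this would dial Redis; cloning the
                // cached result re-uses the underlying connection.
                PairedConnection { addr: addr.to_string() }
            })
            .clone()
    }
}
```

With this in place, a second `paired_connect("127.0.0.1:6379")` never creates a second entry.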
I say "no drawbacks", but of course there are a couple of drawbacks:
- We would need to lock the global cache whenever we construct a new connection. Since connecting is asynchronous, that could block multiple threads at once. But those threads would effectively "lock up" anyway: each of them would otherwise be creating its own new connection, which probably takes about as long as waiting for the first connection to be created.
- This can be mitigated by storing each cache entry as an enum: either a clonable future resolving to a reference to the connection, or the connection itself. That way, we'd only have to lock the value (rather than the whole cache) while it hasn't loaded yet. It takes up a bit more memory, which probably doesn't matter much.
- The global cache would need to live in a static somewhere. This may affect portability or performance in ways that I personally am not sensitive to. It also technically means there's a potential connection leak if the user is completely unaware of this mechanism and connects to a bunch of Redis instances for some reason. I'm not sure what the solution would be there, but I think we could cull the cache when all instances of a connection are dropped. I guess whatever's in the global cache would have to be a stub that doesn't count towards the number of live instances of the connection to support this.
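The "pending or ready" entry can be sketched with a two-level lock in plain std Rust: the map lock is held only long enough to fetch or insert a per-address cell, and the slow dial happens inside the cell's one-time initializer, where concurrent callers for the same address block until it resolves. This is a synchronous stand-in (using `Arc<OnceLock<_>>`) for the clonable-future enum; all names here are hypothetical:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, OnceLock};

#[derive(Clone, Debug)]
struct Connection {
    addr: String,
}

// Hypothetical two-level cache: the map lock is released before dialing,
// so connecting to one address never blocks lookups for another.
#[derive(Default)]
struct Cache {
    entries: Mutex<HashMap<String, Arc<OnceLock<Connection>>>>,
}

impl Cache {
    fn connect(&self, addr: &str) -> Connection {
        // Briefly lock the map to get (or create) this address's cell.
        let cell = {
            let mut map = self.entries.lock().unwrap();
            map.entry(addr.to_string())
                .or_insert_with(|| Arc::new(OnceLock::new()))
                .clone()
        }; // map lock released here

        // Only the first caller runs the initializer; concurrent callers
        // for the same address block here, then all share the result.
        cell.get_or_init(|| Connection {
            addr: addr.to_string(),
        })
        .clone()
    }
}
```

In an async version, the cell would instead hold a shared future that every waiter awaits, which is the enum described above.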
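The "stub that doesn't count towards the number of instances" maps naturally onto `std::sync::Weak`: the cache holds weak stubs, users hold `Arc`s, so the cache never keeps a connection alive by itself, and dead entries can be culled. A hedged sketch (again with illustrative names, not the library's API):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, Weak};

#[derive(Debug)]
struct Connection {
    addr: String,
}

// Hypothetical cache storing Weak stubs: once every user's Arc is
// dropped, the entry is dead and the next connect re-dials.
#[derive(Default)]
struct CullingCache {
    entries: Mutex<HashMap<String, Weak<Connection>>>,
}

impl CullingCache {
    fn connect(&self, addr: &str) -> Arc<Connection> {
        let mut map = self.entries.lock().unwrap();
        if let Some(weak) = map.get(addr) {
            if let Some(conn) = weak.upgrade() {
                return conn; // still alive elsewhere: re-use it
            }
        }
        // Dead or absent: create a fresh connection and store a stub.
        let conn = Arc::new(Connection {
            addr: addr.to_string(),
        });
        map.insert(addr.to_string(), Arc::downgrade(&conn));
        conn
    }

    // Sweep stubs whose connections have been dropped by all users.
    fn cull(&self) {
        self.entries
            .lock()
            .unwrap()
            .retain(|_, w| w.strong_count() > 0);
    }
}
```

Because `Weak` doesn't contribute to the strong count, the "connection leak" reduces to a small leak of stub entries, bounded by the number of distinct addresses ever used and reclaimable by `cull`.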