Oliver Gould

257 comments of Oliver Gould

@hawkw imagine a load balancer with 1000 endpoints. Initially, and in the general worst case (big-O), the balancer will have 1000 unready services. Every time you try to call `poll_ready` on...
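
To make the cost concrete, here is a minimal, self-contained sketch (hypothetical types, not tower-balance's actual implementation) of that sweep: when most endpoints are unready, every poll of the balancer's own readiness walks the whole unready set, so the per-poll work grows with the number of endpoints.

```rust
// Illustrative only: Readiness, Endpoint, and Balancer are stand-ins,
// not tower-balance's types.
#[derive(Clone, Copy, PartialEq)]
enum Readiness {
    Ready,
    Pending,
}

#[derive(Clone)]
struct Endpoint {
    readiness: Readiness,
}

impl Endpoint {
    fn poll_ready(&mut self) -> Readiness {
        // A real service would drive connection setup here.
        self.readiness
    }
}

struct Balancer {
    endpoints: Vec<Endpoint>,
}

impl Balancer {
    /// The balancer is ready as soon as any endpoint is ready, but it has
    /// to poll every unready endpoint to find that out.
    fn poll_ready(&mut self) -> Readiness {
        let mut polled = 0;
        for ep in &mut self.endpoints {
            polled += 1;
            if ep.poll_ready() == Readiness::Ready {
                println!("ready after polling {polled} endpoints");
                return Readiness::Ready;
            }
        }
        println!("still pending after polling all {polled} endpoints");
        Readiness::Pending
    }
}

fn main() {
    // Worst case: 1000 endpoints, none of them ready yet.
    let mut balancer = Balancer {
        endpoints: vec![Endpoint { readiness: Readiness::Pending }; 1000],
    };
    // Each poll of the balancer sweeps all 1000 unready endpoints.
    balancer.poll_ready();
}
```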

@hawkw I see. So your question is: "why do we have to drive inner services to readiness at all?" If so, that's a good question: a given TLS connection may...
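
As a general illustration of why a service might need to be driven to readiness at all, here is a minimal sketch against tower's `Service` trait; the `TlsClient` type and its pretend handshake are made up for the example, not anything in linkerd or tower.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

use tower::Service;

/// Illustrative only: a client whose poll_ready drives per-connection setup
/// (think: a TLS handshake) so that call() can assume an established session.
struct TlsClient {
    handshake_done: bool,
}

impl Service<Vec<u8>> for TlsClient {
    type Response = Vec<u8>;
    type Error = std::io::Error;
    type Future = Pin<Box<dyn Future<Output = Result<Self::Response, Self::Error>> + Send>>;

    fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        if !self.handshake_done {
            // A real client would poll the in-flight handshake future here and
            // register the waker if it were still pending; we just pretend it
            // completed on the first poll.
            self.handshake_done = true;
        }
        Poll::Ready(Ok(()))
    }

    fn call(&mut self, req: Vec<u8>) -> Self::Future {
        // call() relies on poll_ready having driven the handshake already.
        assert!(self.handshake_done, "poll_ready must drive the handshake first");
        Box::pin(async move { Ok(req) })
    }
}
```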

@jonhoo thanks for pinging this. To my mind, there are a few issues I want to address before releasing this crate:

1. Endpoint weighting: I'd like to reintroduce endpoint-weighting. This...

@jonhoo I've started a [branch](https://github.com/tower-rs/tower/compare/master...olix0r:ver/ready-services?expand=1) that splits the core service-caching logic out of the balancer. I expect that this could satisfy your pool use cases (which I have...
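
For a rough picture of what pulling that logic out might look like (purely hypothetical types; the linked branch is the real work), a standalone cache could track pending vs. ready services independently of any balancing strategy, so a pool could reuse it too:

```rust
/// Illustrative sketch only: a cache that drives services to readiness and
/// keeps the ready ones available, with no balancing logic attached.
struct ReadyCache<K, S> {
    pending: Vec<(K, S)>, // services still being driven to readiness
    ready: Vec<(K, S)>,   // services that have reported readiness
}

impl<K, S> ReadyCache<K, S> {
    fn new() -> Self {
        Self { pending: Vec::new(), ready: Vec::new() }
    }

    /// Register a new (initially unready) service.
    fn push(&mut self, key: K, svc: S) {
        self.pending.push((key, svc));
    }

    /// Poll pending services, promoting the ones that become ready.
    /// `is_ready` stands in for calling each service's poll_ready.
    fn poll_pending(&mut self, mut is_ready: impl FnMut(&mut S) -> bool) {
        let mut i = 0;
        while i < self.pending.len() {
            if is_ready(&mut self.pending[i].1) {
                let entry = self.pending.swap_remove(i);
                self.ready.push(entry);
            } else {
                i += 1;
            }
        }
    }

    /// Number of services a balancer (or pool) could pick from.
    fn ready_len(&self) -> usize {
        self.ready.len()
    }
}

fn main() {
    // The "service" here is just a bool standing in for "is it ready yet?".
    let mut cache: ReadyCache<&str, bool> = ReadyCache::new();
    cache.push("a", false);
    cache.push("b", true);
    cache.poll_pending(|svc| *svc);
    assert_eq!(cache.ready_len(), 1);
}
```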

As @hawkw mentions, there are some places where we really, truly need `poll_ready`. However, I'm definitely sympathetic to the issues you describe with the contract--that cloning invalidates any prior readiness,...
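
As a minimal sketch of that pitfall (hypothetical types, not tower's): if readiness is per-handle state, a handle cloned after `poll_ready` has not itself been driven to readiness and must be polled again before it is called.

```rust
// Illustrative only: a made-up service where readiness is per-handle state
// (think: a reserved connection or buffer slot).
struct Client {
    ready: bool,
}

impl Clone for Client {
    fn clone(&self) -> Self {
        // A new handle has not reserved anything yet, so it starts unready;
        // this is the sense in which cloning "invalidates" prior readiness.
        Client { ready: false }
    }
}

impl Client {
    fn poll_ready(&mut self) -> bool {
        // Pretend this reserves capacity for *this* handle.
        self.ready = true;
        self.ready
    }

    fn call(&mut self, req: &str) -> Result<String, &'static str> {
        if !self.ready {
            return Err("called without readiness");
        }
        Ok(format!("handled: {req}"))
    }
}

fn main() {
    let mut svc = Client { ready: false };
    assert!(svc.poll_ready()); // the original handle is now ready

    // The common pattern: check readiness, then clone the handle into a task.
    let mut task_svc = svc.clone();
    assert_eq!(task_svc.call("hi"), Err("called without readiness"));

    // The clone has to be driven to readiness itself.
    assert!(task_svc.poll_ready());
    assert!(task_svc.call("hi").is_ok());
}
```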

@davidpdrsn with the split traits, Axum could simply depend on `S: Call + Clone`, documenting explicitly that its inner service doesn't do anything to readiness; and we could provide utilities...
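
A rough sketch of what such a split could look like (trait names and signatures here are illustrative, not a settled design):

```rust
use std::future::Future;
use std::task::{Context, Poll};

/// Dispatching a request, with no readiness obligation attached.
pub trait Call<Request> {
    type Response;
    type Error;
    type Future: Future<Output = Result<Self::Response, Self::Error>>;

    fn call(&mut self, req: Request) -> Self::Future;
}

/// Readiness/backpressure as a separate, opt-in concern.
pub trait Ready {
    type Error;

    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>>;
}

/// With the split, a framework could bound its inner service on
/// `Call + Clone` alone, making explicit that it never drives readiness;
/// middleware that needs backpressure would additionally require `Ready`.
pub fn dispatch<S, Request>(svc: &mut S, req: Request) -> S::Future
where
    S: Call<Request> + Clone,
{
    svc.call(req)
}
```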

@danburkert I think that request-level retry strategies should be handled at a higher level. Reconnect should really only be addressing the layer 4 concern of establishing a transport. I'd expect...
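
To illustrate the layering (self-contained stand-in types, not tower's `Retry`/`Reconnect`): the reconnect piece only re-establishes the transport when it's down, while request-level retries wrap the whole stack above it.

```rust
// Illustrative only: Transport, Reconnect, and Retry here are stand-ins,
// not tower's actual middleware.
struct Transport {
    connected: bool,
}

impl Transport {
    fn reconnect(&mut self) {
        // Purely a layer-4 concern: re-dial the remote.
        self.connected = true;
    }

    fn send(&mut self, req: &str) -> Result<String, &'static str> {
        if self.connected {
            Ok(format!("echo: {req}"))
        } else {
            Err("disconnected")
        }
    }
}

/// Re-establishes the transport when needed; knows nothing about retrying
/// individual requests.
struct Reconnect {
    transport: Transport,
}

impl Reconnect {
    fn call(&mut self, req: &str) -> Result<String, &'static str> {
        if !self.transport.connected {
            self.transport.reconnect();
        }
        self.transport.send(req)
    }
}

/// Request-level retries live above the connection-management layer.
struct Retry {
    attempts: usize,
    inner: Reconnect,
}

impl Retry {
    fn call(&mut self, req: &str) -> Result<String, &'static str> {
        let mut last = Err("no attempts made");
        for _ in 0..self.attempts {
            last = self.inner.call(req);
            if last.is_ok() {
                break;
            }
        }
        last
    }
}

fn main() {
    let mut svc = Retry {
        attempts: 3,
        inner: Reconnect {
            transport: Transport { connected: false },
        },
    };
    println!("{:?}", svc.call("hello"));
}
```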

I'm a little unclear on this:

> By the trait methods having mutable references it means that for each retry request session you can mutate the local policy. This is...

I recently hit [this lugubrious error](https://gist.githubusercontent.com/olix0r/5789706848a6477817834bcea53320d7/raw/3a327dd567d26a88102c94dba67c2ba416e520e7/check.out) after switching to tower-layer when adding a simple layer:

```rust
pub mod weight {
    use futures::{Poll, Future};
    use super::tower_balance::{HasWeight, Weight, Weighted};
    use svc;

    #[derive(Clone, Debug)]
    // ...
```

We should probably just replace the default inet resolver throughout linkerd. I don't think that's ever good behavior for linkerd?