kanaloa

Make your service more resilient by providing protection against traffic oversaturation

Results: 25 kanaloa issues, sorted by recently updated

Right now the mechanism is that PullingDispatcher shuts down on queue completion. However, this is not very robust for the pushing dispatcher - it prevents the queue from automatically restarting when errors occur....
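
A minimal sketch of the alternative, assuming a classic Akka supervision approach; PushingDispatcherSupervisor, the queue props, and the exception mapping are illustrative, not kanaloa's actual actors:

```scala
import akka.actor.{Actor, ActorRef, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.{Restart, Stop}

// Hypothetical sketch: let supervision restart the queue on transient errors
// instead of relying on shutdown-on-queue-completion. Names and the exception
// mapping are illustrative, not kanaloa's actual implementation.
class PushingDispatcherSupervisor(queueProps: Props) extends Actor {
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy() {
      case _: IllegalStateException => Stop    // treat as normal completion
      case _: Exception             => Restart // transient error: restart the queue automatically
    }

  private val queue: ActorRef = context.actorOf(queueProps, "queue")

  def receive: Receive = {
    case work => queue forward work
  }
}
```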

If statsD goes offline and then comes back online, the statsD reporter stops sending metrics.

bug

When the latency is below a certain threshold, a pushing dispatcher can be in a direct mode that bypasses Queue and Worker and sends the work directly to the backend without any... (a hedged sketch of this idea follows below).

API change
comp:work execution
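
A minimal sketch of what such a direct mode could look like, assuming a Future-based backend; DirectModeGate, recentLatency, and dispatchThroughQueue are hypothetical names, not kanaloa's API:

```scala
import scala.concurrent.Future
import scala.concurrent.duration._

// Hypothetical sketch, not kanaloa's actual API: bypass the Queue/Worker
// machinery when the observed backend latency is comfortably low.
class DirectModeGate[Req, Resp](
  latencyThreshold: FiniteDuration,
  recentLatency: () => FiniteDuration,        // e.g. a moving average from the metrics collector
  backend: Req => Future[Resp],               // the raw backend call
  dispatchThroughQueue: Req => Future[Resp]   // the normal regulated path
) {
  def dispatch(req: Req): Future[Resp] =
    if (recentLatency() < latencyThreshold) backend(req)  // direct mode
    else dispatchThroughQueue(req)                        // fall back to Queue and Worker
}
```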

That way we can have per-routee metrics.

As suggested by @nsauro, one idea is to provide a Backoff trait, which gets called for every timeout. An instance would need to be created per worker, and it is...
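
A minimal sketch of what that trait could look like; the exponential implementation below is only one possible instance and is not an existing kanaloa API:

```scala
import scala.concurrent.duration._

// Hypothetical sketch of the proposed Backoff trait. One instance would be
// created per worker so each worker carries its own backoff state.
trait Backoff {
  /** Called on every timeout; returns how long the worker should wait
    * before retrying, plus the Backoff state to carry forward. */
  def nextDelay(): (FiniteDuration, Backoff)
}

/** A simple capped exponential backoff, as one possible implementation. */
case class ExponentialBackoff(
  current: FiniteDuration = 50.millis,
  max: FiniteDuration = 5.seconds
) extends Backoff {
  def nextDelay(): (FiniteDuration, Backoff) = {
    val next = (current * 2).min(max)
    (current, copy(current = next))
  }
}
```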

Each routee having its own worker pool and autothrottle allows us to throttle concurrency per routee, which makes sense since routees differ in performance (a hedged sketch follows below).

comp:worker management
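
A hedged sketch of per-routee settings; all names here are illustrative, not kanaloa's configuration:

```scala
// Hypothetical sketch: each routee gets its own worker-pool and autothrottle
// settings so concurrency can be tuned to that routee's performance.
case class WorkerPoolSettings(
  startingPoolSize: Int,
  maxPoolSize: Int,
  autoThrottleEnabled: Boolean
)

// A fast backend tolerates far more concurrency than a slow one.
val perRouteeSettings: Map[String, WorkerPoolSettings] = Map(
  "fast-backend" -> WorkerPoolSettings(startingPoolSize = 8, maxPoolSize = 64, autoThrottleEnabled = true),
  "slow-backend" -> WorkerPoolSettings(startingPoolSize = 2, maxPoolSize = 8, autoThrottleEnabled = true)
)
```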

When creating a Dispatcher right now, the constructor returns immediately, even though behind the scenes some initialization is still going on. We should change this to return a... (a hedged sketch follows below).

enhancement
scope:medium
discussion needed
API change
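
One possible shape, assuming the proposal is to return a Future that completes once initialization is done; the factory names are illustrative, not the current kanaloa API:

```scala
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical sketch: an asynchronous factory that only completes once the
// Dispatcher has finished its behind-the-scenes initialization.
class Dispatcher private () {
  private def initialize()(implicit ec: ExecutionContext): Future[Unit] =
    Future {
      // set up queue, workers, metrics reporters, etc.
    }
}

object Dispatcher {
  /** Completes only when the dispatcher is fully ready to accept work. */
  def create()(implicit ec: ExecutionContext): Future[Dispatcher] = {
    val d = new Dispatcher()
    d.initialize().map(_ => d)
  }
}
```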

When the backend is behaving completely irregularly (no apparent pattern in throughput or latency), kanaloa should at least ensure it is not a limiting factor.

discussion needed

Dispatcher should have a way to fail, either from a signal from the actual backend (tied to #79) or after it sees all workers die (tied to #123...

discussion needed
comp:work execution

This will include tests and integration tests; see project/Publishing.scala for an example of managing release steps (a hedged sketch follows below).

Priority: Low
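
A sketch of how release steps could include both unit and integration tests, assuming the sbt-release plugin is in use; the "it:test" step is an assumption about the project's integration-test configuration:

```scala
// build.sbt — a hedged sketch assuming the sbt-release plugin.
import ReleaseTransformations._

releaseProcess := Seq[ReleaseStep](
  checkSnapshotDependencies,
  inquireVersions,
  runClean,
  runTest,                       // unit tests
  releaseStepCommand("it:test"), // integration tests (assumed IntegrationTest config)
  setReleaseVersion,
  commitReleaseVersion,
  tagRelease,
  publishArtifacts,
  setNextVersion,
  commitNextVersion,
  pushChanges
)
```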