Gabriel Reid
Just to clarify, the underlying intention of this PR was twofold:
* to make use of the cache file storage that is being done regardless of caching settings and cache_time...
We're encountering this on an internal fork of 1.2.1. However, looking at the `develop` branch, I don't see the cache being purged with a delete operation. Adding the following test...
Aha, thanks for pointing that out @brianhks, my mistake -- I didn't realize that the row key cache was being injected into `CassandraDataStore` and was therefore a shared instance with...
If the intention is to support writes distributed over multiple KairosDB nodes, while also _reliably_ supporting the pattern of `write -> delete -> write` without any data loss, then...
Just taking a look through this -- having Phoenix support in Sqoop would be great! I know that this is an initial cut, but just a few remarks on the...
An idea that I had as a (hopefully) quite quick and easy way to resolve this would be to tag records with a "generation id" of the subscription that they...
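To make the idea a bit more concrete, here is a minimal sketch of what I have in mind. The `TaggedRecord` wrapper and the `currentGeneration` counter are made-up names for illustration only, not anything from the Alpakka Kafka internals; the assumption is that the counter gets bumped whenever the subscription changes:

```scala
import java.util.concurrent.atomic.AtomicInteger

import akka.NotUsed
import akka.stream.scaladsl.Flow

// Hypothetical wrapper: each record buffered downstream of the source carries
// the generation id of the subscription it was fetched under.
final case class TaggedRecord[T](generation: Int, record: T)

object GenerationFilter {

  // Bumped whenever the subscription changes (e.g. on a rebalance); records
  // tagged with an older generation id are considered stale.
  val currentGeneration = new AtomicInteger(0)

  // Drops any buffered record whose generation id no longer matches the
  // current one, instead of letting it reach the producer/committer.
  def dropStale[T]: Flow[TaggedRecord[T], T, NotUsed] =
    Flow[TaggedRecord[T]]
      .filter(_.generation == currentGeneration.get())
      .map(_.record)
}
```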
> I think your idea is worth exploring. Just so that I understand: this is only a solution to make invalidation of buffered messages more robust in the `Source`, and...
Thanks for the explanation @ennru. What's maybe quite unusual about our situation is that the majority of the incoming consumer messages are simply translated into `ProducerMessage.passThrough`, meaning that there is...
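As a rough sketch of that shape of translation (the topic name and the `shouldProduce` predicate are hypothetical stand-ins for our actual business logic):

```scala
import akka.kafka.{ConsumerMessage, ProducerMessage}
import org.apache.kafka.clients.producer.ProducerRecord

object EnvelopeTranslation {

  // Hypothetical stand-in for the real logic deciding whether a consumed
  // message results in a produced record at all.
  def shouldProduce(value: String): Boolean = false

  // Translate a consumed message into a producer envelope. In our case the
  // majority of messages produce nothing, so only the committable offset is
  // carried through via ProducerMessage.passThrough.
  def toEnvelope(
      msg: ConsumerMessage.CommittableMessage[String, String]
  ): ProducerMessage.Envelope[String, String, ConsumerMessage.CommittableOffset] =
    if (shouldProduce(msg.record.value))
      ProducerMessage.single(
        new ProducerRecord[String, String]("output-topic", msg.record.key, msg.record.value),
        msg.committableOffset
      )
    else
      ProducerMessage.passThrough[String, String, ConsumerMessage.CommittableOffset](
        msg.committableOffset
      )
}
```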
@seglo thanks for pointing that out. It would indeed work for our specific use case to use `Producer.flexiFlow` combined with `Committer.sink`, using `WaitForAck`. I think that the main issue here...
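Roughly, the wiring we'd end up with looks like the sketch below; it assumes the consumer and producer settings are built elsewhere and uses a placeholder topic name:

```scala
import akka.actor.ActorSystem
import akka.kafka.scaladsl.{Committer, Consumer, Producer}
import akka.kafka.{
  CommitDelivery,
  CommitterSettings,
  ConsumerMessage,
  ConsumerSettings,
  ProducerMessage,
  ProducerSettings,
  Subscriptions
}
import akka.stream.scaladsl.Keep

object PassThroughPipeline {

  // Assumes consumerSettings and producerSettings are built elsewhere;
  // "input-topic" is a placeholder.
  def run(
      consumerSettings: ConsumerSettings[String, String],
      producerSettings: ProducerSettings[String, String]
  )(implicit system: ActorSystem) = {

    // Commits are only considered complete once the broker acknowledges them.
    val committerSettings =
      CommitterSettings(system).withDelivery(CommitDelivery.WaitForAck)

    Consumer
      .committableSource(consumerSettings, Subscriptions.topics("input-topic"))
      // Most of our messages produce nothing, so only the committable offset is
      // passed through; real produce requests would use ProducerMessage.single.
      .map[ProducerMessage.Envelope[String, String, ConsumerMessage.CommittableOffset]](msg =>
        ProducerMessage.passThrough(msg.committableOffset))
      .via(Producer.flexiFlow(producerSettings))
      .map(_.passThrough) // recover the committable offset from the result
      .toMat(Committer.sink(committerSettings))(Keep.both)
      .run()
  }
}
```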
We were using `max-batch = 25`, with all the rest left as defaults when I first started investigating this issue. The rate of message processing would have been around 500...
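For context, that refers to the committer's batch size (`akka.kafka.committer.max-batch` in the configuration file); a minimal sketch of the same thing set in code, with everything else left at its defaults:

```scala
import akka.actor.ActorSystem
import akka.kafka.CommitterSettings

object CommitterConfig {

  // Cap the number of offsets aggregated into a single commit at 25;
  // all other committer settings stay at their default values.
  def settings(implicit system: ActorSystem): CommitterSettings =
    CommitterSettings(system).withMaxBatch(25L)
}
```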