DENNIS
@kfaraz The cold tier of our cluster has enough historicals. The current problem is that we don't want the _default_tier's storage to fill up so fast. I will create a PR...
Today, another of our clusters (without any modification) once again ran into the problem where expired data on the hot node was not deleted, causing the storage space to be...
> @599166320, thanks for the update! I'll take another look soon.
>
> I noticed that this is a new PR that replaces the existing, closed one. As it turns...
@paul-rogers Recently I have often seen an `integration-tests` error in `Travis`. It seems to me that it has nothing to do with this PR; it may be caused by instability of...
> @599166320, I took an in-depth look at the code, including downloading your branch and stepping through the logic. The good news is that your unit test worked the first...
> @599166320, to follow up a bit: see the summary above, the one that tries to summarize the approach. If that is correct, then the simplest solution is:
>
> *...
@paul-rogers I have done the following work in this commit:

1. For the sorting of ordinary columns, when traversing the segment, I prevent the scanquery object from passing the orderByLimit...
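To illustrate the general idea behind point 1, here is a minimal sketch, not the actual change in this commit: when sorting by an ordinary (non-`__time`) column with a limit, each segment scan can keep only the current top-N rows in a bounded heap instead of materializing the whole segment. The class and row representation below are hypothetical.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

/**
 * Sketch only (not Druid's actual code): keep the top-N rows by an ordinary
 * column while scanning a segment, so the order-by + limit can be applied
 * without buffering the full segment. Assumes limit > 0.
 */
public class TopNRowCollector
{
  // A "row" here is just an Object[] of column values.
  private final PriorityQueue<Object[]> heap;
  private final int limit;
  private final Comparator<Object[]> comparator;

  public TopNRowCollector(int limit, Comparator<Object[]> comparator)
  {
    this.limit = limit;
    this.comparator = comparator;
    // Reversed order so the heap head is the "worst" row currently kept.
    this.heap = new PriorityQueue<>(limit, comparator.reversed());
  }

  public void offer(Object[] row)
  {
    if (heap.size() < limit) {
      heap.add(row);
    } else if (comparator.compare(row, heap.peek()) < 0) {
      // The new row beats the worst kept row, so swap it in.
      heap.poll();
      heap.add(row);
    }
  }

  public List<Object[]> drainSorted()
  {
    List<Object[]> rows = new ArrayList<>(heap);
    rows.sort(comparator);
    return rows;
  }
}
```

With this shape, each segment's results are already ordered and capped at the limit before any cross-segment merge happens.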
> One more thing to note here is that Druid allows sorting of segments by more than just `__time`. We can order (I believe) a segment by, say, `(__time, a,...
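To make the multi-column ordering concrete, here is a hedged sketch of a composite comparator over `(__time, a)`. The column `a` comes from the comment above, its type is assumed to be `String` purely for illustration, and the `Map`-based row is not Druid's actual row representation.

```java
import java.util.Comparator;
import java.util.Map;

/**
 * Sketch only: a composite ordering over (__time, a), mirroring the idea that
 * a segment can be ordered by more columns than just __time. Real code would
 * build this comparator from the query's order-by column list.
 */
public class CompositeRowOrdering
{
  public static Comparator<Map<String, Object>> timeThenA()
  {
    Comparator<Map<String, Object>> byTime =
        Comparator.comparingLong(row -> ((Number) row.get("__time")).longValue());
    // "a" is assumed to be a String here for simplicity.
    Comparator<Map<String, Object>> byA =
        Comparator.comparing(row -> (String) row.get("a"));
    return byTime.thenComparing(byA);
  }
}
```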
> The other thing to discuss is the ordered merge steps. As it turns out, the need to do the ordered merge is independent of how we do the sort....
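The independence of the merge from the sort can be seen in a generic k-way ordered merge: it only requires that the per-segment streams are already sorted by the same comparator, regardless of how that sort was produced. A minimal sketch with illustrative names (not Druid's API):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

/**
 * Sketch only: k-way ordered merge of per-segment row streams that are
 * already sorted by the same comparator.
 */
public class OrderedMerge
{
  public static <T> List<T> merge(List<Iterator<T>> sortedStreams, Comparator<T> comparator, int limit)
  {
    // Each heap entry is the current head of one stream, tagged with its source.
    class Entry
    {
      final T row;
      final Iterator<T> source;

      Entry(T row, Iterator<T> source)
      {
        this.row = row;
        this.source = source;
      }
    }

    PriorityQueue<Entry> heap = new PriorityQueue<>((a, b) -> comparator.compare(a.row, b.row));
    for (Iterator<T> stream : sortedStreams) {
      if (stream.hasNext()) {
        heap.add(new Entry(stream.next(), stream));
      }
    }

    List<T> result = new ArrayList<>();
    while (!heap.isEmpty() && result.size() < limit) {
      Entry smallest = heap.poll();
      result.add(smallest.row);
      if (smallest.source.hasNext()) {
        heap.add(new Entry(smallest.source.next(), smallest.source));
      }
    }
    return result;
  }
}
```

The comparator used for the per-segment sort (for example a composite `(__time, a)` ordering) is simply reused by the merge.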
@paul-rogers Let me briefly summarize the problems you mentioned above. Based on this PR, I have two things to do next:

1. Code implementation of the special path (`Segment by segment decision`)...