SOLR-13350: Multithreaded search
Abandoning previous PRs to create this new one.
https://issues.apache.org/jira/browse/SOLR-13350
Observation: I don't think we need the concurrent DocSet builder (the one from Netflix). Couldn't we build segment level FixedBitSets (no safety issue) and then at the end combine into a master FixedBitSet so that we have our DocSet? It could even be made to be long-aligned, and thus the final aggregation is just copying longs with System.arraycopy (no doc iteration! or long shifting). The boundary long would be XOR.
@dsmiley - is something like https://github.com/apache/solr/pull/2248/commits/34c7db09e08693a79d87eaa6e43ad6f5aacbc342 what you had in mind here? For starters modelled on https://github.com/apache/lucene/blob/releases/lucene/9.10.0/lucene/test-framework/src/java/org/apache/lucene/tests/search/FixedBitSetCollector.java i.e. no System.arraycopy etc. as yet.
@chatman - I hope you don't mind me adding commits to this PR.
@cpoerschke https://github.com/apache/solr/commit/34c7db09e08693a79d87eaa6e43ad6f5aacbc342 No; that is a single FixedBitSet and as such you've basically changed this to Solr's existing default handling, more or less. But that's not thread-safe!
Hmm, but each FixedBitSet is in a FixedBitSetCollector and multiple FixedBitSetCollector objects are created via the CollectorManager.newCollector() calls and then merged via the CollectorManager.reduce() call into a separate joint FixedBitSet? Anyhow, I can revert the commit, that's what version control is for sometimes.
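For illustration only, here's a minimal, hypothetical sketch of that pattern (Lucene 9.x CollectorManager APIs; the class names and the or()-based merge are mine, not the code in the 34c7db0 commit): one FixedBitSet per collector instance, created per slice by newCollector() and merged single-threaded in reduce().
import java.util.Collection;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.CollectorManager;
import org.apache.lucene.search.ScoreMode;
import org.apache.lucene.search.SimpleCollector;
import org.apache.lucene.util.FixedBitSet;

// One FixedBitSet per collector instance, so no sharing between threads.
class BitSetCollector extends SimpleCollector {
  final FixedBitSet bits; // sized to the whole index here; the later comments discuss per-segment sizing
  int docBase;

  BitSetCollector(int maxDoc) {
    this.bits = new FixedBitSet(maxDoc);
  }

  @Override
  protected void doSetNextReader(LeafReaderContext context) {
    this.docBase = context.docBase;
  }

  @Override
  public void collect(int doc) {
    bits.set(docBase + doc); // record the index-level doc id
  }

  @Override
  public ScoreMode scoreMode() {
    return ScoreMode.COMPLETE_NO_SCORES;
  }
}

// newCollector() is called once per slice/thread; reduce() runs single-threaded afterwards.
class BitSetCollectorManager implements CollectorManager<BitSetCollector, FixedBitSet> {
  final int maxDoc;

  BitSetCollectorManager(int maxDoc) {
    this.maxDoc = maxDoc;
  }

  @Override
  public BitSetCollector newCollector() {
    return new BitSetCollector(maxDoc);
  }

  @Override
  public FixedBitSet reduce(Collection<BitSetCollector> collectors) {
    FixedBitSet merged = new FixedBitSet(maxDoc);
    for (BitSetCollector c : collectors) {
      merged.or(c.bits); // the slices' docs are disjoint, so or() simply unions them
    }
    return merged;
  }
}
Usage would be along the lines of searcher.search(query, new BitSetCollectorManager(searcher.getIndexReader().maxDoc())).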
Commit reverted, though I still have doubts, i.e. maybe the commit message was just unhelpful: "replace ThreadSafeBitSet[Collector] with FixedBitSet[Collector]" is mechanical, whereas a more behavioural commit message would have been the longer "replace one ThreadSafeBitSet and multiple ThreadSafeBitSetCollector(s) with multiple FixedBitSetCollector(s) containing one FixedBitSet each, to be merged into one FixedBitSet at the end" or similar.
Sorry; I didn't look at your change enough! I agree that it should work, and is probably better than having a concurrent ThreadSafeBitSet.
... Couldn't we build segment level FixedBitSets (no safety issue) and then at the end combine into a master FixedBitSet so that we have our DocSet? It could even be made to be long-aligned, and thus the final aggregation is just copying longs with System.arraycopy (no doc iteration! or long shifting). The boundary long would be XOR.
Please could you share in more detail what you had in mind here? E.g.
- Did FixedBitSet refer literally to the https://github.com/apache/lucene/blob/releases/lucene/9.10.0/lucene/core/src/java/org/apache/lucene/util/FixedBitSet.java class, or was it meant conceptually, i.e. an existing or new class similar to but different from FixedBitSet?
- Is there existing similar code that does the long-aligning and final aggregation like that elsewhere?
With the current implementation https://github.com/apache/solr/blob/bec277ab17293aee8fe44ab65264ddb0f6b697d5/solr/core/src/java/org/apache/solr/search/MultiThreadedSearcher.java#L194-L204 I can see
- that the maxDoc is excessive, not least because this.docBase is added for every document in the segment, and
- that conceptually aggregation can be done via copying,
- but beyond that I haven't yet connected the ideas shared above.
Thanks!
Yes I mean the FixedBitSet (otherwise I just would have said a bit set). No I don't know of similar code doing this, and I don't expect it's in Lucene/Solr already. Lucene is very segment oriented so there's no need for an entire index bit set view. Solr has forever clung to index level bit sets, for better/worse (I think worse). I'm tempted to help but not sure I have the time. It's not a blocker any way; the code can be improved.
Currently the TestQueryLimits.java test is failing with this branch. I'm looking into this. Any ideas, please, @sigram? My intuition is that the execution isn't breaching the limits in the test, and hence full results are coming back for QueryComponent instead of partial results.
Yes I mean the FixedBitSet (otherwise I just would have said a bit set). No I don't know of similar code doing this, and I don't expect it's in Lucene/Solr already. Lucene is very segment oriented so there's no need for an entire index bit set view. Solr has forever clung to index level bit sets, for better/worse (I think worse). I'm tempted to help but not sure I have the time. It's not a blocker any way; the code can be improved.
Okay, I've got the basic outline of the idea I think:
// bitsetA stands in for the first "segment"
final FixedBitSet bitsetA = new FixedBitSet(numBits);
bitsetA.set(...);
// bitsetB stands in for the second "segment", shifted down by skipBits so it starts at its own word 0
final int skipWords = 8;
final int skipBits = skipWords * 64;
final FixedBitSet bitsetB = new FixedBitSet(numBits - skipBits);
bitsetB.set(... - skipBits);
// aggregate by copying whole words into one backing array and wrapping it in a new FixedBitSet
final long[] storedBits = new long[FixedBitSet.bits2words(numBits)];
System.arraycopy(bitsetA.getBits(), 0, storedBits, 0, bitsetA.getBits().length);
System.arraycopy(bitsetB.getBits(), 0, storedBits, skipWords, bitsetB.getBits().length);
final FixedBitSet bitsetC = new FixedBitSet(storedBits, numBits);
And then as a next step I'll have a go at applying that to the collecting code here on this PR.
I figure that's somewhat pseudocode... so I don't want to over-critique that but it should have the XOR of the boundary long in-between each arraycopy. In practice you will likely loop the FixedBitSets, skipping the first in order to XOR the last long of the previous FixedBitSet with the first long of the current one.
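To make that concrete, a rough sketch (hypothetical names, not code from this PR; it assumes each segment's FixedBitSet was allocated with docBase & 63 leading pad bits so that its backing words line up with the index-level words, and XOR vs. OR makes no difference for the boundary word because the segments' doc ranges are disjoint):
import java.util.List;
import org.apache.lucene.util.FixedBitSet;

final class SegmentBitSetMerger {
  // segBitSets.get(i) covers the i-th segment; local doc d was recorded as bit ((docBases[i] & 63) + d)
  static FixedBitSet merge(List<FixedBitSet> segBitSets, int[] docBases, int indexMaxDoc) {
    final long[] merged = new long[FixedBitSet.bits2words(indexMaxDoc)];
    for (int i = 0; i < segBitSets.size(); i++) {
      final long[] words = segBitSets.get(i).getBits();
      final int startWord = docBases[i] >> 6;
      if ((docBases[i] & 63) == 0) {
        // segment starts exactly on a word boundary: nothing shared with the previous segment
        System.arraycopy(words, 0, merged, startWord, words.length);
      } else {
        // the first word overlaps the previous segment's last word
        merged[startWord] ^= words[0];
        System.arraycopy(words, 1, merged, startWord + 1, words.length - 1);
      }
    }
    return new FixedBitSet(merged, indexMaxDoc);
  }
}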
So the https://github.com/apache/solr/pull/2248/commits/fdfef9f734d091db7298d3493681d7f07f50c043 commit starts to explore using segment level bit sets ...
... but actually I don't yet quite see what exactly that would 'buy', since conceptually we'd then need to maintain multiple sets of size maxDoc - docBase, and in that case why not just have one set of size maxDoc? ...
... unless we can somehow size the segment level bit sets to be the size of the segments?
Definitely the segment bit sets need to be sized to the segment, plus no more than 63 bits for long-alignment.
https://github.com/apache/solr/pull/2248/commits/b1ac97649081aed058a431896f236756100066d1 is a first take on segment bit sets.
The no-more-than-63-bits-for-long-alignment part seems easy; how to size the segment bit sets needs a bit more thought.
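One possible sizing, sketched purely as an assumption rather than as what the commit does: pad each segment's bit set with docBase & 63 leading bits so that its backing long[] lines up with the index-level words, which keeps the per-segment padding below 64 bits.
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.util.FixedBitSet;

final class SegmentBitSets {
  // hypothetical helper: allocate a per-segment bit set whose words line up with the index-level words
  static FixedBitSet forLeaf(LeafReaderContext leaf) {
    int padBits = leaf.docBase & 63; // 0..63 leading pad bits for long-alignment
    return new FixedBitSet(padBits + leaf.reader().maxDoc());
  }
}
When collecting, a segment-local doc would then be recorded as bits.set((leaf.docBase & 63) + doc).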
I think that's an anti-pattern or broken and isn't what I meant in JIRA. We could use a SynchronousQueue (with fairness) if we want to block for a thread -- probably what we should do. FYI that queue is the default for Executors.newCachedThreadPool(). The "caller runs" behavior I meant could be done via an ExecutorService delegate that catches RejectedExecutionException and simply runs the Runnable.
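A minimal sketch of the "caller runs" delegate described above (hypothetical wrapper, not code from this PR); ThreadPoolExecutor.CallerRunsPolicy, installed as the pool's RejectedExecutionHandler, gives the same behaviour without a wrapper.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.RejectedExecutionException;

final class CallerRunsOnReject {
  // try the pool first; if it rejects the task, run it on the submitting (caller) thread instead
  static void executeOrRun(ExecutorService delegate, Runnable task) {
    try {
      delegate.execute(task);
    } catch (RejectedExecutionException e) {
      task.run();
    }
  }
}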
@dsmiley instead of using a rejected tasks execution handler, I went with @noblepaul 's suggestion of having a reasonably large queue for the threadpool (number of threads * 1000). Beyond this, if tasks are submitted, it is okay to reject them. We can revisit these limits later as well.
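For concreteness, a sketch of that configuration (the method name and thread-count sizing below are placeholders, not the exact values in the PR):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

final class SearchExecutors {
  static ExecutorService newSearchExecutor(int numThreads) {
    return new ThreadPoolExecutor(
        numThreads, // core pool size
        numThreads, // max pool size
        0L,
        TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<>(numThreads * 1000)); // once the queue is full, further submissions are rejected
  }
}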
Thanks @cpoerschke and @dsmiley. I've merged to main, where this can bake for some days before merging to branch_9x. If there are any major outstanding issues, or any changes needed to documentation or default behaviour, we can take it up in another PR.
Merging to main was unexpected to me because the healthy code review happening here hadn't concluded. Next time please announce an intention to do so a couple of days in advance. I would have given this another look! For example I thought the DocSet merging aspect was still tentative... I was awaiting another comment from Christine. And I was looking forward to checking out the thread pool aspect again.
This pull request is closed, but the jira/solr-13350 branch has unmerged commits.
Not sure what didn't make it
Merging to main was unexpected to me because the healthy code review happening here hadn't concluded.
I didn't want this to languish for a long time, and there were large PRs waiting that could affect this PR. Example: https://github.com/apache/solr/pull/2382
I would have given this another look!
...
And I was looking forward to checking out the thread pool aspect again.
Please feel free to look at it; would be happy to address all loose ends, if any.
Merging to main was unexpected to me because the healthy code review happening here hadn't concluded. ...
I was surprised by this merge too. My initial thought was that it might have been an accident (that happens sometimes) and that we could just revert the commit and resume on a new PR.
The discussion on this is long, so maybe I've missed it, but the actual merged code has introduced the possibility (though I suspect it might never happen) of a non-numeric Max Score...
public float getMaxScore(int totalHits) {
  if (totalHits > 0) {
    for (Object res : result) {
      if (res instanceof MaxScoreResult) {
        return ((MaxScoreResult) res).maxScore;
      }
    }
    return Float.NaN; <<<<<<<<<<<<<<<<<<<<<<
  } else {
    return 0.0f;
  }
}
Did I miss discussion of this?