trinity-1686a
I've slightly modified the bench so it does not include cloning the `Vec` or getting an index writer.
```
index-hdfs/index-hdfs-no-commit
time: [5.8195 s 5.9066 s 5.9877 s]
index-hdfs/index-hdfs-with-array-no-commit
time: [8.3026...
```
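The clone-exclusion described above can be sketched with plain `std::time::Instant`; `index_batch` and the sample data are stand-ins I made up for illustration, not the actual benchmark code.

```rust
use std::time::Instant;

// Stand-in for indexing a batch of documents; the real bench indexes
// an HDFS log corpus through a tantivy IndexWriter.
fn index_batch(docs: &[String]) -> usize {
    docs.iter().map(|d| d.len()).sum()
}

fn main() {
    let docs: Vec<String> = (0..1_000).map(|i| format!("log line {i}")).collect();
    // Clone outside the timed region, mirroring the modified bench:
    // allocation noise from the clone no longer pollutes the measurement.
    let batch = docs.clone();
    let start = Instant::now();
    let bytes = index_batch(&batch);
    let elapsed = start.elapsed();
    println!("indexed {bytes} bytes in {elapsed:?}");
}
```

With criterion, the same separation is what `iter_batched` provides: the setup closure (cloning the `Vec`, creating the writer) runs outside the measured closure.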
Interestingly, halving the default text capacity makes both benchmarks slightly slower on my system (though possibly within the margin of error). This might be very allocator-dependent; I guess mine is...
After #2062, the main differences I see left in flamegraphs are:
- dropping the document being indexed (almost negligible with full strings, around 9% of samples for pre-split strings, creating...
There has been substantial progress wrt this issue. It's still somewhat slower, but a lot less so than before. I'm unlikely to work more on this right now, so I'm unassigning...
That vulnerability is currently not triggerable in any way that's actually useful to an attacker, as far as I can tell, but it will be when #368 gets implemented. I don't think...
Plume should match on the `application/activity+json; q=0.9` part but fails to because it doesn't understand the `q=0.9` parameter (which it should). However, the profile for the 2nd content type seems...
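The fix amounts to stripping media-type parameters before comparing. A minimal sketch, with function names of my own invention (this is not Plume's actual code):

```rust
// Extract the bare media type from an Accept header entry, dropping
// parameters such as `q=0.9` or `profile="..."`.
fn media_type(entry: &str) -> &str {
    entry.split(';').next().unwrap_or("").trim()
}

// Return true if any entry in the Accept header names activity+json,
// regardless of its parameters.
fn accepts_activity_json(accept: &str) -> bool {
    accept
        .split(',')
        .any(|entry| media_type(entry) == "application/activity+json")
}

fn main() {
    let header = "application/activity+json; q=0.9, \
                  application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"";
    // The q parameter no longer prevents the match.
    println!("{}", accepts_activity_json(header)); // prints "true"
}
```

(A full implementation would also honor the relative q-values when several types match, but parameter-stripping alone fixes the failure described here.)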
I tried it and it seems somewhat broken. After a while, I get this:
```
⠋ [00:20:00] 20.82 GiB/20.82 GiB (91.4 MiB/s)
```
The ingestion rate should be on the order...
If that helps: it seems to only take a few discrete values, some of which are 51.4, 45.7, 40, 62.8, 57.1, 74.2; those seem to be increments of 5.7...
The estimator always seems to give a multiple of 5.71 for this workload. This is because it essentially counts batches of (quasi-)fixed size, and there aren't that many batches processed...
I confirmed that by ingesting a single split with the field 'tagged' tagged, which takes only the values '1' and '2', and searching for `* -tagged:1`, which should yield exactly...
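The sanity check above boils down to complement arithmetic: with only two possible values, the hit count of `* -tagged:1` must equal the hit count of `tagged:2`. A toy sketch with made-up data:

```rust
// With a field that only takes values 1 and 2, `* -tagged:1` must
// match exactly the documents where tagged == 2.
fn count_not(tagged: &[u8], excluded: u8) -> usize {
    tagged.iter().filter(|&&v| v != excluded).count()
}

fn count_eq(tagged: &[u8], wanted: u8) -> usize {
    tagged.iter().filter(|&&v| v == wanted).count()
}

fn main() {
    let tagged = [1u8, 2, 2, 1, 2, 1, 1, 2];
    assert_eq!(count_not(&tagged, 1), count_eq(&tagged, 2));
    println!(
        "not 1: {}, equal 2: {}",
        count_not(&tagged, 1),
        count_eq(&tagged, 2)
    );
}
```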