Results: 34 comments of Brad Lhotsky

FWIW, I set `doMultipleRequestsIfSplit: false` and got `100 workers in 20.183s`:

```
--> 20:47:19 20:47:24 20:47:29 20:47:34 20:47:39
```

FWIW: Some background: my test script simulates load from an existing monitoring tool, which issues render requests with batches of `target=foo.bar.thing&target=foo.baz.thing` instead of using globs. It's a long...
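A minimal sketch of the kind of request the script batches, assuming a standard Graphite `/render` endpoint; the function name and metric names are illustrative, not the actual test script:

```python
# Build a /render query string with one explicit target= per metric,
# mimicking monitoring tools that batch names instead of using globs.
from urllib.parse import urlencode

def build_render_query(metrics, span="-5min"):
    """Return a query string like target=a&target=b for a batch of metrics."""
    params = [("format", "json"), ("from", span)]
    params += [("target", m) for m in metrics]
    return urlencode(params)

query = build_render_query(["foo.bar.thing", "foo.baz.thing"])
# query includes "target=foo.bar.thing&target=foo.baz.thing"
```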

This is probably the cause of the bug in #509.

I am also experiencing segfaults in 3.7.0, though mine are happening in the `ossec-analysisd` program. Valgrind output:

```
==68360== Conditional jump or move depends on uninitialised value(s)
==68360==    at 0x410FB1B:...
```

We could, in theory, intercept the scroll timeout and rewrite it to a very low value to prevent scrolls from tying up too many resources. Per the Elastic docs, we...
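A rough sketch of what that interception could look like, assuming the scroll timeout arrives as a `scroll=` query parameter on a proxied URL; the function name and clamp value are hypothetical:

```python
# Rewrite the scroll= query parameter of a proxied Elasticsearch URL
# down to a short timeout so abandoned scroll contexts expire quickly.
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit

def clamp_scroll_timeout(url, clamp="30s"):
    """Replace any scroll= parameter in url with a short timeout."""
    parts = urlsplit(url)
    qs = parse_qs(parts.query)
    if "scroll" in qs:
        qs["scroll"] = [clamp]
    return urlunsplit(parts._replace(query=urlencode(qs, doseq=True)))

print(clamp_scroll_timeout("http://es:9200/_search?scroll=10m"))
```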

As a work-around, I spun up the Python graphite-web on the data nodes and pointed `CLUSTER_SERVERS` to that; it's significantly slower than using `carbonapi`/`carbonzipper`, but it works and only humans...
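For reference, that work-around boils down to a `CLUSTER_SERVERS` entry in graphite-web's `local_settings.py`; the hostnames below are illustrative:

```python
# local_settings.py on the front-end graphite-web instance:
# fan reads out to the graphite-web instances running on the data nodes.
CLUSTER_SERVERS = [
    "data-node-1:8080",
    "data-node-2:8080",
]
```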

I _think_ this only pertains to realtime-discovered files on Windows as they were passed to analysisd. So if you had a rule like

```
whatever_syscheck_is_i_forget C:\Windows\System32\MyCustom.dll IMPORTANT FILE...
```
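For context, realtime syscheck monitoring of a Windows path is enabled in `ossec.conf` along these lines; this is a generic illustration of the mechanism, not the rule syntax the comment is half-remembering:

```xml
<syscheck>
  <!-- realtime change detection on a Windows directory (path illustrative) -->
  <directories realtime="yes">C:\Windows\System32</directories>
</syscheck>
```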

I wish I could, but we're not deployed on Windows Servers at this point. Is there someone with a Windows Setup that can test the patch and see what breaks?

FWIW: Testing with 0.16.0, things are the same, maybe slightly better. When I run my benchmarking script against `go-graphite/carbonapi` with 25 concurrent workers making about 40 requests (containing 25 metrics...

Derp, my previous runs checked passive opens. When running 100 simultaneous clients, results:

### bookingcom/carbonapi

Benchmark took 7.162s. Passive opens on the storage node.

```
--> 01:53:06 01:53:11 01:53:16 01:53:26...
```