Fabian Groffen
I can see your point, 600 seems too long with this config indeed
Not per se, it depends a bit on how many unique metric names you're going to spool through this instance :) It will need memory obviously, and it's going to take some...
That's quite possible. Any reason why you need to use a consistent hash? Try using any_of; IIRC that may have a better distribution, because it doesn't tie itself to...
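For reference, a minimal sketch of the two cluster types in carbon-c-relay's config syntax (hostnames and ports are placeholders, not from the thread):

```
# consistent hash: each metric name maps to a fixed
# position on the hash ring, i.e. a specific server
cluster ch_example
    carbon_ch replication 1
        server1:2003
        server2:2003
        server3:2003
    ;

# any_of: metrics are spread over the members; if one
# member goes down, the others absorb its traffic
cluster any_example
    any_of
        server1:2003
        server2:2003
        server3:2003
    ;
```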
I don't quite understand your setup (probably me). I'm assuming you have a main influx of metrics that goes to carbon-c-relay. c-relay will then distribute the metrics over the available...
so, you basically have 3x the following:

```
a) metrics -> carbon-c-relay -> go-graphitesvc{1,2,3}
b) ... -> go-graphitesvc1 -> backend{1,2,3}
```

You mention a) seems to produce a fair distribution...
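Sketched as a relay config, tier a) might look like the block below; the go-graphitesvc names come from the thread, while port 2003 and the any_of choice are assumptions. Each go-graphitesvc instance would then carry a similar config fanning out to backend{1,2,3}.

```
# tier a): carbon-c-relay spreading the incoming stream
cluster go_graphitesvc
    any_of
        go-graphitesvc1:2003
        go-graphitesvc2:2003
        go-graphitesvc3:2003
    ;

# route everything to that cluster
match *
    send to go_graphitesvc
    stop
    ;
```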
I don't know how you checked that, but I assume by looking at /var/log/graphite-c-relay.log? Many of your questions depend on your scenario: whether you need redundancy, etc. On a local host,...
My experience with kubernetes is very limited, but carbon-c-relay looks at the number of CPUs via sysctl, so it likely gets the host's CPU count, not the number assigned...
In your case, I'd force the number of workers to the value you want (e.g. what's assigned to the pod) using the -w option, set via your configuration management.
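For example (the -f and -w flags are from carbon-c-relay's usage; the config path and worker count here are placeholders):

```
# pin the relay to 4 worker threads instead of the detected CPU count
carbon-c-relay -f /etc/carbon-c-relay.conf -w 4
```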
Is this the full metric, or a part of it that gets forwarded? I can imagine that somehow only the end of a metric is seen, together with its values.
ok, that should be easy to test