yabeda-rails
Not working for clustered puma
When running yabeda-rails with clustered Puma, streaming metrics does not work. Any ideas?
We are running it via the Puma config: bundle exec puma -C config/puma.rb
# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.7 1.3 1415016 217088 ? Ssl 13:05 0:09 puma 3.12.0 (tcp://0.0.0.0:3000) [app]
root 22 0.1 1.3 1632116 214932 ? Sl 13:06 0:01 puma: cluster worker 0: 1 [app]
root 27 0.0 1.1 1565552 189348 ? Sl 13:06 0:00 puma: cluster worker 1: 1 [app]
root 32 0.4 1.6 2806432 264888 ? Sl 13:06 0:05 puma: cluster worker 2: 1 [app]
root 37 0.0 1.1 1565552 189680 ? Sl 13:06 0:00 puma: cluster worker 3: 1 [app]
root 307 0.0 0.0 4296 728 pts/0 Ss 13:15 0:00 sh
root 581 0.0 0.0 36644 2828 pts/0 R+ 13:26 0:00 ps aux
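For context, a minimal clustered Puma config along these lines produces a process tree like the one above. This is a sketch; the worker count matches the four workers shown, but the thread settings are assumptions:

```ruby
# config/puma.rb -- minimal clustered setup (sketch; thread counts are assumptions)
workers 4        # forks 4 child worker processes (cluster mode)
threads 1, 5     # min/max threads per worker
port 3000        # matches tcp://0.0.0.0:3000 above
preload_app!     # load the app in the master before forking
```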
Try replacing the default prometheus-client with prometheus-client-mmap:
# Gemfile
gem "yabeda"
gem "yabeda-rails"
gem "prometheus-client-mmap"
gem "yabeda-prometheus"
See https://github.com/yabeda-rb/yabeda-prometheus/issues/4 for discussion and more information
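With that Gemfile, the metrics endpoint is typically exposed by mounting the yabeda-prometheus exporter in the Rails routes. A sketch; the /metrics path is a common convention, not mandated:

```ruby
# config/routes.rb
Rails.application.routes.draw do
  # Serves all registered Yabeda metrics in Prometheus text format
  mount Yabeda::Prometheus::Exporter => "/metrics"
end
```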
Hey @mariokam, support for the GitLab client moved to another gem: https://github.com/yabeda-rb/yabeda-prometheus-mmap. Try it out.
Ended up using yabeda-prometheus-mmap too. Works great!
Unfortunately, this currently doesn't work with:
gem 'yabeda'
gem 'yabeda-rails'
gem 'yabeda-puma-plugin'
gem 'yabeda-prometheus-mmap'
Individual workers supply their own metrics (they are not aggregated).
That's weird. Can you please create a reproduction? E.g. write a Ruby script using inline bundler and publish it as a secret gist. Also, please show the exact versions of all related gems – maybe some dependencies got updated…
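A reproduction skeleton along these lines could work. The gem list is taken from the comment above; everything else (the gem sources, the version-printing snippet) is an assumption about what a useful repro would include:

```ruby
# repro.rb -- skeleton for a minimal reproduction using inline bundler
require "bundler/inline"

gemfile do
  source "https://rubygems.org"
  gem "rails"
  gem "puma"
  gem "yabeda"
  gem "yabeda-rails"
  gem "yabeda-puma-plugin"
  gem "yabeda-prometheus-mmap"
end

# Print the exact resolved versions, as requested above
Gem.loaded_specs
   .slice("yabeda", "yabeda-rails", "yabeda-puma-plugin", "yabeda-prometheus-mmap")
   .each { |name, spec| puts "#{name} #{spec.version}" }
```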
Also, just to confirm: by “individual workers” you mean forks of the main Puma process that aren't synced (so you have a single /metrics endpoint that responds with metrics from a different child worker process on every request)? Because different Puma clusters on different machines/containers won't be synced – it is Prometheus' job to scrape each of them and aggregate.
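To illustrate what un-synced forks mean in plain Ruby (no yabeda or Prometheus involved — this is just the underlying fork semantics): each child gets a copy-on-write copy of the parent's memory, so in-memory metric stores diverge between workers unless they write to a shared location such as mmap-ed files.

```ruby
# Plain-Ruby illustration of why in-memory metrics diverge across
# forked workers (what Puma's cluster mode does).
counter = 0  # in-memory "metric", lives in this process only

reader, writer = IO.pipe

pid = fork do
  reader.close
  counter += 1          # increments the child's copy-on-write copy
  writer.puts counter   # report the child's view back to the parent
  writer.close
end

writer.close
Process.wait(pid)

child_value = reader.read.to_i
reader.close

puts "child saw: #{child_value}"   # 1
puts "parent sees: #{counter}"     # still 0 -- forks don't share memory
```

This is exactly why prometheus-client-mmap keeps metrics in shared files on disk instead of per-process memory.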
I've solved this. It took a while, but the answer was calling the Prometheus client's configure before fork.
If you don't, it gets initialised after fork and multiple directories are created in /tmp.
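In puma.rb terms, that means running the configure block at the top level of the config, so it executes in the master process before workers are forked, rather than inside on_worker_boot. A sketch, assuming prometheus-client-mmap's configure API; the directory path is an example, not taken from the thread:

```ruby
# config/puma.rb (sketch)
require "prometheus/client"

# Runs in the Puma master, BEFORE workers are forked, so every worker
# inherits the same multiprocess files directory instead of each worker
# creating its own directory under /tmp after the fork.
Prometheus::Client.configure do |config|
  config.multiprocess_files_dir = "tmp/prometheus"  # example path
end

workers 4
preload_app!
```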