release 0.43 - 10x increase in memory usage (~50 MB -> ~500 MB)
Describe the bug
Release 0.43 dramatically increases the memory usage of the metrics container compared to 0.42.
To Reproduce
Just switching the container version reproduces the problem consistently.
Expected behavior
Memory usage similar to 0.42.
Logs
On the left-hand side of the graph is v0.40, in the middle v0.43, and on the right-hand side v0.42.
Environment
This is a Percona Operator for MongoDB managed deployment of MongoDB 7.0.15, running on MicroK8s 1.31 on Ubuntu 24.04.
Hi, could you share more details about the cluster being monitored? Is it a sharded cluster or just a replica set? Is it a mongos, mongod, arbiter, or config node? Did you enable all collectors? Could you build mongodb_exporter from main and check whether the problem is fixed on your side? We recently faced a similar problem with Mongo 8 and improved performance a bit, but we are not sure it will fix your problem.
Hi, thanks for the swift response.
We have a basic Mongo replica set with 3 x mongod. (We have 5 Mongo clusters with a similar config, and I saw the same problem on all of them.)
The config for the exporter is:
sidecars:
  - image: percona/mongodb_exporter:0.42 # 0.43
    env:
      - name: EXPORTER_USER
        valueFrom:
          secretKeyRef:
            name: mongo-psmdb-db-secrets
            key: MONGODB_CLUSTER_MONITOR_USER
      - name: EXPORTER_PASS
        valueFrom:
          secretKeyRef:
            name: mongo-psmdb-db-secrets
            key: MONGODB_CLUSTER_MONITOR_PASSWORD
      - name: POD_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      - name: MONGODB_URI
        value: "mongodb://$(EXPORTER_USER):$(EXPORTER_PASS)@$(POD_IP):27017"
    args: ["--discovering-mode", "--compatible-mode", "--collect-all", "--log.level=debug", "--mongodb.uri=$(MONGODB_URI)"]
    name: metrics
I can have a go at building a container; are there any instructions on how to do so? I'm not familiar with Go.
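(A rough build-from-source sketch, assuming a recent Go toolchain and git are installed; the repo's README and Makefile are the authoritative reference:)

    git clone https://github.com/percona/mongodb_exporter.git
    cd mongodb_exporter
    make build    # or, if the Makefile target differs: go build -o mongodb_exporter .
    ./mongodb_exporter --version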
We released v0.43.1; please check whether it fixes the problem. It would also be great to have a heap profile, captured with:
curl -s -v http://localhost:42000/debug/pprof/heap > heap.out
Replace localhost:42000 with your exporter's host and port.
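Once you have heap.out, Go's pprof tool (assuming a Go toolchain is available) can summarise where the memory is going:

    go tool pprof -top heap.out            # top allocation sites
    go tool pprof -http=:8080 heap.out     # interactive web UI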
We added more metrics to the collstats collector; that could also be the cause of your problem.
Thank you. Yeah, I was right: it's because of the new collstats metrics we implemented for https://github.com/percona/mongodb_exporter/issues/897. Let me think about how we can make it configurable.
If you don't use the collstats metrics, then instead of enabling all collectors you can enable only the ones you use. How to enable just a subset of collectors is described in https://github.com/percona/mongodb_exporter/blob/main/REFERENCE.md; see the sketch below.
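For example, the sidecar args shown earlier could list individual collectors instead of --collect-all. The specific collector flags below are illustrative; check REFERENCE.md for the exact set your exporter version supports:

    args: ["--discovering-mode", "--compatible-mode", "--collector.diagnosticdata", "--collector.replicasetstatus", "--collector.dbstats", "--log.level=debug", "--mongodb.uri=$(MONGODB_URI)"]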
@blushingpenguin could you please also share the number of collections, and of indexes in them?
@BupycHuk Hi. It would be great to make this configurable. The memory usage going from 50 to 500 MB may be a less important issue than the increase in time-series count: I saw a 10x increase after upgrading 0.42.0 -> 0.43.1, purely because of those changes.
As you can see in #283, which introduced storage stats, a portion of the stats was ignored precisely to avoid blowing up your metrics collector/storage.
This intentionally filters out all information from storageStats.wiredTiger and storageStats.indexDetails, since it can lead to high-cardinality issues:
project := bson.D{
    {
        Key: "$project", Value: bson.M{
            "storageStats.wiredTiger":   0,
            "storageStats.indexDetails": 0,
        },
    },
}
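For context, a minimal runnable sketch of how such a projection can slot into a $collStats aggregation with the official Go driver (illustrative only, not the exporter's actual code; the URI, database, and collection names are placeholders):

    package main

    import (
        "context"
        "fmt"
        "log"

        "go.mongodb.org/mongo-driver/bson"
        "go.mongodb.org/mongo-driver/mongo"
        "go.mongodb.org/mongo-driver/mongo/options"
    )

    func main() {
        ctx := context.Background()

        // Placeholder connection string for illustration.
        client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
        if err != nil {
            log.Fatal(err)
        }
        defer client.Disconnect(ctx)

        // The same $project stage as in the snippet above: drop the two
        // high-cardinality sections from the $collStats output.
        project := bson.D{
            {
                Key: "$project", Value: bson.M{
                    "storageStats.wiredTiger":   0,
                    "storageStats.indexDetails": 0,
                },
            },
        }

        // $collStats with storageStats emits per-collection storage metrics;
        // the projection then strips the noisy sections.
        pipeline := mongo.Pipeline{
            bson.D{{Key: "$collStats", Value: bson.M{"storageStats": bson.M{}}}},
            project,
        }

        // "test" and "mycollection" are placeholder names.
        cursor, err := client.Database("test").Collection("mycollection").Aggregate(ctx, pipeline)
        if err != nil {
            log.Fatal(err)
        }
        defer cursor.Close(ctx)

        for cursor.Next(ctx) {
            var doc bson.M
            if err := cursor.Decode(&doc); err != nil {
                log.Fatal(err)
            }
            fmt.Println(doc)
        }
    }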
Hi @psapezhka, I've already created a PR for that: https://github.com/percona/mongodb_exporter/pull/997. I'm currently on vacation and will proceed with merging next week.
@blushingpenguin could you please also share the number of collections, and of indexes in them?
Sorry for the delay! We've currently got totals of 17 databases, 327 collections, and 522 indexes.
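For a rough, purely illustrative sense of scale (the per-collection figure is an assumption, not a measurement): if the new collstats metrics added on the order of 100 series per collection, 327 collections would contribute roughly 327 x 100 ≈ 33,000 additional series per mongod, which illustrates how per-collection metrics can multiply the series count very quickly.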