helm-charts
[kube-prometheus-stack] how to use persistent volumes instead of emptyDir
Hey fellows, I would like to use persistent volumes instead of the default emptyDir config. Does anybody know how to do that? I would really appreciate an example; I'm getting confused with the PV creation and also the PVC.
Yeah I would like to know that as well please
So I found the solution for the Prometheus stack StatefulSet. You can either enable it in the values file under prometheus.prometheusSpec.storageSpec
or provide an external config file. For instance, my config file looks like this:
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: gp2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
You can then reference this config file when installing or upgrading your helm chart like this:
helm install -f prometheus-custom-values.yaml kube-prometheus-stack kube-prometheus-stack -n monitoring
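To confirm the claim was actually created, a quick check (using the monitoring namespace from the command above):

kubectl get pvc -n monitoring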
Now I still have to figure out how to enable a volume for Alertmanager.
Here is the config for the Alertmanager volume:
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: longhorn-2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
Thank you! I think this ticket can be marked as solved.
Did this create a PVC for you? I can't find any in my cluster after applying the prometheusSpec.
Hi, I also don't see any PVC created:
storageSpec:
  volumeClaimTemplate:
    spec:
      storageClassName: ceph-block
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 2Ti
and there is just one storageSpec:, nested under the proper prometheusSpec:.
thank you
Hi, when trying it this way I get an error: failed to provision volume with StorageClass: could not create volume in EC2: UnauthorizedOperation: You are not authorized to perform this operation. But when I create a PVC directly, I don't get this error.
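A note on that error (an assumption, since the details depend on the cluster): on AWS, dynamic provisioning is done by the EBS CSI driver (or the legacy in-tree provisioner), whose IAM role needs permissions such as ec2:CreateVolume; a hand-made PVC may be going through a differently configured StorageClass. The full provisioning event is visible on the pending claim (the claim name is a placeholder):

kubectl -n monitoring describe pvc <pvc-name>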
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
Same here, no PVC created after adding the volumeClaimTemplate spec.
Same on my side. I specify the following:
prometheusSpec:
  storageSpec:
    volumeClaimTemplate:
      spec:
        storageClassName: default
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
and no PVC is created after that.
Try this:
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: foo
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 30Gi
In my case, it works. If you miss the prometheus layer above prometheusSpec, the helm chart's template will not make a PVC. (The full path is prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.)
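One way to verify the nesting before installing (a sketch; the prometheus-community repo alias and the values file name are assumptions) is to render the chart locally and check that the storage spec shows up in the generated Prometheus resource:

helm template kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -f my-values.yaml | grep -B 2 -A 8 volumeClaimTemplate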
Hmm, strange.
I have it like this and it does not work. Do you think there could be conflicting statements in the [Other configurations]?
prometheus:
  enabled: true
  ................................ [Other configurations from values.yaml]
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: ceph-block
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 2Ti
          selector:
            matchLabels:
              app: prometheus
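A possible explanation (my assumption, not confirmed in this thread): a claim that carries a selector is never dynamically provisioned by Kubernetes, so unless a PV labeled app: prometheus already exists, the claim stays Pending. You can check for matching volumes with:

kubectl get pv -l app=prometheus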
Facing the same issue here (trying to use persistent volumes for prometheus / alertmanager)
I had the same issue. Found these two issues #563 and #655 and am now good.
I'm using the kube-prometheus-stack-45.29.0 helm chart, and below is the relevant part of my values:
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          storageClassName: cstor-csi-disk
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
Adding a metadata name under volumeClaimTemplate: was needed for me because of the name-too-long issue/bug:
volumeClaimTemplate:
  metadata:
    name: data
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
I have a suspicion:
I saw that this solution worked only when installing the chart; on an upgrade it was ignored. I guess the prometheus operator cannot handle the migration from one storage (emptyDir is the default, I guess) to another and therefore ignores it, because otherwise the data would just be lost.
I do not know if there is a flag or something to force this change, but that could be the solution?
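That suspicion is consistent with Kubernetes behavior: volumeClaimTemplates on an existing StatefulSet are immutable, so the operator would have to recreate the StatefulSet rather than patch it. To at least confirm the storage spec reached the operator after an upgrade, you can inspect the Prometheus custom resource (the monitoring namespace is an assumption):

kubectl -n monitoring get prometheus -o jsonpath='{.items[0].spec.storage}'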
Facing the same issue.
Why not just add a parameter persistent: true
in alertmanager/pushgateway and the prom server to simplify all this?
Because storage is a complex topic and there's no one-size-fits-all solution (for example, storage classes and disk sizes differ).
This definitely looks like a bug. I tried installing 55.4.1 and the Prometheus PVC would not get created no matter what I tried. I started successively trying lower releases (jumping several at a time), and it finally worked with 48.5.0. So the bug was introduced somewhere between those two versions.
Thank you for the hint. I tried version 55.7.1, but no PVCs were created, whereas with version 48.5.0 it worked.
Hello, I am also getting an error.
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: gp2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: longhorn-2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
The pods and StatefulSet are in a Pending state; no PV exists and the PVC is Pending.
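When pods and claims hang in Pending like this, the scheduler and the provisioner usually report the reason as events; a quick way to see them (assuming the monitoring namespace):

kubectl -n monitoring get events --sort-by=.lastTimestamp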
Try this:
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: foo
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 30Gi
In my case, it works. If you miss the prometheus layer above prometheusSpec, the helm chart's template will not make a PVC. (prometheus.prometheusSpec.storageSpec.volumeClaimTemplate)
I tried your YAML, but the PVC says Pending. It didn't work.
This is still an issue in the latest version, 56.16.0.
To be honest, I don't see the issue! The comments above tell you how to add storage. With that, it's done.
If the PVCs are pending, then the issue belongs to your infrastructure, which is not part of kube-prometheus-stack.
Creating the PV with a label:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-server
  labels:
    volumeIdentifier: prometheus-server
spec:
  ...
and using a selector in the values.yaml worked for me:
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: "csi-cephfs-sc"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi
          selector:
            matchLabels:
              volumeIdentifier: prometheus-server
This was not necessary with 48.5.0, but now it works fine for me.
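If you go this pre-created-PV route, it may also help to confirm that the volume really carries the label the selector expects, e.g. with the label from the example above:

kubectl get pv -l volumeIdentifier=prometheus-server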
I upgraded the chart to 58.1.3 and the CRDs accordingly. It is working.
storageSpec:
  volumeClaimTemplate:
    spec:
      storageClassName: xxxxx
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Gi
  ## Using tmpfs volume
  ##
  # emptyDir:        # <-------------- comment out
  #   medium: Memory
For me, everything else was correct; I had to comment out the emptyDir right below it.
I have the same issue with chart 61.7.1. If I create a PV manually, it is not used and the DB is not persistent. It actually remains an emptyDir
in the definition...
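One way to see what actually landed in the workload (the StatefulSet name below is a placeholder; list candidates with kubectl get sts -n monitoring): if the storage spec was ignored, this command prints nothing and the pod template instead contains an emptyDir volume.

kubectl -n monitoring get sts <prometheus-statefulset> -o jsonpath='{.spec.volumeClaimTemplates[*].metadata.name}'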