helm-charts
Dashboards end up as empty files on disk
Chart: 6.2.1
Basically the issue is the same as here: https://github.com/helm/charts/issues/22464
Files do appear in the volume but are 0 bytes.
I can write into them myself and read it back, so the volume itself is OK.
I can import these dashboards via grafana. It is only the provisioning that fails.
t=2021-01-30T20:20:55+0000 lvl=eror msg="failed to load dashboard from " logger=provisioning.dashboard type=file name=default file=/var/lib/grafana/dashboards/default/kube-eagle.json error=EOF
t=2021-01-30T20:20:55+0000 lvl=eror msg="failed to load dashboard from " logger=provisioning.dashboard type=file name=default file=/var/lib/grafana/dashboards/default/node-exporter.json error=EOF
t=2021-01-30T20:20:55+0000 lvl=eror msg="failed to load dashboard from " logger=provisioning.dashboard type=file name=default file=/var/lib/grafana/dashboards/default/elasticsearch.json error=EOF
None of the other pods contain any errors regarding failed downloads.
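For anyone wanting to confirm the same symptoms, the files and the volume can be checked directly in the pod; the monitoring namespace, the grafana-0 pod name and the grafana container name below are assumptions, adjust them to your release:
  # list the provisioned dashboard files; the broken ones show up with a size of 0
  kubectl -n monitoring exec grafana-0 -c grafana -- ls -l /var/lib/grafana/dashboards/default
  # confirm the volume itself is writable and readable, as described above
  kubectl -n monitoring exec grafana-0 -c grafana -- sh -c 'echo probe > /var/lib/grafana/dashboards/default/probe.txt && cat /var/lib/grafana/dashboards/default/probe.txt'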
values:
  persistence:
    type: statefulset
    enabled: true
    size: 1Gi
  admin:
    existingSecret: grafana-admin
  sidecar:
    datasources:
      enabled: true
    dashboards:
      enabled: true
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        - name: 'default'
          orgId: 1
          folder: ''
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /var/lib/grafana/dashboards/default
  dashboards:
    default:
      kube-eagle:
        # https://grafana.com/grafana/dashboards/9871/revisions
        gnetId: 9871
        revision: 2
        datasource: Prometheus
      elasticsearch:
        # https://grafana.com/grafana/dashboards/2322/revisions
        gnetId: 2322
        revision: 4
        datasource: Prometheus
      node-exporter:
        # https://grafana.com/grafana/dashboards/1860/revisions
        gnetId: 1860
        revision: 22
        datasource: Prometheus
Note that it worked for me w/ chart version 6.1.16:
config excerpt:
values:
  - dashboards:
      default:
        aws-billing:
          # Ref: https://grafana.com/dashboards/139
          gnetId: 139
          revision: 15
          datasource: CloudWatch
        redis:
          # Ref: https://grafana.com/dashboards/969
          gnetId: 969
          revision: 3
          datasource: CloudWatch
  - dashboardProviders:
      dashboardproviders.yaml:
        apiVersion: 1
        providers:
          - name: 'default'
            folder: ''
            options:
              path: /var/lib/grafana/dashboards/default
Grafana chart 6.3.0 seemed to resolve this issue for me.
I deleted grafana + pvc and started from scratch, and everything just started working again. Dashboards got provisioned properly. 👍
I'm having this problem even with chart 6.3.0. As far as I can see, chart 6.3.0 doesn't change anything regarding dashboards or persistent volumes; it only bumps the Grafana image version.
I'm using helm 3.5.0 and Kubernetes 1.19. ConfigMap entries are still empty when loading dashboards from a file (.json).
Same here, the destination file is empty and the log shows error=EOF.
I'm having the same issue: zero-byte files and the error=EOF. I'm using grafana version 6.16.5.
Actually, I resolved the issue. The dashboard JSON files could not be found during the helm install, so it created zero-byte files. As soon as I rectified this, the correct files appeared in the right place.
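If it helps anyone else, one way to catch this before installing is to render the chart locally and look for dashboard entries whose bodies come out empty. This only applies to dashboards loaded from chart-local files; gnetId/url dashboards are downloaded at pod start instead. The release name, repo alias and values file below are assumptions:
  # render the manifests with your values without installing anything
  helm template grafana grafana/grafana -f values.yaml > rendered.yaml
  # file-based dashboards are keyed by <name>.json in the rendered ConfigMaps;
  # an empty body right after such a key points at a file the chart could not find
  grep -n -A 2 '\.json:' rendered.yaml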
Related (possibly duplicate) issues include #764 and #27.
The dashboard curl init container silently drops errors because of the -s and -f options (curl -skf ...); the stdout redirect still creates the target file, so a failed download just leaves an empty file without telling you what happened. Luckily, the main grafana container ships with wget to help troubleshoot.
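The empty-file behaviour is easy to reproduce outside the chart, and wget can be run from the main container to see the real error; the namespace, pod and container names below are assumptions:
  # the shell creates the redirect target before curl runs, and -s/-f suppress all output,
  # so a failed download still leaves a 0-byte file behind (.invalid never resolves)
  curl -skf https://does-not-resolve.invalid/dashboard.json > empty.json; echo "curl exited $?"
  ls -l empty.json
  # from inside the grafana container, wget prints what actually went wrong (DNS, 404, redirect, ...)
  kubectl -n monitoring exec grafana-0 -c grafana -- wget -O /tmp/test.json https://grafana.com/api/dashboards/9871/revisions/2/download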
For me, I noticed that grafana.com was not resolving, but grafana.com. (with a trailing dot) was. It might be due to the search domains or the options ndots:5 resolver config, or some other weird nocloud DNS problem, but I'm not sure. This means that using gnetId doesn't work as expected, since the Grafana API URL is hard-coded. As a workaround, manually specifying the dashboard via url with the absolute FQDN does the trick, and the files are created with non-zero size.
Instead of:
dashboards:
  ceph:
    ceph-cluster:
      gnetId: 2842
      revision: 14
      datasource: Prometheus
Try using a url instead; notice the extra . in https://grafana.com./api:
dashboards:
  ceph:
    ceph-cluster:
      url: https://grafana.com./api/dashboards/2842/revisions/14/download
      datasource: Prometheus
The url format is https://grafana.com./api/dashboards/{{ gnetId }}/revisions/{{ revision }}/download. datasource and base64Content still work as expected.
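To check whether the same trailing-dot DNS behaviour applies in your cluster, the two names can be compared from inside the pod; the chart's curl uses -k, so --no-check-certificate below mirrors that (the namespace, pod and container names are assumptions):
  # relative name, subject to the pod's search domains and ndots setting
  kubectl -n monitoring exec grafana-0 -c grafana -- wget -q --no-check-certificate -O /dev/null https://grafana.com/api/dashboards/2842/revisions/14/download && echo "grafana.com resolves"
  # absolute name with the trailing dot, bypassing the search list
  kubectl -n monitoring exec grafana-0 -c grafana -- wget -q --no-check-certificate -O /dev/null https://grafana.com./api/dashboards/2842/revisions/14/download && echo "grafana.com. resolves"
  # the search domains and ndots value that cause the difference are in the pod's resolver config
  kubectl -n monitoring exec grafana-0 -c grafana -- cat /etc/resolv.conf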
For reference, check the source for download_dashboards.sh: https://github.com/grafana/helm-charts/blob/589022e7e2680f0f6dd99ae6c9d015755fef6e5d/charts/grafana/templates/configmap.yaml#L52-L82
Don't forget, the keys under dashboards also need to map to the names of your dashboardProviders, i.e.:
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'ceph'
        folder: 'Ceph'
        orgId: 1
        type: file
        disableDeletion: true
        allowUiUpdates: false
        options:
          path: /var/lib/grafana/dashboards/ceph
      - name: 'nginx'
        folder: 'NginX'
        orgId: 1
        type: file
        disableDeletion: true
        allowUiUpdates: false
        options:
          path: /var/lib/grafana/dashboards/nginx
# dashboards per provider, use `dashboardProviders.*.providers[].name` as key.
dashboards:
  ceph:
    ceph-cluster:
      url: https://grafana.com./api/dashboards/2842/revisions/14/download
      datasource: Prometheus
  nginx:
    nginx-ingress:
      url: https://grafana.com./api/dashboards/9614/revisions/1/download
      datasource: Prometheus
Hi, can you please elaborate on what you did exactly to fix this? How did you rectify the dashboard files so they could be found?
Hey, did anyone solve this?
This is also a problem when links to dashboards return HTTP redirects. The files end up existing, but are 0 bytes.
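That fits the curl -skf invocation mentioned above: -f only treats HTTP 4xx/5xx as failures, and without -L a 3xx response counts as success, so curl writes the (usually tiny or empty) redirect body and exits 0. A rough illustration, assuming the plain-HTTP URL below redirects to HTTPS like most sites do:
  # without -L the redirect is not followed; the output is the redirect response body, not a dashboard
  curl -skf http://grafana.com/api/dashboards/2842/revisions/14/download > redirected.json; echo "exit $?"
  ls -l redirected.json
  # with -L curl follows the redirect and the real JSON payload is written
  curl -skfL http://grafana.com/api/dashboards/2842/revisions/14/download > followed.json
  ls -l followed.json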