TiDB Dashboard not working
Hey, I'm running into an issue with TiDB's dashboard. I deployed TiDB using the Helm charts, and everything seems to have gone through fine. However, I can't log in or use the dashboard. When I try to log in, I get: "Sign in failed: authenticate failed, caused by: Request failed with status code 404 from TiDB API:". There is also an error in the top right of my browser (before I even try logging in) that says: "System Health Check Failed. A required component NgMonitoring is not started in this cluster. Some features may not work." Does anyone have any suggestions?
I've attached my logs for TiDB and PD below, as well as the TiDB cluster spec:
tidb-tidb-0.log
tidb-pd-0.log
TiDB Cluster Spec:
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  annotations:
    meta.helm.sh/release-name: tidb
    meta.helm.sh/release-namespace: tidb
  creationTimestamp: '2022-07-29T21:43:22Z'
  generation: 21
  labels:
    app.kubernetes.io/instance: tidb
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: tidb
    app.kubernetes.io/version: 1.16.0
    helm.sh/chart: tidb-0.1.0
  name: tidb
  namespace: tidb
  resourceVersion: '413848320'
  selfLink: /apis/pingcap.com/v1alpha1/namespaces/tidb/tidbclusters/tidb
  uid: 10c7c857-7ad0-48e2-9128-9cc9a970c991
spec:
  discovery: {}
  enableDynamicConfiguration: true
  enablePVReclaim: false
  helper:
    image: alpine:3.16.0
  imagePullPolicy: IfNotPresent
  pd:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app.kubernetes.io/component
              operator: In
              values:
              - pd
          topologyKey: kubernetes.io/hostname
    baseImage: pingcap/pd:v6.1.0
    config: |
      [dashboard]
      internal-proxy = true
    enableDashboardInternalProxy: true
    hostNetwork: false
    imagePullPolicy: IfNotPresent
    maxFailoverCount: 3
    nodeSelector:
      node-purpose: tidb-pd
    replicas: 1
    requests:
      storage: 10Gi
    storageClassName: gp3-tidb
    tolerations:
    - effect: NoSchedule
      key: node-purpose
      operator: Equal
      value: tidb-pd
  pvReclaimPolicy: Retain
  schedulerName: default-scheduler
  services:
  - name: pd
    type: ClusterIP
  tidb:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app.kubernetes.io/component
              operator: In
              values:
              - tidb
          topologyKey: kubernetes.io/hostname
    annotations:
      tidb.pingcap.com/sysctl-init: 'true'
    baseImage: pingcap/tidb:v6.1.0
    config: |
      [log]
      [log.file]
      max-backups = 3
      [performance]
      tcp-keep-alive = true
    hostNetwork: false
    imagePullPolicy: IfNotPresent
    maxFailoverCount: 3
    nodeSelector:
      node-purpose: tidb-tidb
    podSecurityContext:
      sysctls:
      - name: net.ipv4.tcp_keepalive_time
        value: '300'
      - name: net.ipv4.tcp_keepalive_intvl
        value: '75'
      - name: net.core.somaxconn
        value: '32768'
    replicas: 1
    separateSlowLog: true
    service:
      exposeStatus: true
      type: ClusterIP
    slowLogTailer:
      image: busybox:1.33.0
      imagePullPolicy: IfNotPresent
      limits:
        cpu: 100m
        memory: 50Mi
      requests:
        cpu: 20m
        memory: 5Mi
    tlsClient: {}
    tolerations:
    - effect: NoSchedule
      key: node-purpose
      operator: Equal
      value: tidb-tidb
  tikv:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app.kubernetes.io/component
              operator: In
              values:
              - tikv
          topologyKey: kubernetes.io/hostname
    baseImage: pingcap/tikv:v6.1.0
    config: ''
    enableNamedStatusPort: true
    hostNetwork: false
    imagePullPolicy: IfNotPresent
    maxFailoverCount: 3
    nodeSelector:
      node-purpose: tidb-tikv
    replicas: 1
    requests:
      storage: 100Gi
    storageClassName: gp3-tidb
    tolerations:
    - effect: NoSchedule
      key: node-purpose
      operator: Equal
      value: tidb-tikv
  timezone: UTC
  tlsCluster: {}
  topologySpreadConstraints:
  - topologyKey: topology.kubernetes.io/zone
  version: ''
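For reference, the "NgMonitoring is not started" warning usually means no NgMonitoring component has been deployed for this cluster. With tidb-operator (the TidbNGMonitoring CRD is available in v1.3 and later, if I recall correctly), NgMonitoring is deployed as a separate custom resource rather than inside the TidbCluster spec. Below is a minimal sketch, not a drop-in fix: the object name, the cluster reference, the version, and the storage class are assumptions chosen to match the cluster above and may need adjusting.

apiVersion: pingcap.com/v1alpha1
kind: TidbNGMonitoring
metadata:
  name: tidb                     # assumption: reuse the release name
  namespace: tidb
spec:
  clusters:
  - name: tidb                   # must match the TidbCluster name and namespace above
    namespace: tidb
  ngMonitoring:
    baseImage: pingcap/ng-monitoring
    version: v6.1.0              # assumption: same version as the cluster
    requests:
      storage: 10Gi
    storageClassName: gp3-tidb   # assumption: same storage class as PD/TiKV

Once the ng-monitoring pod is running, the "System Health Check Failed" banner should clear; the sign-in 404 from the TiDB API is a separate problem and may still need investigation.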
I'm not familiar with TiDB Dashboard; it's a separate project, and its maintainers may not notice the issue here. Maybe you can ask on AskTUG (the TiDB user group). @crazycs520, do you have any idea about this?
Please ask here: https://github.com/pingcap/tidb-dashboard @brianlu-scale