Previous and next are not working under logs
Installation method:
Kubernetes version: 1.21.1
Dashboard version: 2.3.1
Operating system: Ubuntu 20.04.2
Steps to reproduce: Deploy a Deployment with colored logs:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: color
spec:
  selector:
    matchLabels:
      app: color
  template:
    metadata:
      labels:
        app: color
    spec:
      containers:
      - image: ubuntu
        name: color
        command: ["/bin/bash", "-c", 'while true; do echo -e "Meet the \e[92mcucumber!"; sleep 1; done']
Go to the pod logs and click the "<" or ">" button.
Observed result: The error "The selected container has not logged any messages yet" is shown.
Expected result: Both the next and previous buttons show the respective logs.
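A quick way to confirm the container is actually emitting the colored lines, independent of the Dashboard UI, is to read the pod log through the core Kubernetes API. A minimal sketch (TypeScript, Node 18+), assuming kubectl proxy is running on localhost:8001 and using a placeholder pod name:

// Sketch: read the raw container log via the core pod-log endpoint, bypassing the Dashboard.
// Assumes `kubectl proxy` on localhost:8001; the pod name below is a placeholder
// for the pod created by the Deployment above.
const pod = "color-<replace-with-actual-suffix>";
const url = `http://localhost:8001/api/v1/namespaces/default/pods/${pod}/log?tailLines=5`;

const res = await fetch(url);
console.log(res.ok ? await res.text() : `request failed: ${res.status}`);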
I have tested it using the provided deployment YAML and haven't encountered this issue. Are you sure you didn't check the "Show previous logs" option?
@floreks
No, I didn't choose the "Show previous logs" option. I just tested the same on a kind cluster on my Mac and there it works as expected (in Chrome). I am facing this problem in the Chrome browser on Windows; I am not sure what the reason is. As mentioned, it works fine in the old Dashboard version 2.0.1 on our k8s 1.18.3 cluster.
Are there any unusual logs in the dev console? Can you also check API calls?
The dev console is empty, no errors or warnings.
Every second click creates a different query:
:method: GET
:path: /api/v1/log/default/color-55bdd5665b-jpjmb/color?logFilePosition=&referenceTimestamp=2021-07-02T13:24:08.961448936+02:00&referenceLineNum=-1&offsetFrom=2500&offsetTo=2600&previous=false
:method: GET
:path: /api/v1/log/default/color-55bdd5665b-jpjmb/color?logFilePosition=&referenceTimestamp=&referenceLineNum=0&offsetFrom=0&offsetTo=100&previous=false
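Diffing the two query strings makes the difference explicit; a minimal sketch with the parameter values copied from the two :path entries above:

// Sketch: diff the query parameters of the two observed log requests.
// Values are copied verbatim from the two :path entries above.
const first: Record<string, string> = {
  logFilePosition: "",
  referenceTimestamp: "2021-07-02T13:24:08.961448936+02:00",
  referenceLineNum: "-1",
  offsetFrom: "2500",
  offsetTo: "2600",
  previous: "false",
};
const second: Record<string, string> = {
  logFilePosition: "",
  referenceTimestamp: "",
  referenceLineNum: "0",
  offsetFrom: "0",
  offsetTo: "100",
  previous: "false",
};
for (const key of Object.keys(first)) {
  if (first[key] !== second[key]) {
    console.log(`${key}: "${first[key]}" vs "${second[key]}"`);
  }
}
// Prints referenceTimestamp, referenceLineNum, offsetFrom and offsetTo:
// the second request carries no reference point at all.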
And the response of this call is what you see in the log viewer?
Every second response is more or less empty:
{
  "info": {
    "podName": "color-55bdd5665b-jpjmb",
    "containerName": "color",
    "initContainerName": "",
    "fromDate": "",
    "toDate": "",
    "truncated": false
  },
  "selection": {
    "referencePoint": {
      "timestamp": "",
      "lineNum": 0
    },
    "offsetFrom": 0,
    "offsetTo": 0,
    "logFilePosition": ""
  },
  "logs": []
}
the healthy one:
{
  "info": {
    "podName": "color-55bdd5665b-jpjmb",
    "containerName": "color",
    "initContainerName": "",
    "fromDate": "2021-07-02T14:46:37.200487993+02:00",
    "toDate": "2021-07-02T14:48:16.360482711+02:00",
    "truncated": false
  },
  "selection": {
    "referencePoint": {
      "timestamp": "2021-07-02T14:06:33.223579256+02:00",
      "lineNum": -1
    },
    "offsetFrom": 2400,
    "offsetTo": 2500,
    "logFilePosition": ""
  },
  "logs": [
    {
      "timestamp": "2021-07-02T14:46:37.200487993+02:00",
      "content": "Meet the \u001b[92mcucumber!"
    },
    ...
  ]
}
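Based purely on the two payloads above, the broken case can be recognized from the selection block alone; a small sketch of that check (the interface names are hypothetical illustrations of the JSON shown above, not taken from the Dashboard sources):

// Sketch: distinguish the empty and healthy responses by their selection block.
interface LogSelection {
  referencePoint: { timestamp: string; lineNum: number };
  offsetFrom: number;
  offsetTo: number;
  logFilePosition: string;
}
interface LogResponse {
  selection: LogSelection;
  logs: { timestamp: string; content: string }[];
}

// True for the first payload above: empty reference timestamp, zero offsets, no log lines.
function isEmptyResponse(r: LogResponse): boolean {
  return r.logs.length === 0 && r.selection.referencePoint.timestamp === "";
}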
Interesting. The parameters are missing. We have to find a way to reproduce this.
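One plausible failure mode consistent with the queries above is the client paging before it has a valid reference point. A hypothetical guard, purely as an illustration and not taken from the Dashboard code:

// Hypothetical sketch: only page when a reference point is known, so the
// "<" / ">" buttons never send a request with an empty referenceTimestamp.
interface ReferencePoint { timestamp: string; lineNum: number }

function canPage(ref: ReferencePoint | undefined): boolean {
  return !!ref && ref.timestamp !== "";
}

function buildPageQuery(ref: ReferencePoint, offsetFrom: number, offsetTo: number): string {
  return new URLSearchParams({
    logFilePosition: "",
    referenceTimestamp: ref.timestamp,
    referenceLineNum: String(ref.lineNum),
    offsetFrom: String(offsetFrom),
    offsetTo: String(offsetTo),
    previous: "false",
  }).toString();
}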
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
/remove-lifecycle stale
Installation method: kubeadm
Kubernetes version: 1.24.7
Dashboard version: 2.6.1
Operating system: CentOS Linux release 7.7.1908
Same problem.