kdash
Improve log streaming for containers
Container logs are streamed now, but it's not perfect; there are some issues that need to be fixed:
- [ ] Due to timeout and re-poll, there might be some random missing log lines. Maybe this can be fixed by recording the last update time and sending the elapsed duration as `since_seconds` in `LogParams`
- [ ] The first 10 tailed lines could perhaps be buffered so they show up immediately
- [x] Randomly the wrong log appears, even though there is an explicit check for the id and the container id is sent in `LogParams`. Needs investigation
- [ ] Repetitive logging on some workloads, such as wasm binaries
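The first checklist item could be sketched roughly like this: a minimal, std-only sketch where the `LogTail` type is hypothetical, and the computed value would in practice be passed as `since_seconds` in kube's `LogParams`.

```rust
use std::time::Instant;

/// Hypothetical helper: remembers when log lines were last received so
/// a re-poll can ask the API only for lines emitted since then.
struct LogTail {
    last_update: Instant,
}

impl LogTail {
    fn new() -> Self {
        Self { last_update: Instant::now() }
    }

    /// Call whenever a batch of log lines arrives.
    fn mark_received(&mut self) {
        self.last_update = Instant::now();
    }

    /// Value to send as `since_seconds` on the next poll. Rounding up
    /// by one second overlaps the windows slightly, trading a few
    /// duplicate lines for fewer missing ones.
    fn since_seconds(&self) -> i64 {
        self.last_update.elapsed().as_secs() as i64 + 1
    }
}
```

Duplicates caused by the overlap could then be dropped by comparing against the last rendered line.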
It would be really cool if the log view also supported ANSI color codes - today they are rendered as plain text:
```
│Pods (ns: build) [37] -> Containers [1] -> Logs (dev2) | copy <c> | Containers <esc>───────────────────────────────────────│
│>....[K[14:15:35 INFO]: Executed 2 commands from function 'monumenta:items/potion_injector/login'[m │
│>....[K[14:15:35 INFO]: [0;33mRefreshed class for player 'DailyDecay'[m │
│>....[K[14:15:35 INFO]: Executed 171 commands from function 'monumenta:class_selection/anticheat'[m │
│>....[K[14:15:35 INFO]: Set [MusicCooldown] for DailyDecay to 0[m
```
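Until real color rendering lands, the escape residue visible above (the `[K` and `[0;33m` fragments) could at least be stripped before display. A minimal sketch of a CSI-sequence stripper, not kdash's actual code:

```rust
/// Strip ANSI CSI escape sequences (e.g. "\x1b[0;33m" or "\x1b[K")
/// from a log line. A CSI sequence starts with ESC '[' and ends at
/// the first byte in the range 0x40..=0x7e.
fn strip_ansi(line: &str) -> String {
    let mut out = String::with_capacity(line.len());
    let mut chars = line.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '\x1b' {
            if chars.peek() == Some(&'[') {
                chars.next(); // consume '['
                while let Some(&n) = chars.peek() {
                    chars.next(); // consume parameter/final byte
                    if ('\x40'..='\x7e').contains(&n) {
                        break; // final byte ends the sequence
                    }
                }
            }
            // bare ESC (no '[') is simply dropped in this sketch
        } else {
            out.push(c);
        }
    }
    out
}
```

Actually rendering the colors would instead map the parsed SGR parameters (like `0;33`) onto the TUI's own style types.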
Really cool tool, thanks for building it. Neat project!
Thanks. I'm not sure if the streamed logs retain the ANSI codes. I'll check if it's possible.
I'm not sure if this is the same issue, but...
When I open the logs for a pod, the log lines that are already there are loaded very slowly. Is that a known issue or shall I provide more information?
@vpartington Yes, it's known - it's the second item in the TODO list above. It's odd behaviour from the kube Rust library that streams the lines, so I should probably buffer the lines before rendering.
@clux do you have any pointers on improving the log streaming?
For the first bullet point: it might be necessary to handle EOFs and restart `logs_stream` somewhere to allow infinite log tailing - like what kube does for regular event watching - but I'm not sure what the best way to do that safely is, since we don't get resourceVersions from tailing logs. If all we have is `since_seconds`, then you don't have the greatest guarantees for streaming (but that's also in line with watches in general on Kubernetes - you're not guaranteed to get every event). There's a related upstream issue: https://github.com/kube-rs/kube/issues/1075
We'd definitely want better handling for this in kube. Maybe it makes sense to do something very similar to `watcher`, where we do an initial `logs` call (to get initial data up to a certain point in time / number of lines - which would also help with the second bullet point) and then repeated `logs_stream` calls after that. Maybe there's also some inspiration to be taken from `kubectl logs`.
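The EOF-and-restart idea could look something like the following, sketched synchronously with a mock source for illustration. The real thing would be an async loop around kube's `logs_stream`, and all names here are hypothetical:

```rust
/// Stand-in for a log stream that may hit EOF at any time; in kdash
/// this role would be played by kube's `logs_stream`.
trait LogSource {
    /// Open the stream and read lines starting `since_seconds` ago,
    /// until EOF. Returns the lines read.
    fn open(&mut self, since_seconds: i64) -> Vec<String>;
    /// Whether the tail should keep running (e.g. the pod still exists).
    fn live(&self) -> bool;
}

/// Restart the stream on every EOF, narrowing the window after a
/// successful read so each restart only re-fetches a short overlap.
fn tail<S: LogSource>(source: &mut S, initial_since: i64) -> Vec<String> {
    let mut since_seconds = initial_since;
    let mut lines = Vec::new();
    while source.live() {
        let batch = source.open(since_seconds);
        if !batch.is_empty() {
            // Fresh lines arrived: the next restart only needs a
            // one-second overlap window.
            since_seconds = 1;
        }
        lines.extend(batch);
    }
    lines
}

/// Mock source that EOFs after every batch, to demo the restart loop.
struct MockSource {
    batches: Vec<Vec<String>>,
}

impl LogSource for MockSource {
    fn open(&mut self, _since_seconds: i64) -> Vec<String> {
        if self.batches.is_empty() { Vec::new() } else { self.batches.remove(0) }
    }
    fn live(&self) -> bool {
        !self.batches.is_empty()
    }
}
```

The initial-history call suggested above would map onto a first `open` with a large window (or a line limit), with the loop taking over afterwards.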
Yeah, the log streaming part was the trickiest for me. Whatever I implemented feels like a workaround.