Add autocomplete to search
Hey! I was looking to implement autocompletion for searching in the dashboard. Is this something that others want as well and would be good to contribute upstream? Design/implementation advice would also be appreciated.
Right now I'm thinking of hitting the /search endpoint as people type and updating the state, but that would be one request per character, which seems kind of wasteful.
This kind of search (one request per character) could kill the dashboard and put too much load on the cluster (especially big clusters with many resources). Autocomplete is quite complex:
- We'd need to build some kind of tree structure (ternary search tree, suffix tree) and store it in a cache when the backend starts.
- The backend would have to periodically check the cluster and update the tree. Ideally we'd watch for changes.
- The backend endpoint exposed for search autocomplete should be fast, and ideally put no load on the apiserver.
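To make the idea of a cached, apiserver-free lookup concrete, here is a minimal sketch of a backend-side name index. It uses a sorted slice with binary search instead of a ternary search tree, purely for brevity; `nameIndex` and `Complete` are hypothetical names, not dashboard APIs:

```go
package main

import (
	"sort"
	"strings"
)

// nameIndex is a minimal in-memory prefix index over resource names,
// standing in for the tree structure discussed above. Illustrative
// sketch only, not the dashboard's actual cache.
type nameIndex struct {
	names []string // kept sorted so prefix lookups can binary-search
}

func newNameIndex(names []string) *nameIndex {
	sorted := append([]string(nil), names...)
	sort.Strings(sorted)
	return &nameIndex{names: sorted}
}

// Complete returns every indexed name starting with prefix. Lookups
// are served entirely from the backend cache, never the apiserver.
func (ix *nameIndex) Complete(prefix string) []string {
	start := sort.SearchStrings(ix.names, prefix)
	var out []string
	for i := start; i < len(ix.names) && strings.HasPrefix(ix.names[i], prefix); i++ {
		out = append(out, ix.names[i])
	}
	return out
}
```

A real implementation would also need substring (not just prefix) matching, which is where the suffix-tree suggestion comes in.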
There is also the question of how we would sync the cache between multiple dashboard instances.
Add to that: if we do token forwarding, we need to make sure we don't reuse the cache for someone with a different token, to prevent leaking resources they are not allowed to see.
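One way to avoid that leak would be to scope cache entries to the caller's token, for example by hashing the token into the cache key. A hedged sketch (all names below are hypothetical, not dashboard code):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
)

// tokenScopedCache keys cached autocomplete results by a hash of the
// caller's bearer token, so one user's cached names are never served
// to a user presenting a different token. Hypothetical sketch.
type tokenScopedCache struct {
	entries map[string][]string // token hash -> cached names
}

func newTokenScopedCache() *tokenScopedCache {
	return &tokenScopedCache{entries: map[string][]string{}}
}

func hashToken(token string) string {
	sum := sha256.Sum256([]byte(token))
	return hex.EncodeToString(sum[:])
}

func (c *tokenScopedCache) Put(token string, names []string) {
	c.entries[hashToken(token)] = names
}

// Get returns only what was cached for this exact token, or nil.
func (c *tokenScopedCache) Get(token string) []string {
	return c.entries[hashToken(token)]
}
```

The trade-off is memory: every distinct token gets its own copy of the index, which is part of why this needs real architectural discussion.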
Plus, if you are hitting the /search endpoint you are downloading all of the resource data displayed in the lists. You should work with names only.
Hey, I think what I actually mean here is 'search as you type' rather than autocompletion, where we give you suggestions for what you want. For now I'll keep this issue open if we want to discuss autocomplete further. New issue for 'search as you type': #2146
The amount of data the apiserver needs to process for a single search request might be too much for a 'search as you type' feature. We don't want to overload the cluster.
Autocomplete is still very useful because we would not simply 'suggest' what you want, but rather show the actual names of resources matching the query: a kind of search limited to resource names. Implemented correctly, it would be fast and would not overload the cluster.
Hey, not sure I understand the difference between using names and getting resources via the apiserver. Is there a local list of names somewhere, or a more performant way to get just the name of a resource without all of its associated data?
I guess I wasn't 100% clear. Both autocomplete and search-as-you-type would probably have the same impact on the apiserver, but a different impact on our backend.
In the case of autocomplete, we only need to store a name -> type map to match the search query.
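A name -> type map like that could be matched against the query directly. A minimal sketch, assuming a plain Go map (the names, kinds, and the `completeByName` helper are made up for illustration):

```go
package main

import "strings"

// match pairs a resource name with its kind, the only data the
// backend would need to answer an autocomplete query.
type match struct {
	Name string
	Kind string
}

// completeByName scans a name -> kind map and returns entries whose
// name starts with the query. Linear scan for simplicity; a real
// backend would use an indexed structure as discussed earlier.
func completeByName(index map[string]string, query string) []match {
	var out []match
	for name, kind := range index {
		if strings.HasPrefix(name, query) {
			out = append(out, match{Name: name, Kind: kind})
		}
	}
	return out
}
```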
Anyway, we'd probably need an additional pod acting as a "cache-informer" that watches for resource changes in the cluster and prepares a lightweight structure for autocomplete.
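The cache-informer's update loop could be sketched like this: it consumes add/delete events and keeps the name set current. In a real deployment the events would come from a client-go watch against the apiserver; the types below are hypothetical stand-ins so the sketch is self-contained:

```go
package main

// eventKind and event are stand-ins for what a client-go watch would
// deliver (watch.Added, watch.Deleted, ...). Hypothetical sketch.
type eventKind int

const (
	added eventKind = iota
	deleted
)

type event struct {
	kind eventKind
	name string
}

// liveIndex is the lightweight structure the cache-informer maintains:
// just the set of resource names currently in the cluster.
type liveIndex struct {
	names map[string]bool
}

func newLiveIndex() *liveIndex {
	return &liveIndex{names: map[string]bool{}}
}

// Run applies events from ch until the channel is closed, keeping the
// name set in sync with the cluster without polling.
func (ix *liveIndex) Run(ch <-chan event) {
	for ev := range ch {
		switch ev.kind {
		case added:
			ix.names[ev.name] = true
		case deleted:
			delete(ix.names, ev.name)
		}
	}
}
```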
Gotcha @floreks, that makes a lot of sense. If we implement a cache-informer, any ideas on how to ensure that people can't see information they aren't allowed to, like @rf232 mentioned?
I don't have any ideas right now. We'd have to investigate it more and propose an architecture. Unfortunately, I'm currently occupied with other tasks.
No problem! I'll get started on some ideas so we can hit the ground running when you have more time.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale
@cnwalker @maciaszczykm Hi, I would like to pick up this task; let me know what its current status is. Also, is this a duplicate of https://github.com/kubernetes/dashboard/issues/2286?
No, it's completely different. Let's wait until you solve the previous two issues and then discuss this one, as it is much bigger.