Support for Kubernetes and Longhorn
I've added support for monitoring Kubernetes resources (CPU and memory) as well as service discovery. Storage metrics depend heavily on the underlying storage mechanism, so I've added support for Longhorn as well.
Seems awesome, thanks for all the work!
Now, how the hell can I test this out 😑? I have 0 Kubernetes experience (and don't have the time to learn RN).
Hopefully someone else can...
Edit: maybe https://labs.play-with-k8s.com or something like that?
It should be pretty easy to spin up minikube and deploy it there. I'll try to create some simple instructions. Homepage could also use a Helm chart. Perhaps that will be my next PR.
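For anyone who wants to try it without an existing cluster, a minimal local setup (assuming minikube and kubectl are already installed) is roughly:
# start a single-node local cluster and confirm the node is Ready
minikube start
kubectl get nodes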
To run Homepage in the cluster with discovery and metrics, you will need to define a ClusterRole with at least these permissions:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: homepage
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
      - pods
      - nodes
    verbs:
      - get
      - list
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
  - apiGroups:
      - metrics.k8s.io
    resources:
      - nodes
      - pods
    verbs:
      - get
      - list
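Note that the ClusterRole by itself isn't enough: it has to be bound to the ServiceAccount the Homepage pod runs as. A minimal sketch, assuming a ServiceAccount named homepage in the default namespace (adjust both to match your deployment):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: homepage
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: homepage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: homepage
subjects:
  - kind: ServiceAccount
    name: homepage
    namespace: default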
Note to self, ensure this gets added to the docs.
I've created a helm chart which should simplify deployment and testing on kubernetes: https://github.com/jameswynn/helm-charts/tree/main/charts/homepage
What do you think about the approach I've taken with the resources widget? I made the resources widget's backend configurable instead of making it an entirely separate widget like glances. Would the other approach be better?
This article explains how to quickly create a local Kubernetes cluster using Docker. If you follow these instructions you will have a cluster that can readily deploy the application. Once a cluster is ready, you can deploy Homepage using my helm chart and Helm v3:
helm repo add jameswynn-charts http://jameswynn.github.io/helm-charts
helm repo update
helm install homepage jameswynn-charts/homepage -f values.yaml
With a values.yaml file like this:
enableRbac: true
config:
  kubernetes:
    mode: cluster
  widgets:
    - resources:
        backend: kubernetes
        expanded: true
        cpu: true
        memory: true
ingress:
  main:
    enabled: true
    labels:
      homepage/enabled: "true"
    annotations:
      homepage/name: "Homepage"
      homepage/description: "A modern, secure, highly customizable application dashboard."
      homepage/group: "My Group"
      homepage/icon: "homepage.png"
    ingressClassName: "nginx"
    hosts:
      - paths:
          - path: /
            pathType: Prefix
If you followed that article then Homepage should be available at http://localhost:81/
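If your local cluster doesn't expose an ingress on that port, a port-forward is a quick way to verify the deployment; this assumes the chart created a Service named homepage and that the container listens on Homepage's default port 3000, so adjust the names if yours differ:
kubectl port-forward svc/homepage 3000:3000
# then open http://localhost:3000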
@jameswynn there are the following warnings when performing a pnpm install with your change.
WARN deprecated [email protected]: request has been deprecated, see https://github.com/request/request/issues/3142
WARN deprecated [email protected]: this library is no longer supported
WARN deprecated [email protected]: Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See https://v8.dev/blog/math-random for details.
Can you please resolve these warnings and update your PR? Thanks.
Just pushed more changes and merged in the most recent changes from the main branch. pnpm install doesn't show any issues on my machine.
Here is a preview of the current Kubernetes widgets. I've made it configurable to either show only the cluster, or all the individual nodes in the cluster. Showing CPU and memory separately was going to take far too much room so I consolidated them.
This is based on the following services.yaml snippet:
- kubernetes:
    cluster:
      show: true
      cpu: true
      memory: true
      showLabel: true
      label: "cluster"
    nodes:
      show: true
      cpu: true
      memory: true
      showLabel: true
I really love the names of the machines in your cluster. 😁
I'm able to get the image working, but neither service discovery nor the Kubernetes resources widget works for me, and both dump layout errors into the pod logs.
Could you post your configmap manifest?
@clbx did you use the helm chart I mentioned, or deploy it directly? I'll post a sample config map shortly.
This is an abbreviated version of the config that I am currently using. Obviously replace the URLs and ignore the ${SECRET_INTERNAL_DOMAIN} bit as that is for my Flux CD configuration. I think the combination of "app" and "namespace" to select applications is probably too simplistic.
apiVersion: v1
kind: ConfigMap
metadata:
  name: homepage
data:
  bookmarks.yaml: |
    ---
  docker.yaml: |
    ---
  kubernetes.yaml: |
    ---
    mode: cluster
  services.yaml: |
    ---
    - Development:
        - Gitea:
            icon: gitea.png
            href: https://gitea.${SECRET_INTERNAL_DOMAIN}
            description: Private Github Alternative
            app: gitea
            namespace: dev
        - Drone CI:
            icon: drone.png
            href: https://drone.${SECRET_INTERNAL_DOMAIN}
            description: CI/CD Platform
            app: drone
            namespace: dev
  settings.yaml: |
    ---
    providers:
      longhorn:
        url: https://longhorn.${SECRET_INTERNAL_DOMAIN}
  widgets.yaml: |
    ---
    - kubernetes:
        cluster:
          show: true
          cpu: true
          memory: true
          showLabel: true
          label: "cluster"
        nodes:
          show: true
          cpu: true
          memory: true
          showLabel: true
    - longhorn:
        expanded: true
        total: true
    - search:
        provider: duckduckgo
        target: _blank
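For anyone writing the manifests by hand rather than using the chart: the ConfigMap only takes effect if its keys are mounted into the container's config directory. A rough sketch of the relevant part of the Deployment pod spec, assuming the config directory is /app/config (the default in the Docker image) and the homepage ServiceAccount from earlier:
spec:
  serviceAccountName: homepage
  containers:
    - name: homepage
      image: ghcr.io/jameswynn/homepage:kubernetes  # or whatever image you built
      ports:
        - containerPort: 3000
      volumeMounts:
        - name: config
          mountPath: /app/config
  volumes:
    - name: config
      configMap:
        name: homepage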
So based on the responses, it looks like there is no automatic service discovery via something like ingress annotations yet. I will try to look at how it is done in e.g. hajimari and maybe take a stab at implementing it.
Actually that is already enabled, as per https://github.com/benphelps/homepage/pull/448#issuecomment-1305720937
You can either specify the entries directly in the services.yaml with app and name fields in order to just get status lookups, or you can annotate the ingress (similarly to hajimari) and have it do full discovery.
I am not using the helm chart, I wrote my manifests before discovering your fork. Thanks for that, I was writing my configurations incorrectly.
I'm updating my fork to line up with the most recent changes, including the new textual statuses.
I've changed the label prefix for autodetection via ingress from "homepage" to "gethomepage.dev", which is more in line with Kubernetes conventions. Annotations now look like this:
ingress:
  main:
    enabled: true
    labels:
      gethomepage.dev/enabled: "true"
    annotations:
      gethomepage.dev/name: "Homepage"
      gethomepage.dev/description: "A modern, secure, highly customizable application dashboard."
      gethomepage.dev/group: "My Group"
      gethomepage.dev/icon: "homepage.png"
    ingressClassName: "nginx"
    hosts:
      - paths:
          - path: /
            pathType: Prefix
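For reference, outside of Helm the same hints can be attached to a plain Ingress manifest; the name, host, backend service, and port below are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitea
  labels:
    gethomepage.dev/enabled: "true"
  annotations:
    gethomepage.dev/name: "Gitea"
    gethomepage.dev/description: "Private Github Alternative"
    gethomepage.dev/group: "Development"
    gethomepage.dev/icon: "gitea.png"
spec:
  ingressClassName: nginx
  rules:
    - host: gitea.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitea
                port:
                  number: 3000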
I've also added a new option to provide a podSelector, which takes precedence over the "app" field. This allows atypical/complex deployments to be better represented. For instance, Longhorn itself won't be fully captured without a podSelector.
Here is an example of a service entry with a podSelector which would show the status of all the components deployed under the gitea instance. In my case this would include the postgresql and memcached instances.
- Development:
    - Gitea:
        icon: gitea.png
        href: https://gitea.${SECRET_INTERNAL_DOMAIN}
        description: Private Github Alternative
        app: gitea
        namespace: dev
        podSelector: "app.kubernetes.io/instance=gitea"
I'm not sure what needs to be done for this "cannot build offline" issue. Suggestions?
Can’t tell if "Failed to resolve @kubernetes/client-node@>=0.17.1 <0.18.0-0 in package mirror" implies an issue there or it’s about running on buildx.
I was able to build it by removing the --offline flag from the Dockerfile. I'm not sure why we would or wouldn't want that, though.
I am happy to test it on my cluster. @jameswynn, do you have a published docker image with all the changes?
I agree, @jameswynn, if you could push an image that would be nice; for now I'm just running a docker build and pushing that. Using your exact configmap, neither service discovery nor the kubernetes widgets are working for me.

Not getting any errors in the logs now though.
Here is the image that I use, x86 only: ghcr.io/jameswynn/homepage:kubernetes
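If you are deploying with the Helm chart, you should be able to point it at that image by overriding the image values; the exact keys depend on the chart, but with the usual image.repository/image.tag convention the override would look roughly like:
image:
  repository: ghcr.io/jameswynn/homepage
  tag: kubernetes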
@clbx Did you configure the cluster role and service account?
Service auto-discovery, the cluster widgets, and the podSelector all work for me using that image. I'm not sure what was different about my earlier builds that caused the issues.


Awesome. I know you've put a lot of work into this, jameswynn. As I see it, the only barrier is maintenance, since that may largely fall to you, so I think the decision on this will have to be up to Ben (who, as you noticed, has been pretty occupied otherwise). Regardless, I appreciate the effort 👏
OK. In the meantime I will try to keep my repo, image, and chart up to date with mainline. If we merge the feature, we should also consider migrating the chart into this repo. I can definitely help with setting that up (it's pretty straightforward) as well as with expanding the documentation for this feature.
I'm playing around with this on minikube and getting a 500 in the browser (404 on the server). Any ideas what would cause that? RBAC is enabled and the service account exists. My config is just:
widgets:
  - resources:
      # change backend to 'kubernetes' to use Kubernetes integration. Requires RBAC.
      backend: kubernetes
      expanded: true
      cpu: true
      memory: true
  - search:
      provider: duckduckgo
      target: _blank
  - kubernetes:
      cluster:
        show: true
        cpu: true
        memory: true
        showLabel: true
        label: "cluster"
      nodes:
        show: true
        cpu: true
        memory: true
        showLabel: true
kubernetes:
  # change mode to 'cluster' to use RBAC service account
  mode: cluster
docker:
settings:
Oops turns out I needed metrics-server.
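For anyone else hitting this on minikube, the metrics API can be enabled with the bundled addon and verified before retrying the widgets:
minikube addons enable metrics-server
kubectl top nodes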