Kubernetes setup?
I understand there are Docker images of the latest release. Do you also have an example of how this can be run on a Kubernetes cluster?
hey @KlavsKlavsen We will prepare an example. I think we could share it by the end of next week. Sounds good?
Fantastic - and we'll gladly test and give feedback/recommendations :) We would be fine with just a Deployment, and we would prefer that no PVC is needed - i.e. WireGuard information should go in a Secret (if sensitive) and the rest in a ConfigMap. If we get it to work, we'll gladly write a Helm chart to make it easier to install, for you to update, and for users to make sure they are up to date - and I can show you how to set things up so your GitHub repo can also act as a Helm repo, if you like.
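For example, the sensitive part could look something like this (a sketch only - the Secret name is hypothetical and the key value is a placeholder):
apiVersion: v1
kind: Secret
metadata:
  name: netbird-setup-key                   # hypothetical name
type: Opaque
stringData:
  NB_SETUP_KEY: your-reusable-setup-key     # placeholder value
The client pod could then pull it in via envFrom with a secretRef, keeping non-sensitive settings in a ConfigMap.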
@braginini Is the Kubernetes example available (could you help with where to find it)?
hey @josepowera We don't have any Kubernetes examples yet; this is on our to-do list. What is your use case? We are happy to discuss it in Slack. FYI: NetBird can already run in Docker, see the docs.
Once I set up my multi-master environment I can provide a basic example for a Kubernetes setup. It'll just be a YAML file, no Helm charts or anything. Currently dealing with containerd issues on the latest version of Ubuntu and v1.23.4 of K8s.
In the meantime, if you can provide a docker-compose file with example variables for storage and env vars, I will convert it as soon as I'm able to test.
@Slyke did you manage to deal with it? Could you share your configuration for k8s?
I managed to get my K8s cluster back up and running. I haven't tried NetBird yet, but if you have a docker-compose file for it I can attempt it. I couldn't find one in the GitHub repo; it looks like it's generated on the fly.
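For reference, here is a minimal docker-compose sketch for the client, based on the docker run example in the docs (the setup key is a placeholder):
version: "3.8"
services:
  netbird:
    image: netbirdio/netbird:latest
    cap_add:
      - NET_ADMIN                                # needed to manage the wg interface
    environment:
      - NB_SETUP_KEY=your-reusable-setup-key     # placeholder
    volumes:
      - netbird-client:/etc/netbird              # persists client config/keys
volumes:
  netbird-client: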
For any who find this thread after me, this worked fine in k8s.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netbird
spec:
  selector:
    matchLabels:
      app: netbird
  replicas: 1
  template:
    metadata:
      labels:
        app: netbird
    spec:
      containers:
        - name: netbird
          image: netbirdio/netbird:latest
          env:
            - name: NB_SETUP_KEY
              value: fooo-reusable-key
          volumeMounts:
            - name: netbird-client
              mountPath: /etc/netbird
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          securityContext:
            privileged: true
            runAsUser: 0
            runAsGroup: 0
            capabilities:
              add:
                - NET_ADMIN
      volumes:
        - name: netbird-client
          emptyDir: {}
Hey @stobias123 you might want to update your YAML to this
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netbird
spec:
  selector:
    matchLabels:
      app: netbird
  replicas: 1
  template:
    metadata:
      labels:
        app: netbird
    spec:
      containers:
        - name: netbird
          image: netbirdio/netbird:0.12.0  # <--- Changed, current version.
          imagePullPolicy: IfNotPresent    # <--- Changed
          env:
            - name: NB_SETUP_KEY
              value: fooo-reusable-key
          volumeMounts:
            - name: netbird-client
              mountPath: /etc/netbird
          resources:
            requests:                      # <--- Changed
              memory: "128Mi"
              cpu: "500m"
          securityContext:
            privileged: true
            runAsUser: 0
            runAsGroup: 0
            capabilities:
              add:
                - NET_ADMIN
      volumes:
        - name: netbird-client
          emptyDir: {}
Unless you have a specific need, you should always use requests instead of limits.
And the reason for pinning a specific image version is that your setup won't break if the developers release a new version on Docker Hub that is not compatible with previous versions.
Is this about running a netbird client or server?
This is for the client, not the server.
Hi @braginini, is there any work to provide an official Helm chart (for the NetBird server) with some docs?
I may work on writing a Helm chart if any of the developers, or at least anyone who is a bit familiar with NetBird, can work with me on it - at least just to get the basics working; then I can continue alone. I should be able to provide it within a few days if anyone can support me. Anybody interested, please let me know.
I'd suggest a netbird-client and a netbird-server Helm chart. We only use netbird-client to connect k8s to our VPN network - the server is placed on a separate VM, as we need it to have access to recover the k8s clusters (we cannot recover a cluster if it runs the netbird server and we thus cannot access the internal network when it's down :) Others may have other use cases of course - but we'll gladly submit a netbird-client Helm chart, with a values.yaml sketched below, if this project wants to merge it.
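As a starting point for such a chart, a hypothetical values.yaml could expose just the essentials (everything here is made up for illustration, not an existing chart):
# values.yaml - hypothetical netbird-client chart
image:
  repository: netbirdio/netbird
  tag: "0.12.0"        # pin a version, per the advice above
setupKey:
  existingSecret: ""   # name of a pre-created Secret holding NB_SETUP_KEY
managementUrl: ""      # optional: self-hosted management endpoint
resources:
  requests:
    memory: 128Mi
    cpu: 500m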
Why do all examples here run the container as privileged?
Building wg interfaces and mutating the kernel routing table generally requires root permissions, and this deployment is ultimately building an interface on the host machine. The issue here is not simply getting it running, but having it provide more use than just a connection - and what the end goal is, which would vary based on your CNI configuration. For a server implementation, I would suggest compiling a list of supported CNI providers and building default functionality, such as IP forwarding and advertisement of the Pod IP pools or Service CIDR, etc. To be honest, NetBird has 90% of the functionality a CNI provides, if you don't mind the cryptographic overhead between K8s nodes.
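If you want to avoid full privileged mode, it may be worth testing whether the explicit capabilities alone are enough on your kernel/CNI - an untested sketch (NetBird may still require privileged on some setups):
securityContext:
  privileged: false   # try dropping full privileges first
  runAsUser: 0        # root is still needed for netlink/route changes
  runAsGroup: 0
  capabilities:
    add:
      - NET_ADMIN     # create the wg interface and mutate routes
      - SYS_ADMIN     # assumption: only if sysctl writes are needed; drop if not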
This is the YAML file I'm using for the server.
A limitation is that k8s cannot expose a range of ports, so the coturn server has to use the host network, and you had better set the IP address for use in turnserver.conf. Other than that, it worked very well for me with Traefik and Zitadel.
You basically only need one folder and two config files to run the NetBird server: an empty folder or PVC for persistent data storage, plus management.json and turnserver.conf - you can find those in /infrastructure_files.
I recommend un-commenting "no-tcp" in turnserver.conf (line 388). I don't recommend running the clients on k8s, because the k8s cluster network is largely undefended; opening a portal into it may not be ideal.
netbird.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netbird
  labels:
    app: netbird
    app.kubernetes.io/name: netbird
spec:
  selector:
    matchLabels:
      app: netbird
  replicas: 1
  template:
    metadata:
      labels:
        app: netbird
    spec:
      containers:
        - name: netbird-front
          image: docker.io/wiretrustee/dashboard:latest
          imagePullPolicy: "Always"
          ports:
            - containerPort: 80
              name: front-http
          envFrom:
            - configMapRef:
                name: netbird
          livenessProbe:
            tcpSocket:
              port: front-http
            initialDelaySeconds: 20
            periodSeconds: 15
            timeoutSeconds: 5
            failureThreshold: 5
        - name: netbird-back
          image: docker.io/netbirdio/management:latest
          imagePullPolicy: "Always"
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000
          ports:
            - containerPort: 8180
              name: back-http
          args: [
            "--port", "8180",
            "--log-file", "console",
            "--disable-anonymous-metrics=false",
            "--single-account-mode-domain=example.netbird",
            "--dns-domain=example.netbird"
          ]
          volumeMounts:
            - name: conf
              mountPath: /etc/netbird/management.json
            - name: data
              mountPath: /var/lib/netbird
          livenessProbe:
            tcpSocket:
              port: back-http
            initialDelaySeconds: 20
            periodSeconds: 15
            timeoutSeconds: 5
            failureThreshold: 5
      enableServiceLinks: false
      volumes:
        - name: conf
          hostPath:
            path: /srv/containers/netbird/management.json
            type: File
        - name: data
          hostPath:
            path: /srv/containers/netbird/data
            type: DirectoryOrCreate
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: netbird
data:
  NGINX_SSL_PORT: "8043"
  NETBIRD_MGMT_API_ENDPOINT: "https://vpn.example.com"
  NETBIRD_MGMT_GRPC_API_ENDPOINT: "https://vpn.example.com"
  AUTH_AUDIENCE: "*********@netbird"
  AUTH_CLIENT_ID: "*********@netbird"
  AUTH_CLIENT_SECRET:
  AUTH_AUTHORITY: "https://login.example.com"
  USE_AUTH0: "false"
  AUTH_SUPPORTED_SCOPES: "openid profile email offline_access api"
  AUTH_REDIRECT_URI: "/auth"
  AUTH_SILENT_REDIRECT_URI: "/silent-auth"
  NETBIRD_TOKEN_SOURCE: "accessToken"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netbird-signal
  labels:
    app: netbird-signal
    app.kubernetes.io/name: netbird-signal
spec:
  selector:
    matchLabels:
      app: netbird-signal
  replicas: 1
  template:
    metadata:
      labels:
        app: netbird-signal
    spec:
      containers:
        - name: signal
          image: docker.io/netbirdio/signal:latest
          imagePullPolicy: "Always"
          ports:
            - containerPort: 80
              name: signal-http
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000
          args: ["--log-file", "console"]
          volumeMounts:
            - name: data
              mountPath: /var/lib/netbird
      enableServiceLinks: false
      volumes:
        - name: data
          hostPath:
            path: /srv/containers/netbird/signal
            type: DirectoryOrCreate
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netbird-coturn
  labels:
    app: netbird-coturn
    app.kubernetes.io/name: netbird-coturn
spec:
  selector:
    matchLabels:
      app: netbird-coturn
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: netbird-coturn
    spec:
      hostNetwork: true
      containers:
        - name: coturn
          image: docker.io/coturn/coturn
          imagePullPolicy: "Always"
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000
          args:
            - "-c"
            - "/etc/turnserver.conf"
            - "--log-file=stdout"
          volumeMounts:
            - name: conf
              mountPath: /etc/turnserver.conf
              readOnly: true
      enableServiceLinks: false
      volumes:
        - name: conf
          hostPath:
            path: /srv/containers/netbird/turnserver.conf
            type: FileOrCreate
---
apiVersion: v1
kind: Service
metadata:
  name: netbird
spec:
  selector:
    app: netbird
  ports:
    - name: front-http
      port: 80
      targetPort: front-http
      protocol: TCP
    - name: back-http
      port: 8080
      targetPort: back-http
      protocol: TCP
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: netbird-signal
spec:
  selector:
    app: netbird-signal
  ports:
    - name: signal-http
      port: 80
      targetPort: signal-http
      protocol: TCP
  type: ClusterIP
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: netbird
  labels:
    app: netbird
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`vpn.example.com`) && PathPrefix(`/`)
      kind: Rule
      middlewares:
        - name: hsts-header
      services:
        - name: netbird
          port: front-http
          scheme: http
    - match: Host(`vpn.example.com`) && PathPrefix(`/signalexchange.SignalExchange/`)
      kind: Rule
      middlewares:
        - name: hsts-header
      services:
        - name: netbird-signal
          port: signal-http
          scheme: h2c
    - match: Host(`vpn.example.com`) && PathPrefix(`/api`)
      kind: Rule
      middlewares:
        - name: hsts-header
      services:
        - name: netbird
          port: back-http
          scheme: http
    - match: Host(`vpn.example.com`) && PathPrefix(`/management.ManagementService/`)
      kind: Rule
      middlewares:
        - name: hsts-header
      services:
        - name: netbird
          port: back-http
          scheme: h2c
  tls:
    secretName: vpn.example.com
I assume an operator would be needed to do any kind of HA easily.
An operator won't work, as you'll need a pod on each node in the cluster if you want to connect it to the WireGuard VPN. So either a CNI extension or a DaemonSet, I'd say - a sketch follows below.
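A DaemonSet variant of the client Deployment above might look like this (untested sketch; the setup key is still a placeholder, and hostNetwork is an assumption depending on where you want the wg interface to live):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: netbird
spec:
  selector:
    matchLabels:
      app: netbird
  template:
    metadata:
      labels:
        app: netbird
    spec:
      hostNetwork: true                        # assumption: put the wg interface on the node itself
      containers:
        - name: netbird
          image: netbirdio/netbird:0.12.0
          env:
            - name: NB_SETUP_KEY
              value: fooo-reusable-key         # placeholder, as in the examples above
          securityContext:
            privileged: true
            capabilities:
              add:
                - NET_ADMIN
          volumeMounts:
            - name: netbird-client
              mountPath: /etc/netbird
      volumes:
        - name: netbird-client
          emptyDir: {}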
Hi! I would also be happy to get some k8s-native way of installation. Everybody will benefit from it. Why is it important to me? Because we don't want to have dedicated EC2 instances, but rather put all compute into a large k8s cluster and get a unified approach to infra management - and yes, in that case I will be able to pin the netbird workload to a particular node with a static IP if necessary.
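Pinning to a particular node can be done with a nodeSelector in the pod template - a sketch using the default hostname label (the node name is hypothetical):
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: node-with-static-ip   # hypothetical node name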