
Workaround for External-ip & Storage class

Open fserve opened this issue 2 years ago • 9 comments

I'm very new to Kubernetes, so I'm using this project to learn, and I was able to get a storage class and an external IP working with these two projects (Longhorn and MetalLB). They are very easy to deploy and seem to be well known in the community.

The only part that took me some effort to understand was configuring the MetalLB address pool with the local IP.

fserve avatar Feb 06 '22 18:02 fserve

Oh nice!

I think I'd love to add a bit of explanation; just to make sure I understand:

  • Longhorn gives us a storage class, which means we can create persistent volumes (is that correct?)
  • would you have a short example of what we can do with that and MetalLB?

Thank you!!!

jpetazzo avatar Feb 07 '22 17:02 jpetazzo

Sure!

Yes, your explanation of Longhorn is correct. As for MetalLB, I can use it to reach the load balancer without having to go through a NodePort.

So here is my config for MetalLB and for Longhorn. (The claim is named wp-pv-claim because I'm testing WordPress using this example, only changed to use Longhorn.)

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.0.11/32
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: longhorn
  resources:
    requests:
      storage: 20Gi
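To sanity-check the config above, a couple of verification commands may help (a sketch, assuming kubectl is pointed at the cluster and the resource names from the manifests above):

```shell
# Check that Longhorn bound the claim: STATUS should read "Bound"
kubectl get pvc wp-pv-claim

# List Services cluster-wide: any type=LoadBalancer Service should show
# an address from the MetalLB pool under EXTERNAL-IP instead of <pending>
kubectl get svc --all-namespaces
```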

That's my config, and it seems to be working perfectly. Next I'm studying cert-manager to issue Let's Encrypt SSL certificates automatically.

fserve avatar Feb 07 '22 21:02 fserve

This is really fantastic, I'm testing it. Just one detail: we're getting a warning about a deprecated resource:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
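(Editor's note, not from the original thread: that warning comes from the PodSecurityPolicy shipped in the v0.11.0 manifests. MetalLB v0.13+ dropped both the PodSecurityPolicy and the ConfigMap-based configuration in favor of CRDs. A sketch of a roughly equivalent pool under the newer API, assuming the same 10.0.0.11/32 address:)

```yaml
# MetalLB v0.13+ configuration; these CRDs replace the old "config" ConfigMap.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 10.0.0.11/32
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
```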

jeanclei avatar Feb 08 '22 12:02 jeanclei

Hello, I have completed the MetalLB example: https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal-clusters

I needed to apply cert-manager, ingress-nginx, and MetalLB to the cluster:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/baremetal/deploy.yaml

At the end of this example you should have access to "https://mydomain.com" with a Let's Encrypt SSL certificate registered to "[email protected]".

Apply each file with kubectl apply -f filename.yaml

config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.0.0.11/32 # I don't know if this will be the same IP for everyone.
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    email: [email protected]
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
---
apiVersion: v1 # I'm not sure this whole section is done the best way.
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster # for now I don't know how to make it work with Local.
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer # re-applying this ingress-nginx Service is important just to change this field
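After applying config.yaml, a couple of checks can confirm it took effect (a sketch, assuming the default ingress-nginx and cert-manager namespaces from the manifests above):

```shell
# EXTERNAL-IP should show an address from the MetalLB pool instead of <pending>
kubectl get svc -n ingress-nginx ingress-nginx-controller

# Confirm the Let's Encrypt ClusterIssuer exists and is Ready
kubectl get clusterissuer letsencrypt-prod
```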

deployment.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: mynamespace # this isn't strictly needed
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: mynamespace
spec:
  type: ClusterIP # the backend Service needs to be ClusterIP
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
  selector:
    app: nginx-pod
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "session"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    #cert-manager.io/issue-temporary-certificate: "true" # temporary cert required for nginx to be able to reload
    acme.cert-manager.io/http01-edit-in-place: "true" # important to merge with existing ingress resource into a single nginx config file
    #ingress.kubernetes.io/ssl-redirect: "false" # avoid http > https redirect ( acme-challenge was still successful even with the redirect enabled )
spec:
  tls:
  - hosts:
      - mydomain.com
    secretName: mydomain-cert
  defaultBackend:
    service:
      name: nginx-svc
      port:
        number: 80
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc
            port:
              number: 80
status: # note: status is normally set by the controller, so this block can be omitted
  loadBalancer:
    ingress:
    - ip: 10.0.0.13 # I don't know if this will be the same IP for everyone.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: mynamespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.14.2
        ports:
        - name: http
          containerPort: 80
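Once deployment.yaml is applied and DNS for mydomain.com points at the external IP, a couple of checks (using the hypothetical names from the manifests above) should confirm the certificate was issued:

```shell
# READY should become True once the ACME HTTP-01 challenge completes
kubectl get certificate -n mynamespace mydomain-cert

# The site should answer over HTTPS with the Let's Encrypt certificate
curl -sSI https://mydomain.com
```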

fserve avatar Feb 10 '22 14:02 fserve

Hi @fserve! Thank you for this recipe. Does it create a load balancer in OCI?

leoribeiro2 avatar Feb 13 '22 01:02 leoribeiro2

I used this solution, but with this detail in the last lines to expose the ingress controller on the external IP of the master node:

apiVersion: v1 # I'm not sure this whole section is done the best way.
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster # for now I don't know how to make it work with Local.
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer # re-applying this ingress-nginx Service is important just to change this field
  externalIPs:
  - 150.230.69.221

jeanclei avatar Feb 13 '22 12:02 jeanclei

Hello, I opened a new pull request containing changes to the cloud-init so that when the cluster is deployed, it already contains all the ingress controller and Longhorn configuration suggested by @fserve.

From my tests it worked perfectly; I was able to expose the ingress controller directly on the master node's public IP! :)

jeanclei avatar Feb 13 '22 16:02 jeanclei

Hi @fserve! Thank you for this recipe. Does it create a load balancer in OCI?

No, it will not use an OCI load balancer!

But yes, it will load-balance incoming traffic to your nodes through MetalLB.

fserve avatar Feb 13 '22 16:02 fserve

Hello, I opened a new pull request containing changes to the cloud-init so that when the cluster is deployed, it already contains all the ingress controller and Longhorn configuration suggested by @fserve.

From my tests it worked perfectly; I was able to expose the ingress controller directly on the master node's public IP! :)

Cool, thank you!

fserve avatar Feb 13 '22 16:02 fserve