
[Feature] optional network policy for the operator

Open jcpunk opened this issue 2 years ago • 5 comments

The Prometheus node-exporter includes an optional default network policy in its Helm chart.

It would be nice if a policy that permits only the required access to the operator could be optionally enabled. https://cloudnative-pg.io/documentation/1.19/security/#exposed-ports

This request specifically ignores any Clusters created by the operator.
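For comparison, node-exporter gates its policy behind a values flag; a similar opt-in toggle for this chart might look like the following (the key names here are illustrative, not the chart's actual API):

```yaml
# Hypothetical values.yaml keys -- names are a sketch, not the chart's real schema
networkPolicy:
  # Disabled by default so existing installs are unaffected
  enabled: false
```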

For egress perhaps something like:

  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
    - protocol: TCP
      port: 443

jcpunk avatar Apr 26 '23 15:04 jcpunk

Yes indeed, having an optional network policy would be helpful. I want to implement this as well; are PRs welcome in this repo? If so, maybe I can give it a try.

winston0410 avatar Jun 11 '23 19:06 winston0410

@winston0410 sure, PRs are definitely welcome! We try to keep the chart as lean as possible, so unfortunately we sometimes have to reject PRs, but this one feels like a totally valid addition!

phisco avatar Jun 15 '23 06:06 phisco

Sure, this is my first attempt:

https://editor.networkpolicy.io/?id=c0jMGv4TmUc9l0hV

Not 100% sure about the egress; it would be great if someone who knows the project better could help.

winston0410 avatar Jun 15 '23 07:06 winston0410

I have these egress rules:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-cloudnative-pg-policies
  namespace: cloudnative-pg
spec:
  podSelector:
    matchLabels:
      app: cloudnative-pg-operator
  policyTypes:
  - Egress
  egress:
  - # k8s' coreDNS
    to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53

  - # k8s API server (leader-election leases); the port may differ by distribution
    ports:
    - protocol: TCP
      port: 6443

  - # namespace database
    to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          cnpg.io/podRole: instance
    ports:
    - protocol: TCP
      port: 5432 # Postgres
    - protocol: TCP
      port: 8000 # Status

And these for Ingress including metrics collection:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-cloudnative-pg-policies
  namespace: cloudnative-pg
spec:
  podSelector:
    matchLabels:
      app: cloudnative-pg-operator
  policyTypes:
  - Ingress
  ingress:
  - # CnPG webhook server
    ports:
    - protocol: TCP
      port: 9443

  - # VMagent for metrics scraping
    from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
      podSelector:
        matchLabels:
          app: vmagent
    ports:
    - protocol: TCP
      port: 8080

sando38 avatar Mar 02 '24 22:03 sando38

I do not use a pgbouncer/Pooler, hence I am not sure whether the above will block connections to one ;)

sando38 avatar Mar 02 '24 22:03 sando38
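If the policies above were folded into the chart, the manifest could be gated on an opt-in value. A minimal sketch, assuming a hypothetical `networkPolicy.enabled` key and the usual Helm helper names (`fullname`/`selectorLabels` includes are assumptions, not the chart's confirmed helpers):

```yaml
# templates/networkpolicy.yaml -- illustrative sketch only
{{- if .Values.networkPolicy.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ include "cloudnative-pg.fullname" . }}
  namespace: {{ .Release.Namespace }}
spec:
  podSelector:
    matchLabels:
      {{- include "cloudnative-pg.selectorLabels" . | nindent 6 }}
  policyTypes:
  - Ingress
  - Egress
  # ingress/egress rules along the lines of the earlier comments would go here
{{- end }}
```

Keeping the feature behind a default-off flag means existing installs see no change unless they explicitly enable it.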