kubespray
Wrong Jinja template for Hubble deploy
When I enable Hubble, the template generates a wrong hubble-deploy.yml file: there are problems with spaces after the if/else conditions. Here is an example with TLS enabled; the restartPolicy line has extra leading spaces. You can reproduce this with or without TLS. Jinja 3.1.2.
        - mountPath: /var/lib/hubble-relay/tls
          name: tls
          readOnly: true
        restartPolicy: Always
      serviceAccount: hubble-relay
      serviceAccountName: hubble-relay
      terminationGracePeriodSeconds: 0
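For context, the extra indentation comes from Jinja2 whitespace control: when a block tag is indented to match the YAML and carries a trailing - marker, the tag line's own indentation is emitted while the - eats the newline and the real indentation of the next line. Below is a minimal sketch in plain jinja2 that reproduces the symptom; it is a hypothetical reconstruction (the variable name tls is a stand-in, and trim_blocks=True only mirrors the default of Ansible's template module), not the actual kubespray template:

# Minimal reproduction of the over-indentation, assuming the original
# template used indented block tags with trailing "-" markers.
from jinja2 import Environment

broken = (
    "          readOnly: true\n"
    "        {% if tls -%}\n"    # indented tag with a trailing "-" marker
    "        - mountPath: /var/lib/hubble-relay/tls\n"
    "          name: tls\n"
    "        {% endif -%}\n"
    "      restartPolicy: Always\n"
)

# trim_blocks=True mirrors ansible.builtin.template's default; lstrip_blocks
# keeps its default (False), so the 8 spaces in front of "{% endif -%}" are
# emitted and the "-" then swallows the newline plus the 6 spaces that should
# have indented restartPolicy.
env = Environment(trim_blocks=True)
print(env.from_string(broken).render(tls=True))
# restartPolicy comes out indented by 8 spaces instead of 6; rendering with
# tls=False shows the same misplaced indentation.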
Here is my current template hubble/deploy.yml.j2, which generates correct YAML. I added #jinja2: lstrip_blocks: True as the first line and deleted the - whitespace-control markers from the if/else blocks:
#jinja2: lstrip_blocks: True
---
# Source: cilium/templates/hubble-relay-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hubble-relay
  labels:
    k8s-app: hubble-relay
  namespace: kube-system
....
{% if cilium_hubble_tls_generate %}
        - mountPath: /var/lib/hubble-relay/tls
          name: tls
          readOnly: true
{% endif %}
      restartPolicy: Always
....
{% if cilium_hubble_tls_generate %}
      - projected:
          sources:
          - secret:
              name: hubble-relay-client-certs
              items:
                - key: ca.crt
                  path: hubble-server-ca.crt
                - key: tls.crt
                  path: client.crt
                - key: tls.key
                  path: client.key
          - secret:
              name: hubble-server-certs
              items:
                - key: tls.crt
                  path: server.crt
                - key: tls.key
                  path: server.key
        name: tls
{% endif %}
....
{% if cilium_hubble_tls_generate %}
        - name: TLS_TO_RELAY_ENABLED
          value: "true"
        - name: FLOWS_API_ADDR
          value: "hubble-relay:443"
        - name: TLS_RELAY_SERVER_NAME
          value: ui.{{ cilium_cluster_name }}.hubble-grpc.cilium.io
        - name: TLS_RELAY_CA_CERT_FILES
          value: /var/lib/hubble-ui/certs/hubble-server-ca.crt
        - name: TLS_RELAY_CLIENT_CERT_FILE
          value: /var/lib/hubble-ui/certs/client.crt
        - name: TLS_RELAY_CLIENT_KEY_FILE
          value: /var/lib/hubble-ui/certs/client.key
{% else %}
        - name: FLOWS_API_ADDR
          value: "hubble-relay:80"
{% endif %}
....
Maybe there is another way to generate the YAML without extra spaces, but I didn't find one.
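For what it's worth, a quick check outside Ansible suggests that the #jinja2: lstrip_blocks: True header together with plain {% if %}/{% endif %} tags (no - markers) is enough, and that lstrip_blocks even lets the tags be indented to match the YAML without breaking the output. A hedged sketch in plain jinja2 (variable names are stand-ins; trim_blocks=True mirrors the template module's default):

# Sketch of the fixed behaviour: lstrip_blocks strips the indentation in
# front of block tags and trim_blocks drops the newline after them, so no
# "-" markers are needed and the YAML keeps its own indentation.
from jinja2 import Environment

fixed = (
    "          readOnly: true\n"
    "        {% if cilium_hubble_tls_generate %}\n"    # the tag may be indented
    "        - mountPath: /var/lib/hubble-relay/tls\n"
    "          name: tls\n"
    "          readOnly: true\n"
    "        {% endif %}\n"
    "      restartPolicy: Always\n"
)

# lstrip_blocks=True is what the "#jinja2: lstrip_blocks: True" header line
# enables for that one template file; trim_blocks=True is Ansible's default.
env = Environment(trim_blocks=True, lstrip_blocks=True)
print(env.from_string(fixed).render(cilium_hubble_tls_generate=True))
# restartPolicy keeps its original 6-space indentation whether the condition
# is true or false.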
ansible [core 2.14.11]
config file = /home/anutator/.ansible.cfg
configured module search path = ['/home/anutator/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/anutator/.local/lib/python3.11/site-packages/ansible
ansible collection location = /home/anutator/.ansible/collections:/usr/share/ansible/collections
executable location = /home/anutator/.local/bin/ansible
python version = 3.11.2 (main, Oct 13 2023, 04:53:37) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18.0.6)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
$ git rev-parse --short HEAD
e1558d2
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@anutator care to make a pull request for this?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.