awx-operator
Error Fresh Install On AWS Using Load Balancer
Greetings,
When accessing AWX behind an AWS Load Balancer, the following error is displayed:
I am using the following config:
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  service_type: LoadBalancer
  hostname: *********.com
  ingress_annotations: |
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb
  ingress_type: Ingress
  ingress_path: "/*"
  loadbalancer_port: 8080
  loadbalancer_protocol: http
  route_tls_termination_mechanism: Edge
  projects_persistence: true
  projects_storage_access_mode: ReadWriteOnce
  web_extra_volume_mounts: |
    - name: static-data
      mountPath: /var/lib/projects
  extra_volumes: |
    - name: static-data
      persistentVolumeClaim:
        claimName: static-data-pvc
  web_resource_requirements: {}
  task_resource_requirements: {}
  ee_resource_requirements: {}
The host name in the console error is not the host name in the config, FWIW.
Additionally, port 8013 appears nowhere in the config - it seems to be pulled from somewhere else entirely.
I was able to get this working - here is our working config. Some extra fields were required for us, but take note of the Service Settings and Ingress Settings.
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  image_version: "#{AWX.Version}"
  # User Settings
  admin_user: admin-user
  admin_email: [email protected]
  admin_password_secret: awx-admin-password
  # Service Settings
  service_type: NodePort
  # Ingress Settings
  ingress_type: ingress # none, ingress, or route
  hostname: mydomain.example.com
  ingress_path: /
  ingress_path_type: Prefix
  ingress_annotations: |
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/certificate-arn: arn::mycertarn-12345-abcd
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http.drop_invalid_header_fields.enabled=true
  loadbalancer_port: 443
  loadbalancer_protocol: https
  # use for external postgres - Secret for external postgres
  postgres_configuration_secret: awx-postgres-configuration
  # Redis Settings
  redis_capabilities:
    - CHOWN
    - SETUID
    - SETGID
  # Web Pod Limits
  web_resource_requirements:
    requests:
      cpu: 1000m
      memory: 2Gi
    limits:
      cpu: 2000m
      memory: 4Gi
  task_resource_requirements:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1000m
      memory: 2Gi
  ee_resource_requirements:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1000m
      memory: 2Gi
  # Persistent Projects Directory
  projects_persistence: true
  projects_storage_access_mode: ReadWriteOnce
---
apiVersion: v1
kind: Secret
metadata:
  name: awx-admin-password
  namespace: awx
stringData:
  password: admin
---
# Secret for when using external postgres
apiVersion: v1
kind: Secret
metadata:
  name: awx-postgres-configuration
  namespace: awx
stringData:
  host: "#{Postgres.Host}"
  port: "#{Postgres.Port}"
  database: "#{Postgres.DB}"
  username: "#{Postgres.User}"
  password: "#{Postgres.Pass}"
  sslmode: prefer
  type: unmanaged
type: Opaque
From the annotations, I assume you're using the aws-load-balancer-controller? On my deployments I've just used the following snippet:
spec:
  ingress_type: ingress
  hostname: awx.example.com
  ingress_annotations: |
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/target-type: "ip"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/group.name: "default-alb"
    alb.ingress.kubernetes.io/ssl-policy: "ELBSecurityPolicy-FS-1-2-Res-2020-10"
    alb.ingress.kubernetes.io/load-balancer-attributes: "routing.http.x_amzn_tls_version_and_cipher_suite.enabled=true"
I didn't need to change the service_type, and with an external DB I just create the k8s secret following the standard naming schema of <instance name>-postgres-configuration, so I didn't need to bother with setting postgres_configuration_secret either.
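As a minimal sketch of that naming convention (the instance name "awx" and all connection values below are assumptions), the operator picks up a secret like this on its own, without postgres_configuration_secret being set in the spec:

apiVersion: v1
kind: Secret
metadata:
  name: awx-postgres-configuration  # <instance name>-postgres-configuration
  namespace: awx
stringData:
  host: postgres.example.com
  port: "5432"
  database: awx
  username: awx
  password: changeme
  sslmode: prefer
  type: unmanaged
type: Opaque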
With the service account set up properly for aws-load-balancer-controller, I didn't even need to specify the certificate-arn annotation, as the controller looks through the available certificates to find one matching the hostname. I use the group.name annotation to allow multiple pods to use the same ALB rather than creating multiples, along with using the most restrictive SSL policy. Using ssl-redirect means you don't need actions.ssl-redirect, which I find much simpler and less prone to error.
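For comparison, a hedged sketch of the two redirect styles (values are illustrative, taken from the configs above):

# Simpler: a single annotation handled natively by the controller
alb.ingress.kubernetes.io/ssl-redirect: "443"

# Older actions-based style, which additionally requires the "ssl-redirect"
# action to be wired into the Ingress rules as a backend
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'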
I also added external-dns to the mix for extra measure, to automatically upsert the Route53 record pointing to the ALB.
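If it helps, external-dns usually needs no extra configuration for this: by default it watches Ingress resources and creates the Route53 record from the host field. The explicit form (the hostname value here is an assumption) would be an annotation like:

external-dns.alpha.kubernetes.io/hostname: awx.example.com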
This is correct - we are using aws-load-balancer-controller, and we also had external-dns doing the Route53 upserts. I left the external Postgres and related settings in just to show what we are running as a production AWX.
I am using the following config on operator version 2.5.2; while the NodePort service is created, I get NO ALB created. Is there anywhere I can get debug logs on the requests the operator is making to AWS, other than the events? Perhaps there is something wrong with my permission setup - any help here would be great.
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
  namespace: awx
spec:
  image_version: "#{AWX.Version}"
  # User Settings
  admin_user: awx-admin-username
  admin_email: [email protected]
  admin_password_secret: awx-admin-password
  # Service Settings
  service_type: NodePort
  # Ingress Settings
  ingress_type: ingress # none, ingress, or route
  hostname: awx.example.com
  ingress_path: /
  ingress_path_type: Prefix
  ingress_annotations: |
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/target-type: "ip"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/group.name: "default-alb"
    alb.ingress.kubernetes.io/ssl-policy: "ELBSecurityPolicy-FS-1-2-Res-2020-10"
    alb.ingress.kubernetes.io/load-balancer-attributes: "routing.http.x_amzn_tls_version_and_cipher_suite.enabled=true"
  # use for external postgres - Secret for external postgres
  postgres_configuration_secret: awx-postgres-configuration
  # Redis Settings
  redis_capabilities:
    - CHOWN
    - SETUID
    - SETGID
  # Web Pod Limits
  web_resource_requirements:
    requests:
      cpu: 1000m
      memory: 2Gi
    limits:
      cpu: 2000m
      memory: 4Gi
  task_resource_requirements:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1000m
      memory: 2Gi
  ee_resource_requirements:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1000m
      memory: 2Gi
  # extra volume mounts
  task_extra_volume_mounts: |
    - name: krb5
      mountPath: /etc/krb5.conf
      subPath: krb5.conf
  ee_extra_volume_mounts: |
    - name: krb5
      mountPath: /etc/krb5.conf
      subPath: krb5.conf
  extra_volumes: |
    - name: krb5
      configMap:
        defaultMode: 420
        items:
          - key: krb5.conf
            path: krb5.conf
        name: krb5.conf
  # Persistent Projects Directory
  projects_persistence: true
  projects_existing_claim: awx-projects-claim
  projects_storage_access_mode: ReadWriteOnce
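For the debug-log question: the AWX operator itself never talks to AWS - it only creates the Ingress object, and it is the aws-load-balancer-controller that reconciles that Ingress into an ALB. Permission problems therefore surface in the controller's logs, not the operator's. A hedged starting point (the deployment name assumes a default Helm install in kube-system, and the Ingress name assumes the operator's <instance name>-ingress convention):

kubectl -n awx describe ingress awx-ingress
kubectl -n kube-system logs deployment/aws-load-balancer-controller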
I am trying to deploy awx-operator on EKS. Do I need to write YAML files to deploy the service account and ALB controller for this, or will this code create those? Please help, I am a newbie.
BTW, I am trying to deploy awx-operator using Flux and Kustomize. I created a base folder where I specified the HelmRelease file with namespace 'x', and in overlays I copy-pasted the same with namespace 'dev'. So now two AWX operators will be created, in 'x' and 'dev'? What is the ideal way to use base and overlays, if not different namespaces?