
[cetic/nifi] web UI not loading

Open yossisht9876 opened this issue 2 years ago • 30 comments

Describe the bug: trying to run NiFi on EKS 1.19. All the pods are running and I can see in the logs that the server is up and running. I'm using NGINX with an AWS internal load balancer. The web UI is served over HTTPS, so the URL is https://nifi.xxx.xx.com

Version of Helm and Kubernetes: Helm 3, EKS 1.19

What happened: the web UI is not loading and I can't override the nifi.properties file via the values.yaml file.

On the web UI we get:

System Error The request contained an invalid host header [nifixxx.xxx.xx.co] in the request [/]. Check for request manipulation or third-party intercept. Valid host headers are [empty] or:

127.0.0.1
127.0.0.1:8443
localhost
localhost:8443
[::1]
[::1]:8443
nifi-helm-2.nifi-helm-headless.xxx.xxx..xx
nifi-helm-2.nifi-helm-headless.nXX.XXX.XX:8443
0.0.0.0
0.0.0.0:8443
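(For context: NiFi builds this allowlist from its own nifi.properties; additional host headers such as an ingress hostname are only accepted if they appear in the nifi.web.proxy.host property. A sketch of that property, with placeholder hostnames:)

```
nifi.web.proxy.host=nifi.xxx.xx.com,nifi.xxx.xx.com:443
```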

What you expected to happen: I expected the web UI to load.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know: my values.yaml:

```yaml
---
# Number of nifi nodes
replicaCount: 3

## Set default image, imageTag, and imagePullPolicy.
## ref: https://hub.docker.com/r/apache/nifi/
##
image:
  repository: apache/nifi
  tag: "1.14.0"
  pullPolicy: IfNotPresent

  ## Optionally specify an imagePullSecret.
  ## Secret must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecret: myRegistryKeySecretName

securityContext:
  runAsUser: 1000
  fsGroup: 1000

## @param useHostNetwork - boolean - optional
## Bind ports on the hostNetwork. Useful for CNI networking where hostPort might
## not be supported. The ports need to be available on all hosts. It can be
## used for custom metrics instead of a service endpoint.
##
## WARNING: Make sure that hosts using this are properly firewalled otherwise
## metrics and traces are accepted from any host able to connect to this host.
#

sts:
  # Parallel podManagementPolicy for faster bootstrap and teardown. Default is OrderedReady.
  podManagementPolicy: Parallel
  AntiAffinity: soft
  useHostNetwork: null
  hostPort: null
  pod:
    annotations:
      security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
      #prometheus.io/scrape: "true"
  serviceAccount:
    create: true
    name: nifi-cluster
    annotations: {}
  hostAliases: []
#    - ip: "1.2.3.4"
#      hostnames:
#        - example.com
#        - example

## Useful if using any custom secrets
## Pass in some secrets to use (if required)
# secrets:
# - name: myNifiSecret
#   keys:
#     - key1
#     - key2
#   mountPath: /opt/nifi/secret

## Useful if using any custom configmaps
## Pass in some configmaps to use (if required)
# configmaps:
#   - name: myNifiConf
#     keys:
#       - myconf.conf
#     mountPath: /opt/nifi/custom-config


properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  sensitiveKey: changeMechangeMe # Must be at least 12 characters long
  algorithm: NIFI_PBKDF2_AES_GCM_256
  externalSecure: false
  isNode: false
  httpsPort: 8443
  httpPort: 8080
  httpHost: nifi-cluster.xxx.xxx.com
  webHttpsHost: nifi-cluster.xxx.xxx.com
  webProxyHost: # <clusterIP>:<NodePort> (If Nifi service is NodePort or LoadBalancer)
  clusterPort: 6007
  provenanceStorage: "8 GB"
  siteToSite:
    port: 10000
  # use properties.safetyValve to pass explicit 'key: value' pairs that overwrite other configuration
  safetyValve:
    #nifi.variable.registry.properties: "${NIFI_HOME}/example1.properties, ${NIFI_HOME}/example2.properties"
    nifi.web.http.network.interface.default: eth0
    # listen to loopback interface so "kubectl port-forward ..." works
    nifi.web.http.network.interface.lo: lo
#    nifi.web.http.host:nifi-cluster.xxx.xxx.com
#    nifi.web.http.port: 8080
  ## Include additional processors
  # customLibPath: "/opt/configuration_resources/custom_lib"

## Include additional libraries in the Nifi containers by using the postStart handler
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
# postStart: /opt/nifi/psql; wget -P /opt/nifi/psql https://jdbc.postgresql.org/download/postgresql-42.2.6.jar

# Nifi User Authentication
auth:
  admin: CN=admin, OU=NIFI
  SSL:
    keystorePasswd: env:PASS
    truststorePasswd: env:PASS
  singleUser:
    username: xxxxxx
    password: xxxxxxxx

  ldap:
    enabled: false
    host: ldap://<hostname>:<port>
    searchBase: CN=Users,DC=example,DC=com
    admin: cn=admin,dc=example,dc=be
    pass: password
    searchFilter: (objectClass=*)
    userIdentityAttribute: cn
    authStrategy: SIMPLE # How the connection to the LDAP server is authenticated. Possible values are ANONYMOUS, SIMPLE, LDAPS, or START_TLS.
    identityStrategy: USE_DN
    authExpiration: 12 hours

  oidc:
    enabled: false
    discoveryUrl: #http://<oidc_provider_address>:<oidc_provider_port>/auth/realms/<client_realm>/.well-known/openid-configuration
    clientId:
    clientSecret:
    claimIdentifyingUser: preferred_username
    ## Request additional scopes, for example profile
    additionalScopes:

## Expose the nifi service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##
openldap:
  enabled: false
  persistence:
    enabled: false #true
  env:
    LDAP_ORGANISATION: # name of your organization e.g. "Example"
    LDAP_DOMAIN: # your domain e.g. "ldap.example.be"
    LDAP_BACKEND: "hdb"
    LDAP_TLS: "true"
    LDAP_TLS_ENFORCE: "false"
    LDAP_REMOVE_CONFIG_AFTER_SETUP: "false"
  adminPassword: #ChangeMe
  configPassword: #ChangeMe
  customLdifFiles:
    1-default-users.ldif: |-
        # You can find an example ldif file at https://github.com/cetic/fadi/blob/master/examples/basic/example.ldif
# headless service
headless:
  type: ClusterIP
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"

# ui service
service:
  type: ClusterIP
  httpPort: 8080
  httpsPort: 8443
  #nodePort: 30231
  #  httpPort: 8080
  annotations: {}
    # loadBalancerIP:
    ## Load Balancer sources
    ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ##
    # loadBalancerSourceRanges:
    # - 10.10.10.0/24
    ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
    # sessionAffinity: ClientIP
    # sessionAffinityConfig:
  #   clientIP:
  #     timeoutSeconds: 10800

  # Enables additional port/ports to nifi service for internal processors
  processors:
    enabled: false
    ports:
      - name: processor01
        port: 7001
        targetPort: 7001
        #nodePort: 30701
      - name: processor02
        port: 7002
        targetPort: 7002
        #nodePort: 30702

## Configure Ingress based on the documentation here: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
  enabled: true
  hosts:
    - nifi-cluster.xxx.xxx.com
  path: /
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-Proto https;
      proxy_set_header X-Forwarded-Port 443;
      proxy_set_header Origin https://nifi-cluster.xxx.xxx.com;
      proxy_set_header Referrer nifi-cluster.xxx.xxx.com;
  #      proxy_set_header 'X-ProxyPort' '80';
  #      proxy_set_header 'X-ProxyScheme' 'http';
  #      proxy_set_header X-ProxyScheme https;
  #      proxy_set_header X-ProxyPort 443;
  #      proxy_set_header X-ProxiedEntitiesChain "<$ssl_client_s_dn>";

  #    nginx.ingress.kubernetes.io/secure-backends: "true"
  #    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
  #    nginx.ingress.kubernetes.io/session-cookie-name: route



  #    nginx.ingress.kubernetes.io/configuration-snippet: |
  #      proxy_set_header X-Forwarded-Proto https;
  #      proxy_set_header X-Forwarded-Port 443;

  # If you want to change the default path, see this issue https://github.com/cetic/helm-nifi/issues/22

# Amount of memory to give the NiFi java heap
jvmMemory: 2g

# Separate image for tailing each log separately and checking zookeeper connectivity
sidecar:
  image: busybox
  tag: "1.32.0"
  imagePullPolicy: "IfNotPresent"

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: true

  # When creating persistent storage, the NiFi helm chart can either reference an already-defined
  # storage class by name, such as "standard" or can define a custom storage class by specifying
  # customStorageClass: true and providing the "storageClass", "storageProvisioner" and "storageType".
  # For example, to use SSD storage on Google Compute Engine see values-gcp.yaml
  #
  # To use a storage class that already exists on the Kubernetes cluster, we can simply reference it by name.
  # For example:
  # storageClass: standard
  #
  # The default storage class is used if this variable is not set.

  accessModes:  [ReadWriteOnce]
  ## Storage Capacities for persistent volumes
  configStorage:
    size: 100Mi
  authconfStorage:
    size: 100Mi
  # Storage capacity for the 'data' directory, which is used to hold things such as the flow.xml.gz, configuration, state, etc.
  dataStorage:
    size: 40Gi
  # Storage capacity for the FlowFile repository
  flowfileRepoStorage:
    size: 40Gi
  # Storage capacity for the Content repository
  contentRepoStorage:
    size: 50Gi
  # Storage capacity for the Provenance repository. When changing this, one should also change the properties.provenanceStorage value above, also.
  provenanceRepoStorage:
    size: 50Gi
  # Storage capacity for nifi logs
  logStorage:
    size: 20Gi

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 800m
  #   memory: 1Gi
  # requests:
  #   cpu: 500m
  #   memory: 500Mi

logresources:
  requests:
    cpu: 100m
    memory: 100Mi
  limits:
    cpu: 150m
    memory: 150Mi

## Enables setting your own affinity. Mutually exclusive with sts.AntiAffinity
## To use it, set sts.AntiAffinity to a value other than "soft" or "hard"
affinity: {}

nodeSelector: {}

tolerations: []

initContainers: {}
  # foo-init:  # <- will be used as container name
  #   image: "busybox:1.30.1"
  #   imagePullPolicy: "IfNotPresent"
  #   command: ['sh', '-c', 'echo this is an initContainer']
  #   volumeMounts:
  #     - mountPath: /tmp/foo
  #       name: foo

extraVolumeMounts: []

extraVolumes: []

## Extra containers
extraContainers: []

terminationGracePeriodSeconds: 30

## Extra environment variables that will be passed onto deployment pods
env:
#  NIFI_WEB_HTTP_PORT: 8080
#  NIFI_WEB_HTTP_HOST: nifi-cluster.xxx.xxx.com
#  NIFI_WEB_HTTPS_PORT: 8443
#  NIFI_WEB_HTTPS_HOST: nifi-cluster.xxx.xxx.com
## Extra environment variables from secrets and config maps
envFrom: []

# envFrom:
#   - configMapRef:
#       name: config-name
#   - secretRef:
#       name: mysecret

## Openshift support
## Use the following variables in order to enable Route and Security Context Constraint creation
openshift:
  scc:
    enabled: false
  route:
    enabled: false
    #host: www.test.com
    #path: /nifi

# ca server details
# Setting this to true would create a nifi-toolkit based CA server
# The CA server will be used to generate the self-signed certificates required for setting up a secured cluster
ca:
  ## If true, enable the nifi-toolkit certificate authority
  enabled: false
  persistence:
    enabled: true
  server: ""
  service:
    port: 9090
  token: sixteenCharacters
  admin:
    cn: admin
  serviceAccount:
    create: false
    name: nifi-ca
  openshift:
    scc:
      enabled: false

# ------------------------------------------------------------------------------
# Zookeeper:
# ------------------------------------------------------------------------------
zookeeper:
  ## If true, install the Zookeeper chart
  ## ref: https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
  enabled: true
  ## If the Zookeeper Chart is disabled a URL and port are required to connect
  url: ""
  port: 2181
  replicaCount: 3

# ------------------------------------------------------------------------------
# Nifi registry:
# ------------------------------------------------------------------------------
registry:
  ## If true, install the Nifi registry
  enabled: false
  url: ""
  port: 80
  ## Add values for the nifi-registry here
  ## ref: https://github.com/dysnix/charts/blob/master/nifi-registry/values.yaml

# Configure metrics
metrics:
  prometheus:
    # Enable Prometheus metrics
    enabled: false
    # Port used to expose Prometheus metrics
    port: 9092
    serviceMonitor:
      # Enable deployment of Prometheus Operator ServiceMonitor resource
      enabled: false
      # Additional labels for the ServiceMonitor
      labels: {}
```


What am I missing here? Thanks.

yossisht9876 avatar Nov 15 '21 20:11 yossisht9876

Same thing for me. I have a similar setup where I use an ingress to expose the NiFi web UI, and it shows exactly the same error you have. I even tried kubectl port-forward service/nifi 8443:8443 and navigated to https://localhost:8443, only to get the following error: An error occurred during a connection to localhost:8443. PR_END_OF_FILE_ERROR

It used to work when we had HTTP endpoints though.
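(For what it's worth, PR_END_OF_FILE_ERROR usually means the browser received an empty or non-TLS reply on an HTTPS port. A quick way to see what the pod itself serves, assuming the same service name as in the command above:)

```shell
# forward the UI port and probe it directly, bypassing the ingress
kubectl port-forward service/nifi 8443:8443 &
# -k accepts NiFi's self-signed certificate
curl -vk https://localhost:8443/nifi
```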

leshibily avatar Nov 17 '21 12:11 leshibily

Right, I have the same errors. Do we have the option to add allowlisted hosts to NiFi? I saw users using NIFI_WEB_PROXY_HOST in order to allowlist hosts, but it's not working for me; I can add it to the pods via env vars, but it's not adding the hosts to the allowlist.

Any other ideas? Thanks.
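Since the chart's properties.safetyValve entries are passed verbatim into nifi.properties (per the comment in the values file above), one thing worth trying (a sketch only; the hostname is a placeholder) is pushing the allowlist through it:

```yaml
properties:
  safetyValve:
    # comma-separated list of every host header the proxy will send
    nifi.web.proxy.host: nifi.xxx.xx.com,nifi.xxx.xx.com:443
```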

yossisht9876 avatar Nov 17 '21 12:11 yossisht9876

Does this comment help?

https://github.com/cetic/helm-nifi/pull/169#discussion_r716947725

banzo avatar Nov 17 '21 14:11 banzo

Same here: installing the chart on Openshift, I get the same error page.

System Error The request contained an invalid host header [nifi-external-OPENSHIFT-ROUTE-URL] in the request [/]. Check for request manipulation or third-party intercept. Valid host headers are [empty] or: 127.0.0.1 127.0.0.1:8443 localhost localhost:8443 [::1] [::1]:8443 nifi-1.nifi-headless.nifikop.svc.cluster.local nifi-1.nifi-headless.nifikop.svc.cluster.local:8443 10.129.3.209 10.129.3.209:8443 0.0.0.0 0.0.0.0:8443

Idan-Maimon avatar Nov 17 '21 14:11 Idan-Maimon

Does this comment help?

#169 (comment)

After adding the following, I was able to get the UI working with the kubectl port-forward command. However, when I try to expose it via an ingress controller, I still get the same error. Any help, folks?

leshibily avatar Nov 17 '21 15:11 leshibily

the web UI works for me after adding this to the ingress configuration:

```yaml
nginx.ingress.kubernetes.io/upstream-vhost: "localhost:8443"
nginx.ingress.kubernetes.io/proxy-redirect-from: "https://localhost:8443"
nginx.ingress.kubernetes.io/proxy-redirect-to: "https://nifi-domain.com"
```

yossisht9876 avatar Nov 17 '21 18:11 yossisht9876

the web UI works for me after adding this to the ingress configuration:

```yaml
nginx.ingress.kubernetes.io/upstream-vhost: "localhost:8443"
nginx.ingress.kubernetes.io/proxy-redirect-from: "https://localhost:8443"
nginx.ingress.kubernetes.io/proxy-redirect-to: "https://nifi-domain.com"
```

Can you output the Nifi ingress rule in YAML here?: kubectl get ingress <ingress-name> -n <namespace-name> -o yaml

Note: You may hide the Nifi URL if it's confidential.

leshibily avatar Nov 17 '21 19:11 leshibily

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    meta.helm.sh/release-name: nifi-helm
    meta.helm.sh/release-namespace: nifi-helm
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-Proto https;
      proxy_set_header X-Forwarded-Port 443;
      proxy_set_header X-ProxyHost https://nifi-xxxx.com;
    nginx.ingress.kubernetes.io/proxy-redirect-from: https://localhost:8443
    nginx.ingress.kubernetes.io/proxy-redirect-to: https://nifi-xxxx.co
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/upstream-vhost: localhost:8443
  creationTimestamp: "2021-11-22T09:54:42Z"
  generation: 1
  labels:
    app: nifi
    app.kubernetes.io/managed-by: Helm
    chart: nifi-1.0.2
    heritage: Helm
    release: nifi-helm
  managedFields:
  - apiVersion: networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:loadBalancer:
          f:ingress: {}
    manager: nginx-ingress-controller
    operation: Update
    time: "2021-11-22T09:55:06Z"
  - apiVersion: networking.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:nginx.ingress.kubernetes.io/backend-protocol: {}
          f:nginx.ingress.kubernetes.io/proxy-redirect-from: {}
          f:nginx.ingress.kubernetes.io/proxy-redirect-to: {}
          f:nginx.ingress.kubernetes.io/upstream-vhost: {}
    manager: kubectl
    operation: Update
    time: "2021-11-22T12:07:55Z"
  - apiVersion: extensions/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubernetes.io/ingress.class: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
          f:nginx.ingress.kubernetes.io/affinity: {}
          f:nginx.ingress.kubernetes.io/configuration-snippet: {}
          f:nginx.ingress.kubernetes.io/ssl-redirect: {}
        f:labels:
          .: {}
          f:app: {}
          f:app.kubernetes.io/managed-by: {}
          f:chart: {}
          f:heritage: {}
          f:release: {}
      f:spec:
        f:rules: {}
    manager: Go-http-client
    operation: Update
    time: "2021-11-22T12:13:14Z"
  name: nifi-helm-ingress
  namespace: nifi-helm
  resourceVersion: "48669666"
  uid: 142d343d-a944-4e7b-8f1f-3dbbcbc56cbc
spec:
  rules:
  - host: nifi-cluster.dev.lusha.co
    http:
      paths:
      - backend:
          service:
            name: nifi-helm
            port:
              number: 8443
        path: /
        pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
    - hostname: internalxxxxxxxxxxxxxus-east-1.elb.amazonaws.com
```

yossisht9876 avatar Nov 18 '21 06:11 yossisht9876

the UI works now, but every time I try to click on something in the UI it sends me back to the login page with the error:

Unable to communicate with NiFi
Please ensure the application is running and check the logs for any errors.

It happens when trying to configure a new processor or just "playing" with the menu options. The pods are up and running, and there are no errors in the app log or in any other pod's logs.

yossisht9876 avatar Nov 18 '21 06:11 yossisht9876

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    meta.helm.sh/release-name: nifi-helm
    meta.helm.sh/release-namespace: nifi-helm
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Forwarded-Proto https;
      proxy_set_header X-Forwarded-Port 443;
      proxy_set_header Origin https://nifi.example.com;
      proxy_set_header Referrer https://nifi.example.com;
    nginx.ingress.kubernetes.io/proxy-redirect-from: https://localhost:8443
    nginx.ingress.kubernetes.io/proxy-redirect-to: https://nifi.example.com
    nginx.ingress.kubernetes.io/upstream-vhost: localhost:8443
  creationTimestamp: "2021-11-16T09:47:20Z"
  generation: 3
  labels:
    app: nifi
    app.kubernetes.io/managed-by: Helm
    chart: nifi-1.0.1
    heritage: Helm
    release: nifi-helm
  managedFields:
  - apiVersion: networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:loadBalancer:
          f:ingress: {}
    manager: nginx-ingress-controller
    operation: Update
    time: "2021-11-16T09:48:06Z"
  - apiVersion: extensions/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubernetes.io/ingress.class: {}
          f:kubernetes.io/tls-acme: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
          f:nginx.ingress.kubernetes.io/affinity: {}
          f:nginx.ingress.kubernetes.io/backend-protocol: {}
          f:nginx.ingress.kubernetes.io/configuration-snippet: {}
          f:nginx.ingress.kubernetes.io/proxy-redirect-from: {}
          f:nginx.ingress.kubernetes.io/proxy-redirect-to: {}
          f:nginx.ingress.kubernetes.io/upstream-vhost: {}
        f:labels:
          .: {}
          f:app: {}
          f:app.kubernetes.io/managed-by: {}
          f:chart: {}
          f:heritage: {}
          f:release: {}
      f:spec:
        f:rules: {}
    manager: Go-http-client
    operation: Update
    time: "2021-11-17T13:49:24Z"
  name: nifi-helm-ingress
  namespace: nifi-helm
  resourceVersion: "44153253"
  uid: gdgdgd-dkdkmd-ddmdxxxxx
spec:
  rules:
  - host: nifi.example.com
    http:
      paths:
      - backend:
          service:
            name: nifi-helm
            port:
              number: 8443
        path: /
        pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
    - hostname: internal-xxxxxxxxxxxxxxxx.xxxxxx.xxxxxx2.us-east-1.elb.amazonaws.com
```

@yossisht9876 this workaround first actually routes to https://nifi.example.com/nifi.example.com and then automatically redirects to https://nifi.example.com/nifi. I don't think this is the right workaround.

Can someone look at this issue as a priority?

leshibily avatar Nov 20 '21 09:11 leshibily

Hi @banzo, I am installing the NiFi cluster using the latest release as a LoadBalancer service (by changing the type to LoadBalancer in values.yaml), but I am getting the same error as above when trying to access the UI. Can you please help with what needs to be done? I have also added the properties mentioned in the comments to the safetyValve properties.

Is it possible to run as a load balancer, and is HTTP still supported?

Also, I am not sure what needs to be set in webProxyHost. Any advice to make this work is appreciated. Thanks.
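For a LoadBalancer service, webProxyHost would need to list whatever host:port clients actually hit, e.g. the load balancer DNS name (a sketch; the hostname below is a placeholder):

```yaml
properties:
  # hypothetical ELB hostname; use the value shown by `kubectl get svc`
  webProxyHost: internal-xxxx.us-east-1.elb.amazonaws.com:8443
```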

Premfer avatar Nov 21 '21 07:11 Premfer

@leshibily I edited my ingress output above; please check whether it works for you.

yossisht9876 avatar Nov 22 '21 13:11 yossisht9876

@leshibily I edited my ingress output above; please check whether it works for you.

That did not work either. I got the error:

Unable to validate the access token.

leshibily avatar Nov 23 '21 11:11 leshibily

Hi, I was facing the same errors and couldn't tell whether they were in the Helm chart or in NiFi itself.

I've deployed a plain NiFi using the base Docker image from Docker Hub, https://hub.docker.com/r/apache/nifi/ .

Then I set up a minimal ingress for testing and hit the first problem:

System Error
The request contained an invalid host header [nifixxx.xxx.xx.co] in the request [/]. Check for request manipulation or third-party intercept.
Valid host headers are [empty] or:
127.0.0.1
127.0.0.1:8443
localhost
localhost:8443
[::1]
[::1]:8443
nifi.xxx.xxx..xx
nifi.xxx.xxx..xx:8443
0.0.0.0
0.0.0.0:8443

I fixed this first problem with the following nginx.ingress.kubernetes.io/backend-protocol annotation:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nifi-ingress-service-internal
  namespace: nifi-test
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: nifi.xxx.xx
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nifi-cluster-ip-service
                port:
                  number: 8443
```

This happens because NiFi is running with HTTPS inside the cluster, so the reverse proxy must be aware of this, and this annotation tells it so. After this setup, I was able to load the web UI and log in normally.

After login, I faced the second problem:

Whenever I click on something on the UI I was redirected to a page with the following message:

Unable to communicate with NiFi
Please ensure the application is running and check the logs for any errors.

With some research, I found how to fix this in the official NiFi documentation for system administrators: https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#proxy_configuration

So, I updated the ingress definition to the following:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nifi-ingress-service-internal
  namespace: nifi
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/upstream-vhost: "localhost:8443"
    nginx.ingress.kubernetes.io/proxy-redirect-from: "https://localhost:8443"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header 'X-ProxyScheme' 'https';
      proxy_set_header 'X-ProxyPort' '443';
spec:
  rules:
    - host: nifi.xxx.xx
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nifi-cluster-ip-service
                port:
                  number: 8443
```

I hope this can help! M

murilolobato avatar Nov 25 '21 13:11 murilolobato

@murilolobato hi, did it fix the "Unable to communicate with NiFi. Please ensure the application is running and check the logs for any errors." issue?

Because I got the same one.

yossisht9876 avatar Nov 25 '21 16:11 yossisht9876


@murilolobato the NiFi URL is redirecting to https://localhost:8443. Do you have any idea why? Any help would be appreciated.

leshibily avatar Dec 10 '21 16:12 leshibily

Hi @leshibily ,

I think you are facing the second problem I mentioned. You should check the annotations section of your ingress definition and ensure you set up the correct settings according to my example and, most importantly, according to the NiFi System Administrator's Guide.

If you have already set the same annotations, ensure that the ingress controller you are using supports them. In my example, I'm using the https://kubernetes.github.io/ingress-nginx/ controller, and the annotations I have provided are compatible with it.

M

murilolobato avatar Dec 11 '21 11:12 murilolobato

Hi @leshibily ,

I think you are facing the second problem I mentioned. You should check the annotations section of your ingress definition, and ensure you set-up the correct settings according to my example and most important, according to the NiFi System Administrators guide.

If you have already set the same annotations, ensure that the ingress controller you are using does support the annotations. In my example, I'm using the https://kubernetes.github.io/ingress-nginx/ controller, and the example annotations I have provided are compatible with it.

M

Please find my nifi ingress rule configuration below.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/configuration-snippet: |-
      proxy_set_header X-ProxyScheme 'https';
      proxy_set_header X-ProxyPort '443';
    nginx.ingress.kubernetes.io/proxy-redirect-from: https://localhost:8443
    nginx.ingress.kubernetes.io/upstream-vhost: localhost:8443
  name: nifi
  namespace: nifi
spec:
  rules:
  - host: nifi-0099.example.com
    http:
      paths:
      - backend:
          service:
            name: nifi
            port:
              number: 8443
        path: /
        pathType: Prefix
```

The login page loads, but once I log in to NiFi it redirects to https://localhost:8443. Did you try logging in? The ingress controller I use is ingress-nginx (https://kubernetes.github.io/ingress-nginx).

leshibily avatar Dec 11 '21 14:12 leshibily

@leshibily hi, were you able to fix this?

shuhaib3 avatar Dec 14 '21 10:12 shuhaib3

The main problem is that NiFi did not support the NIFI_WEB_PROXY_HOST (webProxyHost in the values.yaml file) environment variable in version 1.14.0.

Could you please try using this pull request: #206.

The ingress has also been updated.

zakaria2905 avatar Dec 17 '21 11:12 zakaria2905

@zakaria2905 I tried adding NIFI_WEB_PROXY_HOST in values.yaml using the following, but I am still getting the invalid host header error:

```yaml
env:
  - name: NIFI_WEB_PROXY_HOST
    value: "nifi.test.example.com"
```

error: The request contained an invalid host header [nifi.test.example.com] in the request [/nifi]. Check for request manipulation or third-party intercept.

leshibily avatar Dec 21 '21 15:12 leshibily

@leshibily, after pulling PR #206, I only modified the following lines in the values.yaml file:

```yaml
webProxyHost: nifi.test.local
---
ingress:
  enabled: true
  hosts:
    - nifi.test.local
  path: /
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
```

In addition, I enabled the minikube ingress addon (minikube addons enable ingress). I also set up my /etc/hosts file by adding the minikube IP address and the domain name (nifi.test.local).

And it works
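For reference, the minikube side of that setup is roughly the following (a sketch; nifi.test.local matches the values above):

```shell
minikube addons enable ingress
# point the test hostname at the minikube node IP
echo "$(minikube ip) nifi.test.local" | sudo tee -a /etc/hosts
```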

zakaria2905 avatar Dec 21 '21 21:12 zakaria2905

Hi all, I am installing the NiFi cluster as a LoadBalancer service but am getting the error below.

Any update on this issue?

System Error The request contained an invalid host header [IP:8443] in the request [/nifi]. Check for request manipulation or third-party intercept. Valid host headers are [empty] or: 127.0.0.1 127.0.0.1:8443 localhost localhost:8443 [::1] [::1]:8443 nifilb-0.nifilb-headless.namespace.svc.cluster.local nifilb-0.nifilb-headless.namespace.svc.cluster.local:8443 10.7.1.113 10.7.1.113:8443 0.0.0.0 0.0.0.0:8443

arunbabumm avatar Jan 04 '22 06:01 arunbabumm

Hi all, there is a workaround for this issue. Once the YAML is deployed, you have to edit the statefulset and add the env value:

```yaml
- name: NIFI_WEB_PROXY_HOST
  value: "nifi.test.example.com"
```

Please let us know if this workaround fixes your UI loading issue.
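A sketch of the same workaround without hand-editing the manifest (the statefulset name and namespace here are assumptions; adjust to your release):

```shell
kubectl -n nifi set env statefulset/nifi NIFI_WEB_PROXY_HOST=nifi.test.example.com
```

Changing the env on the statefulset updates the pod template, so the pods roll and pick up the new variable.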

arunbabumm avatar Jan 12 '22 11:01 arunbabumm

@arunbabumm NIFI_WEB_PROXY_HOST is ignored in 1.14.0. What we did instead is to set it directly in the properties section, and we also added some annotations to the ingress. The final values.yaml will be:

```yaml
...
properties:
  webProxyHost: xxx.net
...
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
...
```

k8s version: v1.20.13, chart version: 1.0.5, nifi version: 1.14.0

ilyesAj avatar Feb 14 '22 17:02 ilyesAj

Does this comment help?

#169 (comment)

No, it's not working.

musicmuthu avatar Oct 27 '22 08:10 musicmuthu