Can't seem to make it work with existing ingress in GKE
I'm trying to get Verdaccio running in an existing cluster that already works for other services, e.g. a pypiserver with the following service YAML. (This one works in my cluster. Note that clusterIP is not defined in my local service YAML; it's filled in when the service is created with kubectl apply.)
spec:
  clusterIP: 10.4.13.73
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: pypiserver
  sessionAffinity: None
  type: ClusterIP
And when I look at the service page for this one, it shows a port of 80 and a target port of 8080.
But Verdaccio, installed with Helm, isn't working with the ingress on the public IP, even though it can be reached if I set up a local port forward. The Verdaccio YAML in GKE looks like this (this service does not work, despite looking, to me, very similar to the above):
spec:
  clusterIP: 10.4.6.96
  ports:
  - port: 4873
    protocol: TCP
    targetPort: http
  selector:
    app: verdaccio
    release: npmserver
  sessionAffinity: None
  type: ClusterIP
When I look at the service page for this one, it shows a port of 4873 and a target port of 0, so that feels like it may be the problem, somehow. I don't see any way to explicitly set targetPort (and it seems like that ought to be automatic anyway, since Helm sets up the service and knows where it runs better than I do). I think I'm misunderstanding something, but I can't find any clues after fairly extensive googling. I'm still pretty new to Kubernetes, though, so what I'm doing wrong may be obvious to someone else.
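(For anyone else landing here: a Service's targetPort can be a name rather than a number, in which case it's resolved against the port of the same name declared in each pod. A minimal sketch of how the two halves line up; the names mirror this chart but treat the snippet as illustrative, not the chart's exact output:)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: npmserver-verdaccio
spec:
  ports:
  - port: 4873
    protocol: TCP
    targetPort: http        # resolved against the pod's port *named* "http"
  selector:
    app: verdaccio
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: npmserver-verdaccio
spec:
  selector:
    matchLabels:
      app: verdaccio
  template:
    metadata:
      labels:
        app: verdaccio
    spec:
      containers:
      - name: verdaccio
        image: verdaccio/verdaccio
        ports:
        - name: http        # this name is what the Service's targetPort refers to
          containerPort: 4873
```

So a named targetPort is functionally equivalent to the number, as long as the pod declares a matching port name; the GKE console just doesn't resolve the name, and renders it as 0.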
My values.yaml contains:
service:
  annotations:
  clusterIP: ""
  externalIPs:
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  port: 4873
  type: ClusterIP
  # nodePort: 31873
I've also tried explicitly setting externalIPs and loadBalancerIP, but that didn't seem to work either, and those aren't specified for my other, working services AFAICS, so I don't think they should be needed here either?
Anybody have a clue they can lend me for how this is supposed to be configured with a GKE Ingress?
Hey! Did you configure the ingress section of your values.yaml? Specifically, did you enable the ingress? - https://github.com/verdaccio/charts/blob/master/charts/verdaccio/values.yaml#L44
You can check if the ingress exists with kubectl get ingress -n <your_namespace>
I did not! I thought that meant it would create an ingress, but I already have the ingress?
Did you create it independently? You seem to have configured the service, but not necessarily the ingress. You can try configuring it in the values.yaml file like:
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "<your_ingress_controller>"
  hosts:
  - <your.host.com>
The ingress is preexisting and serves a bunch of other services.
I'll try that.
But I'm not sure how to find out what the ingress.class should be. It's not nginx; it's the GKE...native ingress thing. I don't even know what it's called, and googling isn't helping here (I didn't set this cluster up).
I noticed some of my (well, not mine, my company's) other services have an annotation of cloud.google.com/neg: '{"ingress": true}', but some do not.
OK, maybe it's gce-internal, I'm trying that.
As an aside, is there a way to reload configuration from values.yaml for the running pod without uninstalling/installing it again? The Helm docs aren't helping me with that.
Hmm, that seems to have created a new ingress, which is not what I need to happen. I guess I'm still not getting something fundamental here.
That will depend on the tools you're using. Assuming it's Helm, something like helm upgrade --reuse-values <release_name> verdaccio/verdaccio -n <namespace> --set ingress.enabled=true should work.
Yes, it's helm. I don't want to reuse values, though? I want to apply the new ones? I'm trying to test different settings without having to tear down and build up again.
The upgrade command should replace the values after --set, so you don't have to redeploy just to change one value. You can also do it with a file instead of the --set flag; please check the Helm documentation.
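To apply a whole edited values file to the running release (rather than individual --set flags), something like this should work; the release name, namespace, and file path below are placeholders, substitute your own:

```shell
# Re-render the chart against the full values file instead of --set.
# "npmserver", "production", and values.yaml are assumptions.
helm upgrade npmserver verdaccio/verdaccio \
  -n production \
  -f values.yaml
```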
It doesn't seem to be an issue with Verdaccio, though. You don't necessarily need to create the ingress from the Helm chart, if you have an independent one, and it's properly configured. Can you discuss that with your cluster manager?
Yeah, we're (me and the person who created the cluster/ingress and other services) currently both stumped.
I definitely don't need an ingress to be created by Helm; I just can't figure out how to make it talk to the existing one. I guess I need to download the whole chart and modify that targetPort to see if I can make it act like our existing services (those aren't configured with Helm, just Kubernetes YAML files). I'm kind of out of ideas for what else could be preventing it from working when a local port forward to 4873 does work.
Just so I'm clear, ingress.enabled: true is (as I originally thought) used to make Helm create an ingress, correct? So, since I need to use an existing one, I should go back to false for that.
Yes, if you have an ingress already you may use it, if properly configured. In that case, keep the ingress from the chart disabled.
Thank you! I'll keep banging on it. I don't think I'm significantly further ahead of where I was before, but I guess I have a next step I can try (changing port and targetPort in the templates). Gonna sleep some and hope it makes more sense in the morning.
Hey y'all, I've done a lot of work with the GKE ingress controllers, so I should be able to help untangle this for you. First, @swelljoe, when you say an ingress already exists, do you mean an ingress controller, or did you already create an ingress resource for Verdaccio? If so, can you share it? Feel free to remove anything sensitive, of course.
Regarding the service: any service you expose in GKE with an ingress should have the service annotation cloud.google.com/neg: '{"ingress": true}' unless you are using a GKE version that adds it automatically. I've left it off and things work... mostly, but talking to the Google team, it's better to have it, since you get container-native load balancing. More here: https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing
For the service values: in general I would strongly recommend not specifying the settings you aren't using, and perhaps even leaving out anything that matches the defaults, since they will be merged. Since the default is a ClusterIP service, you really only need to specify the annotations. It's possible that some of the default empty values are triggering template paths they shouldn't, which may be our bug. We really shouldn't spec them in the defaults either.
service:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
In the event you don't have the ingress resource already this should get you what you want
ingress:
  enabled: true
  annotations:
    # for a VPC-internal ingress; for an external one, either remove the annotation or use "gce"
    kubernetes.io/ingress.class: "gce-internal"
  hosts:
  - <your.host.com>
  paths:
  - "/*"  # gce ingress needs a glob where nginx ingress doesn't
This is all assuming of course we don't have any chart bugs which is quite possible as well of course
Ah, you may also need kubernetes.io/ingress.allow-http: "false" as an annotation on the ingress, depending on TLS config. The easiest way to tell is to describe the ingress; it'll tell you to add it.
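On GKE, describing the ingress is also the quickest way to see whether the load balancer considers each backend healthy. The ingress name and namespace below are placeholders:

```shell
# Inspect the ingress; GKE reports per-backend health in the
# ingress.kubernetes.io/backends annotation (HEALTHY/UNHEALTHY),
# along with any events about missing annotations.
kubectl describe ingress my-ingress -n production
```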
Thanks, @kav . I'm working with an existing ingress.
I'd already gone back to having the cloud.google.com/neg: '{"ingress": true}' bit in there (I found some stuff on StackOverflow regarding that, and it seems necessary, so I kept it). That didn't change anything about the problem I have getting the ingress to connect.
My ingress looks like this (condensed and sanitized):
apiVersion: extensions/v1beta1
kind: Ingress
spec:
  rules:
  - host: npmserver.myhost.com
    http:
      paths:
      - backend:
          serviceName: npmserver-verdaccio
          servicePort: 4873
        path: /*
  tls:
  - secretName: my-secret
Which works for all of our other services, but not this one. The service shows OK, but the ingress treats it as unhealthy.
One thing to note: all of our other services have servicePort: 80. I tried that for Verdaccio, but it doesn't seem to make a difference... and I think I'd need to change the port Verdaccio runs on to make that work, and I don't see how to do that (when I change service.port in values.yaml, even a local port forward stops working, so I think some configuration is missing).
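One GKE-specific thing worth ruling out when the ingress marks a backend unhealthy: the load balancer's health check, which by default probes GET / on the serving port and expects an HTTP 200. A BackendConfig lets you pin that down explicitly. Everything here (names, port) is a sketch under the assumption Verdaccio answers 200 on its root path, not a confirmed fix:

```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: verdaccio-hc   # hypothetical name
spec:
  healthCheck:
    type: HTTP
    requestPath: /     # path the load balancer probes
    port: 4873         # Verdaccio's listen port
```

It's attached to the Service via the annotation cloud.google.com/backend-config: '{"default": "verdaccio-hc"}'.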
Is this a typo in the ingress, or just in your comment? You wrote serviceName: vpmserver-verdaccio; I'm guessing it should be serviceName: npmserver-verdaccio.
Er, just a typo in the comment. It was correct in the actual ingress.
My ingress, created from the Helm chart, for AWS/EKS, looks something like this:
apiVersion: extensions/v1beta1
kind: Ingress
spec:
  rules:
  - host: npm.domain.com
    http:
      paths:
      - backend:
          serviceName: verdaccio-verdaccio
          servicePort: 4873
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - npm.domain.com
    secretName: sec-npm
status:
  loadBalancer:
    ingress:
    - hostname: domain.amazonaws.com
Can I ask what your helm-generated Service looks like? I suspect that's where mine is falling down... my working services look like this (stripped of non-ingress-related stuff):
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  name: pypiserver
spec:
  clusterIP: 10.4.13.73
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: pypiserver
  type: ClusterIP
While the helm-generated one looks like this (port: 4873 and targetPort: http is where my confusion lies, since all my working ones have port 80 targeting the container port, like 8080, where this chart produces a port of 4873 and a target port of http, which seems to become 0 as far as GKE is concerned):
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  name: npmserver-verdaccio
  selfLink: /api/v1/namespaces/production/services/npmserver-verdaccio
spec:
  clusterIP: 10.4.7.145
  ports:
  - port: 4873
    protocol: TCP
    targetPort: http
  selector:
    app: verdaccio
    release: npmserver
  type: ClusterIP
Could this be a namespace issue? Is your service in the same namespace as the ingress? My service looks pretty similar to yours.
Yes, namespace is definitely correct.
Can you share the output of kubectl get ingress -n <namespace> and kubectl get svc -n <namespace>?
svc (all the unrelated stuff removed):
npmserver-verdaccio   ClusterIP   10.4.7.145    <none>   4873/TCP   48m
pypiserver            ClusterIP   10.4.13.73    <none>   80/TCP     174d
The ingress is complicated and not sensible-looking enough to post (it's got a bazillion hosts, as it provides ingress for a bunch of services), but I can say with 100% certainty that npmserver-verdaccio is among them. I notice it has PORTS of 80, 443, and all of my other (working) services are also using port 80.
Just for completeness, ingress cleaned up and sanitized:
NAME         HOSTS                                                          ADDRESS       PORTS     AGE
my-ingress   npmserver.mydomain.com,pypiserver.mydomain.com + 47 more...    34.99.99.99   80, 443   574d
Hum, looks similar to mine. You may want to further sanitize and remove the public IP address from your last comment.
Can you replace targetPort: http in your service with targetPort: 4873? It seems to get its name here: https://github.com/verdaccio/charts/blob/master/charts/verdaccio/templates/deployment.yaml#L43
By the way, does npmserver.mydomain.com exist? Is it registered/does your DNS service translate it properly? Sorry, there's just a lot of options where this can be wrong.
Yes. DNS is correct.
And yeah, that's my next thing to try (replacing targetPort and port in the templates), as that's the only thing I can see that differs from my other, working services. I don't know how to do that yet; I'm still reading the Helm docs on how to install from a local chart directory, or how to override one template file with a local one. (This is the first time I've ever used Helm, still learning.)
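For anyone following along, installing from a locally modified copy of the chart is straightforward; the release name, namespace, and paths below are placeholders:

```shell
# Fetch and unpack the chart so its templates can be edited locally.
helm pull verdaccio/verdaccio --untar
# ...edit verdaccio/templates/deployment.yaml (or the service template)...
# Install or upgrade from the local directory instead of the repo.
helm upgrade --install npmserver ./verdaccio -n production -f values.yaml
```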