"Services" incorrecly imply Kubernetes services
Throughout the documentation, Workloads / Pods are referred to as "Services". While this may make sense conceptually (a typical backend application could be called a "service", e.g. a "microservice"), the term is dangerously synonymous with the Kubernetes Service resource.
In practice, the documentation pages make it seem as if Skupper were about exposing Kubernetes Services across clusters, while in reality only Pods can be selected! Take these instances, for example:
> After creating an application network by linking sites, you can expose services from one site using connectors

https://skupper.io/docs/kube-yaml/service-exposure.html
> Our simple HTTP application has two services
>
> Step 6: Expose your services
> You now have a Skupper network capable of multi-cluster communication, but no services are attached to it.

https://skupper.io/start/index.html
Personally, it took me about 2-3 hours to figure out that this was the reason my Listener kept being in the Pending state. This was further reinforced by the confusing false flag of the Connector reporting that it found a Listener, while the Listener complained about "No matching connector". Additionally, the CLI supports the `--workload service/<resource-name>` flag, which further implies k8s-Service-backed Connectors.
"Fixing" this issue is quite complex. A consistent naming would need to be found and implemented throughout numerous pages of documentation. Still, I believe that, given the context of the Skupper project, using "Services" in this way is incredibly dangerous and easily misleading to outsiders. I was certain I was getting k8s-like k8s-service-to-k8s-service communication, up until reading the detail that the Connector's selector "identifies the pods to connect to".
@SIMULATAN wow, and apologies, I feel we messed up here. What we're trying to say is that after creating your network, skupper services can be exposed from one site and consumed from another site. And sites aren't always k8s, so 'service' is meant to be generic somehow. I feel like we did a better job in v1 docs? So, reintroducing a few things from that version:
- 'a service that communicates on the application network' - distinguish from k8s services
- When describing connectors and listeners, be more explicit about what they are bound to (deployments, pods, etc)
- Maybe give k8s services a specific mention
Would that help?
Note that you can expose service workloads, i.e. `--workload service/<resource-name>` is supported; I'm not sure why you encountered a problem?
@pwright hi, thank you for your considerations! No need to apologize, I am super grateful for Skupper, as there's a genuine lack of no-bullshit, low-complexity tools to connect k8s clusters.
I can absolutely see where you're coming from. Having worked on the project for a decent while, your perspective certainly differs from mine. Furthermore, if I'm being honest, I don't have a good alternative name either. In fact, while casually talking to my dad about this matter, I accidentally used "service" in the 'incorrect' fashion multiple times, for lack of a better word.
Your suggestions sound good, I believe that even just adding a simple notice pointing out the distinction would greatly reduce the risk of this happening again. Specifying the bindings would be even better, especially because I still haven't quite figured it out.
> Note that you can expose service workloads, ie `--workload service/<resource-name>` is supported, I'm not sure why you encountered a problem?
See, that's just what I thought, but the devil is in the detail.
What the CLI does isn't to point the Connector at the Service, but to extract the Service's `selector` field and use that:
https://github.com/skupperproject/skupper/blob/a314091062a189d24cf5c4c9157d95ca417ac88a/internal/cmd/skupper/connector/kube/connector_create.go#L179
Now, this.. "hack"..? probably does the job for 99.9% of use cases. Unfortunately, mine falls within the remaining 0.1%: I want to expose the Kubernetes API service (kubernetes.default.svc) to another cluster to let ArgoCD connect to it using a ServiceAccount. The kubernetes Service is quite strange in that it doesn't point to specific Pods via a selector. Consequently, there is no selector for the CLI to extract:
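For illustration, this is roughly what the built-in API server Service looks like on a typical cluster (a sketch; the exact `clusterIP` and ports vary by distribution):

```yaml
# Typical shape of the built-in `kubernetes` Service (illustrative).
apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: default
spec:
  # No `selector` field here: the endpoints are maintained directly by the
  # API server, so `--workload service/kubernetes` finds nothing to copy
  # into the Connector.
  clusterIP: 10.96.0.1
  ports:
    - name: https
      port: 443
      targetPort: 6443
```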
To work around this issue, I ended up creating a Proxy container:
```yaml
- name: socat-proxy
  image: alpine/socat:1.8.0.3@sha256:e8fac892e20ed7e6c1c8e2b7cd1c1efb3c00bfd5b2afe7055db11407f5bb73b8
  args:
    - TCP-LISTEN:6443,fork,bind=0.0.0.0
    - TCP:kubernetes.default.svc:443
  ports:
    - containerPort: 6443
```
This solution works fine for me, although it certainly is quite the workaround.
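For completeness, the proxy Pod can then be targeted by a regular selector-based Connector. A minimal sketch, assuming the proxy Pod carries an `app: socat-proxy` label and that `kubeapi` is used as the routing key (both are my assumptions, not from the thread):

```yaml
# Sketch: Connector selecting the socat proxy pod.
# `app=socat-proxy` and `kubeapi` are illustrative names.
apiVersion: skupper.io/v2alpha1
kind: Connector
metadata:
  name: kubeapi
spec:
  routingKey: kubeapi
  selector: app=socat-proxy
  port: 6443
  type: tcp
```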
I believe it may be possible to use the kube-apiserver Pods in the kube-system namespace, but since I'm trying my best to lock this cluster down as well as possible for educational reasons, I want to avoid that.
Let me know if I can be of any further help. Thanks again and have a great day!
@SIMULATAN thanks again for chiming in.
There's an alternative to your workaround that doesn't need a proxy pod. I'm sure that we could be doing a better job in highlighting this feature and how it differs from the selector based behavior.
The Connector spec has a `host` field that is mutually exclusive with the `selector` field. I suspect what you are looking for is something like this:
```console
$ skupper connector generate --host kubernetes.default.svc kubeapi 443
apiVersion: skupper.io/v2alpha1
kind: Connector
metadata:
  name: kubeapi
spec:
  host: kubernetes.default.svc
  port: 443
  routingKey: kubeapi
  type: tcp
```
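On the consuming site, a Listener with the same routing key would then expose the API server as a local service. A sketch under the assumption that the routing key is `kubeapi` and that the local service name `kubeapi` is acceptable:

```yaml
# Sketch: matching Listener on the consuming site.
# `kubeapi` (both the routing key and the local host name) is illustrative.
apiVersion: skupper.io/v2alpha1
kind: Listener
metadata:
  name: kubeapi
spec:
  routingKey: kubeapi
  host: kubeapi
  port: 443
  type: tcp
```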