Knative + Kourier - use with AWS App Mesh
Hi there.
Problem statement: Use Knative + Kourier with AWS App Mesh sidecar injection
Detail:
We know it's possible to use Istio's service mesh (sidecar injection) with Knative. I have also seen resources for doing the same with Linkerd.
It would be great if we could get the same for AWS App Mesh.
I've tried injecting the Envoy sidecar with Knative + App Mesh. The sidecar injects and the pod starts up fine with 3/3 containers, but traffic/routing stops working as soon as the Envoy sidecar is added. I can see that once the Envoy container is added, the Kubernetes ExternalName Service is not being created (a sketch of what that Service normally looks like follows the output below):
(you will see two Services below instead of the three Knative usually creates; also note the "Unknown" statuses and the "RevisionMissing" reason)
NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                      AGE
service/my-service-01            ClusterIP   172.20.198.104   <none>        80/TCP                                       11m
service/my-service-01-private    ClusterIP   172.20.218.251   <none>        80/TCP,9090/TCP,9091/TCP,8022/TCP,8012/TCP   11m

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-service-01-deployment    0/1     1            0           11m

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/my-service-01-deployment-59d9bc7778    1         1         0       11m

NAME                                            LATESTCREATED   LATESTREADY   READY     REASON
configuration.serving.knative.dev/my-service    my-service-01                 Unknown

NAME                                          URL                                                  LATESTCREATED   LATESTREADY   READY     REASON
service.serving.knative.dev/proxy-services    http://my-service.proxy-services-dev.example.com    my-service-01                 Unknown   RevisionMissing

NAME                                          CONFIG NAME   K8S SERVICE NAME   GENERATION   READY     REASON      ACTUAL REPLICAS   DESIRED REPLICAS
revision.serving.knative.dev/my-service-01    my-service                       1            Unknown   Deploying   0                 1

NAME                                        URL                                                      READY     REASON
route.serving.knative.dev/proxy-services    http://proxy-services.proxy-services-dev.example.com    Unknown   RevisionMissing

NAME                                      ARN                                                                                                            AGE
virtualnode.appmesh.k8s.aws/my-service    arn:aws:appmesh:af-south-1:220716160778:mesh/proxy-services-dev/virtualNode/my-service_proxy-services-dev    11m
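For reference, when the sidecar is not injected and the Revision becomes Ready, Knative normally creates a public ExternalName Service (named after the Route) that points at the Kourier gateway. A rough sketch of what I'd expect that to look like here; the name is a placeholder and the externalName assumes a default net-kourier install in the kourier-system namespace, so treat it as illustrative only:

apiVersion: v1
kind: Service
metadata:
  name: proxy-services              # placeholder: normally matches the Route name
  namespace: proxy-services-dev
spec:
  type: ExternalName
  # assumption: default Kourier install; the ExternalName targets Kourier's internal gateway Service
  externalName: kourier-internal.kourier-system.svc.cluster.local

Since this Service is only created once the Revision is Ready, its absence here is probably just a downstream symptom of the Revision being stuck in Deploying (0/1 READY above).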
My manifests:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: {{ .Chart.Name }}
  namespace: {{ .Values.namespace }}
spec:
  template:
    metadata:
      name: {{ .Chart.Name }}-01
      annotations:
        autoscaling.knative.dev/minScale: "1"
      labels:
        application: {{ .Chart.Name }}
    spec:
      serviceAccountName: {{ .Chart.Name }}
      containers:
        - name: {{ .Chart.Name }}
          image: "***"
          ports:
            - containerPort: 8443
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: Mesh
metadata:
  name: {{ .Values.namespace }}
spec:
  namespaceSelector:
    matchLabels:
      mesh: {{ .Values.namespace }}
---
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace }}
  labels:
    mesh: {{ .Values.namespace }}
    appmesh.k8s.aws/sidecarInjectorWebhook: enabled
---
apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: {{ .Chart.Name }}
  namespace: {{ .Values.namespace }}
spec:
  podSelector:
    matchLabels:
      application: {{ .Chart.Name }}
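For illustration only (not something I deploy today): the VirtualNode above only carries a podSelector, and I would expect a fuller definition to also declare listeners and service discovery so App Mesh knows how to reach the pod. A hedged sketch, where the port and hostname are guesses on my part (the Knative queue-proxy port and the Revision's private Service) rather than anything I've validated:

apiVersion: appmesh.k8s.aws/v1beta2
kind: VirtualNode
metadata:
  name: {{ .Chart.Name }}
  namespace: {{ .Values.namespace }}
spec:
  podSelector:
    matchLabels:
      application: {{ .Chart.Name }}
  listeners:
    - portMapping:
        port: 8012        # guess: Knative queue-proxy HTTP port (see 8012/TCP on the private Service above)
        protocol: http
  serviceDiscovery:
    dns:
      # guess: point DNS-based service discovery at the Knative private Service for the Revision
      hostname: {{ .Chart.Name }}-01-private.{{ .Values.namespace }}.svc.cluster.local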
I've tested the same Knative Service without App Mesh, and traffic routing works fine; it only breaks as soon as I use App Mesh sidecar injection.
This could be a great feature to add to Knative. Reason: we run an AWS EKS Fargate cluster. Fargate does not support the Istio CNI, so it's not possible for us to use any service mesh's sidecar injection on Fargate except for App Mesh. This is a known AWS Fargate limitation.
Using AWS App Mesh with Knative would therefore be valuable for anyone running AWS EKS Fargate clusters.
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
/reopen
@Richardmbs12: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle stale
This issue or pull request is stale because it has been open for 90 days with no activity.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
/lifecycle stale
I'm going to close this out. This probably requires someone experienced with App Mesh, and there's currently no one in the community with that experience.
For the Istio mesh to work there's a bunch of logic we have in net-istio (e.g. setting up peer authentication) that would probably also need to happen for App Mesh.
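For context on the kind of logic meant here: with Istio, Knative's mesh support leans on things like a PeerAuthentication policy so that mesh mTLS doesn't break traffic between Knative components, and an App Mesh integration would presumably need an equivalent worked out. A rough illustration of the Istio side only (PERMISSIVE here is an example setting, not a recommendation):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: knative-serving   # example namespace; in practice applied per namespace as needed
spec:
  mtls:
    mode: PERMISSIVE           # accepts both mTLS and plaintext, e.g. from components outside the mesh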