docs.konghq.com
ARM64 support: some of the images used in docs don't run on arm64
Where is the problem?
https://docs.konghq.com/kubernetes-ingress-controller/latest/guides/using-gateway-api/#set-up-an-echo-server
What happened?
When following the guide at https://docs.konghq.com/kubernetes-ingress-controller/latest/guides/using-gateway-api/#set-up-an-echo-server I deployed the https://bit.ly/echo-service manifest (https://gist.github.com/hbagdi/0d833181239a39172ba70cbec080bdb9), which deploys gcr.io/kubernetes-e2e-test-images/echoserver:2.2.
This image does not run on arm64. The following error appears in the deployed container's logs:
PANIC: unprotected error in call to Lua API (bad light userdata pointer)
Here's an issue that documents that: https://github.com/kubernetes/ingress-nginx/issues/2802
While we do not control gcr.io/kubernetes-e2e-test-images/echoserver, we can certainly use a different image to showcase our solution.
What did you expect to happen?
No error in the deployed container's logs.
Code of Conduct and Community Expectations
- [X] I agree to follow this project's Code of Conduct
- [X] I agree to abide by the Community Expectations
I am trying to follow along with your documentation and failing. I have tried using a substitute echo server, but it's failing too. What do you suggest?
👋
Either we reach out to the community that maintains gcr.io/kubernetes-e2e-test-images/echoserver (which can take some time to figure out, and then to get a response) or, preferably, we use a different echo server (e.g. https://github.com/postmanlabs/httpbin) that we fork and maintain in Kong's org.
There's already an issue asking httpbin to provide arm64 builds, but still no luck: https://github.com/postmanlabs/httpbin/issues/643. If you really want to test it out as of today, you can try https://hub.docker.com/r/arnaudlacour/httpbin, a fork of httpbin that already has arm64 images pushed to Docker Hub.
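As an illustration of the workaround, here is a sketch of swapping the amd64-only echoserver image for the arm64-capable httpbin fork in a local copy of the manifest. The file names are hypothetical, and the stand-in manifest below is heavily trimmed; note that httpbin also listens on a different port than echoserver, so ports and probes would need adjusting too.

```shell
# Stand-in for the echo-service manifest from the guide
# (trimmed to the relevant part, for illustration only).
cat > echo-service.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  template:
    spec:
      containers:
      - name: echo
        image: gcr.io/kubernetes-e2e-test-images/echoserver:2.2
EOF

# Swap the amd64-only echoserver image for the arm64-capable httpbin fork.
sed 's#gcr.io/kubernetes-e2e-test-images/echoserver:2.2#arnaudlacour/httpbin#' \
  echo-service.yaml > echo-service-arm64.yaml

# Print the rewritten image line to confirm the swap.
grep 'image:' echo-service-arm64.yaml
```

The rewritten manifest could then be applied with `kubectl apply -f echo-service-arm64.yaml` on an arm64 cluster.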
Thanks, I followed your advice and used the httpbin image. However, when following the documentation I see differences in behaviour, which I assume come down to differences between the echo server and httpbin.
- I was able to get Kong to route to httpbin successfully after modifying the service definition so that it operates on port 80
- I also had to route to an httpbin endpoint that actually responds: /foo, for example, doesn't respond in httpbin but /json does
- When attempting to use the request-id plugin I only see the expected response from the URL and nothing else.
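For reference, the adjustments described above might look roughly like this. This is a sketch, not the documented manifest, assuming the arnaudlacour/httpbin fork, which serves HTTP on container port 80 (unlike echoserver's 8080):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80   # httpbin listens on 80, unlike echoserver's 8080
  selector:
    app: echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: arnaudlacour/httpbin  # arm64-capable httpbin fork
        ports:
        - containerPort: 80
```

With this in place, requests routed to an endpoint httpbin implements (such as /json) get a response, while paths it doesn't serve (such as /foo) do not, as noted above.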

Thanks for your suggestion, but this issue still doesn't allow learners to follow along with the documentation.
The gcr.io/kubernetes-e2e-test-images/echoserver:2.2 image is actually deprecated (see kubernetes/k8s.io#1522); its last update was in 2018.
One of the problems is the lack of arm support in the lua-nginx module (https://github.com/kubernetes/ingress-nginx/issues/2802), even though the image is built for arm.
The new, non-deprecated image is registry.k8s.io/e2e-test-images/echoserver:2.5.
However, it still seems to have some problems on arm64:
$ kubectl logs echo-789f6ccc66-t9mnb
Generating self-signed cert
Generating a RSA private key
.............................................................................................................................................+++++
.+++++
writing new private key to '/certs/privateKey.key'
-----
Starting nginx
2022/06/13 18:56:09 [alert] 14#14: detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
2022/06/13 18:56:09 [error] 14#14: lua_load_resty_core failed to load the resty.core module from https://github.com/openresty/lua-resty-core; ensure you are using an OpenResty release from https://openresty.org/en/download.html (rc: 2, reason: module 'resty.core' not found:
no field package.preload['resty.core']
no file './resty/core.lua'
no file '/usr/share/luajit-2.1.0-beta3/resty/core.lua'
no file '/usr/local/share/lua/5.1/resty/core.lua'
no file '/usr/local/share/lua/5.1/resty/core/init.lua'
no file '/usr/share/lua/5.1/resty/core.lua'
no file '/usr/share/lua/5.1/resty/core/init.lua'
no file '/usr/share/lua/common/resty/core.lua'
no file '/usr/share/lua/common/resty/core/init.lua'
no file './resty/core.so'
no file '/usr/local/lib/lua/5.1/resty/core.so'
no file '/usr/lib/lua/5.1/resty/core.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file './resty.so'
no file '/usr/local/lib/lua/5.1/resty.so'
no file '/usr/lib/lua/5.1/resty.so'
no file '/usr/local/lib/lua/5.1/loadall.so')
nginx: [error] lua_load_resty_core failed to load the resty.core module from https://github.com/openresty/lua-resty-core; ensure you are using an OpenResty release from https://openresty.org/en/download.html (rc: 2, reason: module 'resty.core' not found:
no field package.preload['resty.core']
no file './resty/core.lua'
no file '/usr/share/luajit-2.1.0-beta3/resty/core.lua'
no file '/usr/local/share/lua/5.1/resty/core.lua'
no file '/usr/local/share/lua/5.1/resty/core/init.lua'
no file '/usr/share/lua/5.1/resty/core.lua'
no file '/usr/share/lua/5.1/resty/core/init.lua'
no file '/usr/share/lua/common/resty/core.lua'
no file '/usr/share/lua/common/resty/core/init.lua'
no file './resty/core.so'
no file '/usr/local/lib/lua/5.1/resty/core.so'
no file '/usr/lib/lua/5.1/resty/core.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file './resty.so'
no file '/usr/local/lib/lua/5.1/resty.so'
no file '/usr/lib/lua/5.1/resty.so'
no file '/usr/local/lib/lua/5.1/loadall.so')
I intended to start work on #4157 but decided to revisit this one first, since the two are connected: the new config used as part of #4157 can already use a new image (whatever is chosen as a replacement for registry.k8s.io/e2e-test-images/echoserver).
Hence my proposal is to use something that resembles the functionality registry.k8s.io/e2e-test-images/echoserver provides. IMHO it doesn't matter all that much what it is, as long as it:
- works on x64 and arm64
- can return the pod name and IP
Concretely, I propose gcr.io/k8s-staging-ingressconformance/echoserver:v20220815-e21d1a4, which is what the Gateway API conformance suite uses: https://github.com/kubernetes-sigs/gateway-api/blob/bfbf87882b6bb83e7bf0846aca29d783002690a5/conformance/base/manifests.yaml#L97
An example manifest that we could then use in the docs:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo
  name: echo
spec:
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: http
  - protocol: TCP
    name: https
    port: 443
    targetPort: https
  selector:
    app: echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: echo
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - image: gcr.io/k8s-staging-ingressconformance/echoserver:v20220815-e21d1a4
        name: echo
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 8443
          name: https
        env:
        - name: HTTP_PORT
          value: "8080"
        - name: HTTPS_PORT
          value: "8443"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_NAME
          value: echo
        resources:
          requests:
            cpu: 10m
Example response when used with Kong Gateway as the proxy:
{
  "path": "/",
  "host": "172.18.1.1",
  "method": "GET",
  "proto": "HTTP/1.1",
  "headers": {
    "Accept": [
      "*/*"
    ],
    "Connection": [
      "keep-alive"
    ],
    "User-Agent": [
      "curl/7.79.1"
    ],
    "X-Forwarded-For": [
      "10.244.0.1"
    ],
    "X-Forwarded-Host": [
      "172.18.1.1"
    ],
    "X-Forwarded-Path": [
      "/echo"
    ],
    "X-Forwarded-Port": [
      "80"
    ],
    "X-Forwarded-Prefix": [
      "/echo"
    ],
    "X-Forwarded-Proto": [
      "http"
    ],
    "X-Real-Ip": [
      "10.244.0.1"
    ]
  },
  "namespace": "default",
  "ingress": "",
  "service": "echo",
  "pod": "echo-698488666b-7jcfv"
}
cc: @Kong/team-k8s
Or, even better, let's use registry.k8s.io/e2e-test-images/agnhost from the set of official Kubernetes test images: https://github.com/kubernetes/kubernetes/tree/master/test/images/agnhost#netexec
That way, the YAML manifest we use can be something like:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo
  name: echo
spec:
  ports:
  - protocol: TCP
    name: http
    port: 80
    targetPort: http
  selector:
    app: echo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: echo
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: registry.k8s.io/e2e-test-images/agnhost:2.40
        command:
        - /agnhost
        - netexec
        - --http-port=8080
        ports:
        - containerPort: 8080
          name: http
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        resources:
          requests:
            cpu: 10m
We can then use the /hostname path in the guides, so that the response returns the hostname of the pod that was hit:
$ curl <KONG_GATEWAY_LOAD_BALANCER_IP>/echo/hostname
echo-658c5ff5ff-n8lcv
It seems there's an internal effort to replace https://bit.ly/echo-service (https://konghq.atlassian.net/browse/DOCU-2376), so we're on a good path.
CC: @rspurgeon