illuminatio
kubernetes.client.exceptions.ApiException: (422)
Tried this demo on a plain k8s 1.16 cluster on GKE.
Freshly installed illuminatio today, so I presume it's version 1.4.0
(BTW, can I query the version via the CLI?).
Did an illuminatio clean run, which resulted in the following error.
Starting cleaning resources with policies ['on-request', 'always']
Finished cleanUp
Starting test generation and run.
Generated 48 cases in 1.8740 seconds
FROM TO PORT
default:* *:* *
kube-system:* *:* *
monitoring:* *:* *
production:* *:* *
default:* default:* -*
*:* kube-system:k8s-app=kube-dns -16609
*:* kube-system:* -11776
*:* kube-system:app=traefik -40465
*:* monitoring:* -2791
*:* monitoring:app=prometheus,component=server -56189
illuminatio-inverted-namespace=monitoring:illuminatio-inverted-app=prometheus,illuminatio-inverted-component=server monitoring:app.kubernetes.io/name=kube-state-metrics -8080
illuminatio-inverted-namespace=monitoring:app=prometheus,component=server monitoring:app.kubernetes.io/name=kube-state-metrics -8080
namespace=monitoring:illuminatio-inverted-app=prometheus,illuminatio-inverted-component=server monitoring:app.kubernetes.io/name=kube-state-metrics -8080
*:* monitoring:app=prometheus,component=node-exporter -36725
*:* monitoring:app=prometheus,component=alertmanager -53931
*:* monitoring:app=prometheus,component=kube-state-metrics -27247
namespace=kube-system:illuminatio-inverted-app=traefik production:* -3000
production:illuminatio-inverted-run=nosqlclient production:* -mongodb
illuminatio-inverted-production:run=nosqlclient production:* -mongodb
illuminatio-inverted-namespace=monitoring:app=prometheus,component=server production:* -metrics
illuminatio-inverted-production:illuminatio-inverted-run=nosqlclient production:* -mongodb
illuminatio-inverted-namespace=kube-system:app=traefik production:* -3000
namespace=monitoring:illuminatio-inverted-app=prometheus,illuminatio-inverted-component=server production:* -metrics
illuminatio-inverted-namespace=monitoring:illuminatio-inverted-app=prometheus,illuminatio-inverted-component=server production:* -metrics
illuminatio-inverted-namespace=kube-system:illuminatio-inverted-app=traefik production:* -3000
production:illuminatio-inverted-run=nosqlclient production:app.kubernetes.io/name=mongodb -mongodb
illuminatio-inverted-production:run=nosqlclient production:app.kubernetes.io/name=mongodb -mongodb
illuminatio-inverted-namespace=monitoring:app=prometheus,component=server production:app.kubernetes.io/name=mongodb -metrics
illuminatio-inverted-production:illuminatio-inverted-run=nosqlclient production:app.kubernetes.io/name=mongodb -mongodb
namespace=monitoring:illuminatio-inverted-app=prometheus,illuminatio-inverted-component=server production:app.kubernetes.io/name=mongodb -metrics
illuminatio-inverted-namespace=monitoring:illuminatio-inverted-app=prometheus,illuminatio-inverted-component=server production:app.kubernetes.io/name=mongodb -metrics
namespace=kube-system:illuminatio-inverted-app=traefik production:run=nosqlclient -3000
illuminatio-inverted-namespace=kube-system:app=traefik production:run=nosqlclient -3000
illuminatio-inverted-namespace=kube-system:illuminatio-inverted-app=traefik production:run=nosqlclient -3000
*:* kube-system:k8s-app=kube-dns 53
namespace=monitoring:app=prometheus,component=server kube-system:app=traefik 8080
*:* kube-system:app=traefik 80
*:* kube-system:app=traefik 443
namespace=monitoring:app=prometheus,component=server monitoring:app.kubernetes.io/name=kube-state-metrics 8080
namespace=monitoring:app=prometheus,component=server monitoring:app=prometheus,component=node-exporter metrics
namespace=kube-system:app=traefik monitoring:app=prometheus,component=server 9090
*:* monitoring:app=prometheus,component=alertmanager 9093
monitoring:app=prometheus,component=server monitoring:app=prometheus,component=alertmanager *
*:* monitoring:app=prometheus,component=kube-state-metrics 8080
monitoring:app=prometheus,component=server monitoring:app=prometheus,component=kube-state-metrics *
production:run=nosqlclient production:app.kubernetes.io/name=mongodb mongodb
namespace=monitoring:app=prometheus,component=server production:app.kubernetes.io/name=mongodb metrics
namespace=kube-system:app=traefik production:run=nosqlclient 3000
Traceback (most recent call last):
File "/home/myuser/.local/bin/illuminatio", line 8, in <module>
sys.exit(cli())
File "/usr/lib/python3/dist-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/usr/lib/python3/dist-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/usr/lib/python3/dist-packages/click/core.py", line 1163, in invoke
rv.append(sub_ctx.command.invoke(sub_ctx))
File "/usr/lib/python3/dist-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/lib/python3/dist-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/myuser/.local/lib/python3.8/site-packages/illuminatio/illuminatio.py", line 164, in run
) = execute_tests(cases, orch, cri_socket)
File "/home/myuser/.local/lib/python3.8/site-packages/illuminatio/illuminatio.py", line 212, in execute_tests
) = orch.ensure_cases_are_generated(core_api)
File "/home/myuser/.local/lib/python3.8/site-packages/illuminatio/test_orchestrator.py", line 422, in ensure_cases_are_generated
) = self._find_or_create_cluster_resources_for_cases(cases_dict, core_api)
File "/home/myuser/.local/lib/python3.8/site-packages/illuminatio/test_orchestrator.py", line 358, in _find_or_create_cluster_resources_for_cases
) = self._get_target_names_creating_them_if_missing(target_dict, api)
File "/home/myuser/.local/lib/python3.8/site-packages/illuminatio/test_orchestrator.py", line 282, in _get_target_names_creating_them_if_missing
resp = api.create_namespaced_service(namespace=host.namespace, body=svc)
File "/home/myuser/.local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 8304, in create_namespaced_service
return self.create_namespaced_service_with_http_info(namespace, body, **kwargs) # noqa: E501
File "/home/myuser/.local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 8399, in create_namespaced_service_with_http_info
return self.api_client.call_api(
File "/home/myuser/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "/home/myuser/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
File "/home/myuser/.local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 391, in request
return self.rest_client.POST(url,
File "/home/myuser/.local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 274, in POST
return self.request("POST", url,
File "/home/myuser/.local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 233, in request
raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (422)
Hey, sorry for answering so late, I'll take a look.
There is currently no command that shows the version, but you can use pip3 show illuminatio to see it.
Thanks. I can now confirm it's illuminatio 1.4.0 :-)
$ pip3 show illuminatio
Name: illuminatio
Version: 1.4.0
I cannot reproduce the issue here (I only have a GKE 1.17 cluster readily available). Can you rerun illuminatio with debug logging for me? illuminatio -v DEBUG run
The error seems to depend on the netpols applied. I worked with this demo repo and applied the first three netpols described in the readme on a bare GKE 1.16 cluster (you can reproduce this with the Terraform script provided in the same repo).
Please note the commit ID linked in the demo repo above, because the latest commit results in a different issue (#131).
Find attached the output of illuminatio -v DEBUG run > illuminatio.log 2>&1
illuminatio.log
Hi @schnatterer, thanks for reporting this issue.
The attached log seems to be the result of another issue (most likely old runners/results were still present).
However, the API exception could be related to an issue I discovered while working on #134: the name field of a k8s Service port is optional when the Service has only one port, but mandatory as soon as it has multiple ports. So if illuminatio creates a "dummy" pod + Service with multiple unnamed ports, that leads to exactly this kind of API exception.
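For illustration, a minimal sketch with the Python kubernetes client (the service name and labels here are made up, and you need a working kubeconfig; the create calls are commented out on purpose):

from kubernetes import client, config

config.load_kube_config()
core_api = client.CoreV1Api()

# A Service with two unnamed ports is rejected by the API server with a 422
# ("spec.ports[0].name: Required value"); names are only optional when there
# is exactly one port.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="illuminatio-dummy"),
    spec=client.V1ServiceSpec(
        selector={"app": "illuminatio-dummy"},
        ports=[
            client.V1ServicePort(port=8080),
            client.V1ServicePort(port=9090),
        ],
    ),
)
# core_api.create_namespaced_service(namespace="default", body=svc)  # ApiException (422)

# Naming every port makes the same Service valid:
svc.spec.ports[0].name = "http"
svc.spec.ports[1].name = "metrics"
# core_api.create_namespaced_service(namespace="default", body=svc)  # succeeds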
I will try to verify this assumption later. If it is correct, this issue will be fixed by #134 as well.
I was not able to reproduce this issue using the "newest" version (the version from PR #134), but I had to execute some steps of the demo repo manually, since the old state still used Helm 2. So maybe I missed something, but at least all NetworkPolicies were applied.
Please note that some of the current NetworkPolicy examples in your repo (https://github.com/cloudogu/k8s-security-demos/tree/master/2-network-policies) still result in a failure for an as-yet-unknown reason. That cause is probably not related to this issue or to the named-port issue.
I will create a follow-up issue to address this and further improve illuminatio using your examples!
Thanks for investigating this. I will have a closer look at all NetPols in the demo repo the next time I'm on it (which will probably be in March), try again with the latest version of illuminatio, and file a new issue if need be :wink:
OK, I fixed a little something in my demos and can now provide more accurate steps on how to reproduce this error.
- Clone cloudogu/k8s-security-demos@47ba277
- Set up a demo cluster using Terraform as described
- Make sure to switch the kube-context to the test cluster :wink:
cd 2-network-policies
./apply.sh
kubectl apply -f network-policies/1-ingress-production-deny-all.yaml
kubectl apply -f network-policies/2-ingress-production-allow-traefik-nosqlclient.yaml
kubectl apply -f network-policies/3-ingress-production-allow-nosqlclient-mongo.yaml
kubectl apply -f network-policies/4-ingress-production-allow-prometheus-mongodb.yaml
kubectl apply -f network-policies/5-ingress-kube-system.yaml
kubectl apply -f network-policies/6-ingress-monitoring.yaml
kubectl apply -f network-policies/7-egress-default-and-production-namespace.yaml
# File 8 needs "templating"; see README.md
ACTUAL_API_SERVER_ADDRESS=$(kubectl get endpoints --namespace default kubernetes --template="{{range .subsets}}{{range .addresses}}{{.ip}}{{end}}{{end}}")
cat network-policies/8-egress-other-namespaces.yaml \
| sed "s|APISERVER|${ACTUAL_API_SERVER_ADDRESS}/32|" \
| kubectl apply -f -
kubectl apply -f network-policies/9-egress-specific-ips-example.yaml
# The latest version built on 2021-03-08
docker run -ti -v ~/.kube/config:/kubeconfig:ro inovex/illuminatio@sha256:168eabe393f0ae114e4d58d8deee7ce69a0726dd894b91211e47e2b07501bf00 illuminatio clean run
As far as I can see, the result is still the same as in the original posting:
Starting cleaning resources with policies ['on-request', 'always']
Finished cleanUp
Starting test generation and run.
Generated 52 cases in 1.9518 seconds
FROM TO PORT
default:* *:* *
kube-system:* *:* *
kube-system:k8s-app=kube-dns *:* 53
kube-system:k8s-app=kube-dns *:* 53
monitoring:* *:* *
production:* *:* *
default:* default:* -*
*:* kube-system:* -24078
*:* kube-system:k8s-app=kube-dns -845
*:* kube-system:app=traefik -21018
*:* monitoring:* -36768
*:* monitoring:app=prometheus,component=server -30450
illuminatio-inverted-namespace=monitoring:app=prometheus,component=server monitoring:app.kubernetes.io/name=kube-state-metrics -8080
illuminatio-inverted-namespace=monitoring:illuminatio-inverted-app=prometheus,illuminatio-inverted-component=server monitoring:app.kubernetes.io/name=kube-state-metrics -8080
namespace=monitoring:illuminatio-inverted-app=prometheus,illuminatio-inverted-component=server monitoring:app.kubernetes.io/name=kube-state-metrics -8080
*:* monitoring:app=prometheus,component=node-exporter -10097
*:* monitoring:app=prometheus,component=alertmanager -46497
*:* monitoring:app=prometheus,component=kube-state-metrics -34036
illuminatio-inverted-production:run=nosqlclient production:* -mongodb
illuminatio-inverted-namespace=kube-system:illuminatio-inverted-app=traefik production:* -3000
illuminatio-inverted-production:illuminatio-inverted-run=nosqlclient production:* -mongodb
illuminatio-inverted-namespace=monitoring:app=prometheus,component=server production:* -metrics
namespace=kube-system:illuminatio-inverted-app=traefik production:* -3000
namespace=monitoring:illuminatio-inverted-app=prometheus,illuminatio-inverted-component=server production:* -metrics
production:illuminatio-inverted-run=nosqlclient production:* -mongodb
illuminatio-inverted-namespace=monitoring:illuminatio-inverted-app=prometheus,illuminatio-inverted-component=server production:* -metrics
illuminatio-inverted-namespace=kube-system:app=traefik production:* -3000
production:run=nginx production:run=nginx -*
illuminatio-inverted-production:run=nosqlclient production:app.kubernetes.io/name=mongodb -mongodb
illuminatio-inverted-production:illuminatio-inverted-run=nosqlclient production:app.kubernetes.io/name=mongodb -mongodb
illuminatio-inverted-namespace=monitoring:app=prometheus,component=server production:app.kubernetes.io/name=mongodb -metrics
namespace=monitoring:illuminatio-inverted-app=prometheus,illuminatio-inverted-component=server production:app.kubernetes.io/name=mongodb -metrics
production:illuminatio-inverted-run=nosqlclient production:app.kubernetes.io/name=mongodb -mongodb
illuminatio-inverted-namespace=monitoring:illuminatio-inverted-app=prometheus,illuminatio-inverted-component=server production:app.kubernetes.io/name=mongodb -metrics
namespace=kube-system:illuminatio-inverted-app=traefik production:run=nosqlclient -3000
illuminatio-inverted-namespace=kube-system:illuminatio-inverted-app=traefik production:run=nosqlclient -3000
illuminatio-inverted-namespace=kube-system:app=traefik production:run=nosqlclient -3000
*:* kube-system:k8s-app=kube-dns 53
*:* kube-system:k8s-app=kube-dns 53
namespace=monitoring:app=prometheus,component=server kube-system:app=traefik 9100
*:* kube-system:app=traefik 80
*:* kube-system:app=traefik 443
namespace=monitoring:app=prometheus,component=server monitoring:app.kubernetes.io/name=kube-state-metrics 8080
namespace=monitoring:app=prometheus,component=server monitoring:app=prometheus,component=node-exporter metrics
namespace=kube-system:app=traefik monitoring:app=prometheus,component=server 9090
*:* monitoring:app=prometheus,component=alertmanager 9093
monitoring:app=prometheus,component=server monitoring:app=prometheus,component=alertmanager *
*:* monitoring:app=prometheus,component=kube-state-metrics 8080
monitoring:app=prometheus,component=server monitoring:app=prometheus,component=kube-state-metrics *
production:run=nosqlclient production:app.kubernetes.io/name=mongodb mongodb
namespace=monitoring:app=prometheus,component=server production:app.kubernetes.io/name=mongodb metrics
namespace=kube-system:app=traefik production:run=nosqlclient 3000
Traceback (most recent call last):
File "/usr/local/bin/illuminatio", line 8, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1092, in invoke
rv.append(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/illuminatio/illuminatio.py", line 164, in run
) = execute_tests(cases, orch, cri_socket)
File "/usr/local/lib/python3.8/site-packages/illuminatio/illuminatio.py", line 212, in execute_tests
) = orch.ensure_cases_are_generated(core_api)
File "/usr/local/lib/python3.8/site-packages/illuminatio/test_orchestrator.py", line 422, in ensure_cases_are_generated
) = self._find_or_create_cluster_resources_for_cases(cases_dict, core_api)
File "/usr/local/lib/python3.8/site-packages/illuminatio/test_orchestrator.py", line 358, in _find_or_create_cluster_resources_for_cases
) = self._get_target_names_creating_them_if_missing(target_dict, api)
File "/usr/local/lib/python3.8/site-packages/illuminatio/test_orchestrator.py", line 282, in _get_target_names_creating_them_if_missing
resp = api.create_namespaced_service(namespace=host.namespace, body=svc)
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/apis/core_v1_api.py", line 6934, in create_namespaced_service
(data) = self.create_namespaced_service_with_http_info(namespace, body, **kwargs)
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/apis/core_v1_api.py", line 7012, in create_namespaced_service_with_http_info
return self.api_client.call_api('/api/v1/namespaces/{namespace}/services', 'POST',
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 330, in call_api
return self.__call_api(resource_path, method,
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 163, in __call_api
response_data = self.request(method, url,
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 371, in request
return self.rest_client.POST(url,
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 260, in POST
return self.request("POST", url,
File "/usr/local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (422)
Reason: Unprocessable Entity
HTTP response headers: HTTPHeaderDict({'Audit-Id': '37f5c5ed-0e00-4b3e-bca9-14c703b10647', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Wed, 10 Mar 2021 17:07:23 GMT', 'Content-Length': '491'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Service \"illuminatio-test-targetb2h44\" is invalid: [spec.ports[0].name: Required value, spec.ports[1].name: Required value]","reason":"Invalid","details":{"name":"illuminatio-test-targetb2h44","kind":"Service","causes":[{"reason":"FieldValueRequired","message":"Required value","field":"spec.ports[0].name"},{"reason":"FieldValueRequired","message":"Required value","field":"spec.ports[1].name"}]},"code":422}
Is there any update on this?
I think this can be solved if the Services illuminatio creates contain named ports when they are created.
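A rough sketch of what such a fix could look like (a hypothetical helper, not actual illuminatio code; Kubernetes port names must be lowercase alphanumerics and '-', at most 15 characters):

def ensure_port_names(svc):
    # Hypothetical helper: give every unnamed port a unique name so that
    # multi-port Services pass API-server validation (avoiding the 422 above).
    for idx, port in enumerate(svc.spec.ports):
        if not port.name:
            port.name = "port-{}-{}".format(idx, port.port)  # e.g. "port-0-8080"
    return svc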
Hi @aasyria, I am currently not actively working on this project anymore, but if you want to submit a PR I'll happily review it.