Write an in-memory apiserver
What problem are you trying to solve?
A thing that continually comes up in the context of controller testing is being able to run the reconciler and verify that it does the right thing.
In complex scenarios this is difficult for users to do right now without a semi-functioning apiserver.
We currently recommend using a mock client:
let (mock_service, handle) = tower_test::mock::pair::<Request<Body>, Response<Body>>();
let mock_client = Client::new(mock_service, "default");
and pass that into the reconciler's context where we intercept the api calls and return some reasonable information. See controller-rs's fixtures.rs and controller.rs for test invocations.
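Very roughly, the wiring looks like the sketch below. Note that Context, reconcile, MyDoc and ApiServerVerifier are stand-ins for the user's own test types (as in controller-rs), not kube APIs:

// Hedged sketch: Context, reconcile, MyDoc and ApiServerVerifier are the
// user's own test types (as in controller-rs), not part of kube itself.
#[tokio::test]
async fn reconciler_posts_an_event() {
    let (mock_service, handle) =
        tower_test::mock::pair::<Request<Body>, Response<Body>>();
    // the Client half is handed to the reconciler via its context...
    let ctx = Arc::new(Context { client: Client::new(mock_service, "default") });
    // ...while the Handle half lets the test play apiserver for each call
    let apiserver = tokio::spawn(async move {
        ApiServerVerifier(handle).handle_event_create("SomeReason".into()).await
    });
    reconcile(Arc::new(MyDoc::test()), ctx).await.expect("reconcile succeeded");
    apiserver.await.unwrap().expect("all expected apiserver calls were made");
}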
It is perfectly possible to do this in tests (and we do), e.g. with this particular wrapper around a tower_test::mock::Handle<Request<Body>, Response<Body>> that responds to an Event being POSTed while also checking some properties of that data:
async fn handle_event_create(mut self, reason: String) -> Result<Self> {
    let (request, send) = self.0.next_request().await.expect("service not called");
    assert_eq!(request.method(), http::Method::POST);
    assert_eq!(
        request.uri().to_string(),
        "/apis/events.k8s.io/v1/namespaces/testns/events?"
    );
    // verify the event reason matches the expected one
    let req_body = to_bytes(request.into_body()).await.unwrap();
    let postdata: serde_json::Value =
        serde_json::from_slice(&req_body).expect("valid event from runtime");
    dbg!(&postdata);
    assert_eq!(
        postdata.get("reason").unwrap().as_str().map(String::from),
        Some(reason)
    );
    // then pass the body back through as the response
    send.send_response(Response::builder().body(Body::from(req_body)).unwrap());
    Ok(self)
}
The problem with this approach is that:
- it is verbose (lots of Request/Body/Response/serde_json::Value fiddling)
- it requires user tests to implement expected apiserver behavior
- it mixes apiserver imitation behavior with test assertion logic
Describe the solution you'd like
Create a dumb, in-memory apiserver that does the bare minimum of what the apiserver does, and presents a queryable interface that can give us what is in "its database" through some type downcasting.
This server could treat every object as a DynamicObject storing what it sees in a HashMap<ObjectRef, DynamicObject> as an initial memory backing.
If this were made pluggable into tower_test::mock, users could hook it into tests around a reconciler without the tests failing due to bad apiserver responses, and without having to know all the ins and outs of apiserver mechanics (and crucially without being given the opportunity to get this wrong).
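As a rough, hypothetical sketch of that backing store (InMemoryApiServer and StoreKey are made-up names for illustration; a real implementation would likely key on kube's ObjectRef):

use std::collections::HashMap;
use kube::core::DynamicObject;

/// Simplified stand-in for ObjectRef, to keep the sketch self-contained.
#[derive(Clone, PartialEq, Eq, Hash)]
struct StoreKey {
    api_version: String,
    kind: String,
    namespace: Option<String>,
    name: String,
}

/// The "database": everything is stored type-erased as a DynamicObject.
#[derive(Default)]
struct InMemoryApiServer {
    store: HashMap<StoreKey, DynamicObject>,
}

impl InMemoryApiServer {
    /// What a POST/PUT handler would boil down to.
    fn upsert(&mut self, key: StoreKey, obj: DynamicObject) {
        self.store.insert(key, obj);
    }

    /// Queryable interface for tests: downcast what is in "its database"
    /// back into a typed resource via DynamicObject::try_parse.
    fn get_as<K>(&self, key: &StoreKey) -> Option<K>
    where
        K: kube::Resource + serde::de::DeserializeOwned,
    {
        self.store.get(key).cloned().and_then(|obj| obj.try_parse::<K>().ok())
    }
}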
Implementation
We would, at the very least, need to implement basic functionality around object metadata:
- POSTs need to fill in plausible creationTimestamp, uid, resourceVersion, generation, and populate name from generateName (sketched below)
- prevent clients from overriding read-only values like creationTimestamp / uid / resourceVersion / generation
- respond to queries after storing the query result in the HashMap
- DELETEs need to traverse ownerReferences
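As referenced above, a rough sketch of what the POST path might do to metadata. The admit_create helper is hypothetical, the uuid and chrono crates are assumed as dependencies, and the field handling is illustrative rather than exact apiserver behaviour:

use k8s_openapi::apimachinery::pkg::apis::meta::v1::Time;
use kube::core::DynamicObject;

/// Fill in server-owned metadata on create, ignoring client-provided values.
fn admit_create(mut obj: DynamicObject, next_resource_version: u64) -> DynamicObject {
    let meta = &mut obj.metadata;
    // populate name from generateName if the client did not set one
    // (the real apiserver appends a random suffix; a counter is used here)
    if meta.name.is_none() {
        if let Some(prefix) = &meta.generate_name {
            meta.name = Some(format!("{prefix}{next_resource_version:05}"));
        }
    }
    // read-only fields are always overwritten, so clients cannot set them
    meta.uid = Some(uuid::Uuid::new_v4().to_string());
    meta.creation_timestamp = Some(Time(chrono::Utc::now()));
    meta.resource_version = Some(next_resource_version.to_string());
    meta.generation = Some(1);
    obj
}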
Implementing create/replace/delete/get calls on resources plus most calls on subresources "should not be too difficult" to do in this context and will benefit everyone.
The real problem here would be implementing patch in a sensible way:
- json patches need to actually act on the dynamic object
- apply patches need to follow kubernetes merge rules and actually do what the apiserver does
- merge patches and strategic merge patches (with their patch strategies) need to be followed
Some of this sounds very hard, but it's possible some of it can be cobbled together using existing ecosystem pieces like:
- json patches from the json-patch library (a minimal sketch of this follows below)
- apply patches through k8s_openapi::DeepMerge (but it might force us out of dynamic objects for merge rules)
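For the json patch bullet above, something along these lines might already get quite far. This is only a sketch, assuming the json-patch and anyhow crates, that acts on a stored DynamicObject:

use json_patch::Patch;
use kube::core::DynamicObject;
use serde_json::Value;

/// Apply an RFC 6902 json patch to a stored object by round-tripping it
/// through serde_json::Value, so the patch can also touch metadata.
fn apply_json_patch(obj: &DynamicObject, p: &Patch) -> anyhow::Result<DynamicObject> {
    let mut doc: Value = serde_json::to_value(obj)?;
    json_patch::patch(&mut doc, p)?;
    Ok(serde_json::from_value(doc)?)
}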
Anyway, just thought I would write down my thoughts on this. It feels possible, but certainly a bit of a spare time project. If anyone wants this, or would like to tackle this, let us know. I have too much on my plate for something like this right now, but personally I would love to have something like this if it can be done in a sane way.
Documentation, Adoption, Migration Strategy
Can be integrated into controller-rs plus the controller guide on kube.rs as a start.
NB: Prior discussion around envtest https://github.com/kube-rs/kube/issues/382 went down a separate route of relying on cluster provisioning to do more integration-style tests. This is a heavyweight solution that either re-uses the test environment or sets up one cluster per test, so we have currently left this style of test automation to CI actions provisioning a test cluster.
It is currently pretty easy to do this for a small number of integration tests. We use this approach, and it does not require mock clients, but it does force us to consider how some tests might interact with others (limiting how deeply we can use this approach). This issue is instead trying to turn more complicated integration tests into true unit tests for users. Both styles of test have a place.
cc @chuckhend @sjmiller609
A perhaps more promising, and less labour-intensive, way forward here for integration tests is to lean on https://github.com/kubernetes-sigs/kwok/
kwok still creates some kind of cluster, but it seems to mock out a lot of the more node/pod related behaviour, and as such it could become a slightly more reliable integration method / serve as a de-facto mock server (that's kind of a real apiserver).
We have not done any particular testing of this yet (and it's currently in a super early release), but noting this down. If you are doing any experiments with kwok for kube please share!
We have closed our older kube-test / envtest proposals in the past.
Thanks for the suggestion on using KWOK @clux. It's working really well in my tests, and I spin up multiple clusters of it using testcontainers. If you would like, I could create a PR adding documentation about it.
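Roughly, the setup looks like the sketch below (testcontainers 0.15-style API; the kwok cluster image tag, its insecure apiserver port 8080, and the lack of a readiness wait are assumptions you may need to adjust):

use testcontainers::{clients::Cli, GenericImage};

#[tokio::test]
async fn talks_to_a_kwok_cluster() -> anyhow::Result<()> {
    let docker = Cli::default();
    // all-in-one kwok "cluster" image; the tag here is a placeholder
    let image = GenericImage::new("registry.k8s.io/kwok/cluster", "v0.5.2-k8s.v1.28.0")
        .with_exposed_port(8080);
    let node = docker.run(image);
    let port = node.get_host_port_ipv4(8080);

    // point a plain (insecure) kube Client at the mapped apiserver port;
    // a real test would wait/retry until the apiserver is ready
    let config = kube::Config::new(format!("http://127.0.0.1:{port}").parse()?);
    let client = kube::Client::try_from(config)?;
    let pods = kube::Api::<k8s_openapi::api::core::v1::Pod>::default_namespaced(client);
    pods.list(&Default::default()).await?;
    Ok(())
}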
Another big fan of Kwok + TestContainers here. It works incredibly well for our testing against Kube Client API.
We deleted the prior Kube mocking efforts in favour of Kwok and haven't looked back.
Do any of you have some good links to CI setups using kube + kwok? My main concern with using this over unit tests / mock integration tests is that you don't get the same level of test isolation out of the box. Or is this something that can be fixed trivially elsewhere?
I'm also looking for a way to simplify unit testing. Do you think an interface similar to mockito adapted to kube/k8s would also work? I made a small proof of concept that looks like the following:
let mut pod = Pod::default();
pod.metadata.name = Some("foo".into());
let mut created_pod = pod.clone();
created_pod.metadata.creation_timestamp =
    Some(Time(Utc.with_ymd_and_hms(2014, 7, 8, 9, 10, 11).unwrap()));

// register an expectation: creating `pod` in "default" returns `created_pod`
let mock = MockClient::default();
mock.namespaced::<Pod>("default")
    .create(&PostParams::default(), &pod)
    .respond_with(&created_pod)
    .build();

let client = Client::new(mock, "default");
let api = Api::<Pod>::namespaced(client, "default");
let p = api.create(&PostParams::default(), &pod).await.unwrap();
assert_eq!(created_pod, p);
The MockClient also checks that the method, uri, etc. of the request match the expectation. Writing tests would still be a little verbose, but it gives users full control over how the mocked apiserver behaves and is less complicated than mimicking the apiserver.
I'm happy to work on this and open a PR, if you think that this makes sense.
@clux is this issue still a thing/relevant? (Context: I am going through issues to see whether I can pick up something and help out.)