e2e-framework
Constructor packages for building Kubernetes API workload objects
Constructing a graph of Kubernetes objects requires the tedious step of specifying each entry in the graph. Often this forces the code writer to look up how to assemble the pieces so that the graph can be constructed properly.
For instance, the following code snippet shows how to programmatically create an object graph for a simple Deployment. As you can see, it takes many entries to build the complex object structure that satisfies the object graph for a Deployment.
Example pulled from here.
// podLabels, replicas, zero, containerName, and image are defined
// elsewhere in the source file.
func NewDeployment() *appsv1.Deployment {
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "my-deployment",
			Labels: podLabels,
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: podLabels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RecreateDeploymentStrategyType,
			},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: podLabels,
				},
				Spec: v1.PodSpec{
					TerminationGracePeriodSeconds: &zero,
					Containers: []v1.Container{
						{
							Name:            containerName,
							Image:           image,
							SecurityContext: &v1.SecurityContext{},
						},
					},
				},
			},
		},
	}
}
As a test writer, it would be helpful and convenient if an e2e-framework package existed that made it easy to create objects without any knowledge of the graph structure (as is required in the above example). For instance, given a set of constructor packages, such as constructor/deployment, constructor/pod, constructor/meta, etc., the previous object graph could be created as follows:
import (
	"sigs.k8s.io/e2e-framework/klient/constructors/container"
	"sigs.k8s.io/e2e-framework/klient/constructors/deployment"
	"sigs.k8s.io/e2e-framework/klient/constructors/meta"
	"sigs.k8s.io/e2e-framework/klient/constructors/pod"
)

func NewDeployment() appsv1.Deployment {
	return deployment.Deployment(
		meta.Object("test-deployment").Namespace(meta.DefaultNamespace),
		deployment.Replicas(2),
		meta.MatchLabels(map[string]string{"server-type": "web"}),
		deployment.StrategyDefault,
		pod.Template(meta.ObjectMetaNone, pod.Spec(container.Name("server").Image("nginx").Commands("/start"))),
	)
}
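For illustration, here is a minimal, self-contained sketch of how such constructor functions could be implemented with the functional-options pattern. All names here are hypothetical, and a simplified stand-in struct is used in place of the real Kubernetes Deployment type; the actual proposal would target the API types directly.

```go
package main

// Deployment is a simplified stand-in for appsv1.Deployment,
// used only to illustrate the constructor pattern.
type Deployment struct {
	Name     string
	Replicas int32
	Labels   map[string]string
}

// Option mutates a Deployment under construction.
type Option func(*Deployment)

// New assembles a Deployment by applying each option in order,
// starting from sensible defaults.
func New(name string, opts ...Option) *Deployment {
	d := &Deployment{Name: name, Replicas: 1}
	for _, opt := range opts {
		opt(d)
	}
	return d
}

// Replicas sets the desired replica count.
func Replicas(n int32) Option {
	return func(d *Deployment) { d.Replicas = n }
}

// MatchLabels sets the selector/template labels.
func MatchLabels(labels map[string]string) Option {
	return func(d *Deployment) { d.Labels = labels }
}
```

A caller would then write `New("test-deployment", Replicas(2), MatchLabels(...))`, mirroring the shape of the proposed `deployment.Deployment(...)` call while keeping defaults for every field the test does not care about.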
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The examples you give are deceptive, since you can write structs on a single line and span function calls across multiple lines. The top example also specifies more information and is more flexible. It's impossible to reduce the amount of information by moving to a "constructor" model, since the information density doesn't change; you just get functions instead of structs. Either you have a function with 5-10 parameters, where the reader can't tell what any positional argument is, or you have a function for each field, which takes the same number of characters as just specifying the struct field.
This problem can't be solved generically, since you can't compress the information further. What people usually do in tests is create a builder for their common use cases, which is also a struct.
deployment, statefulSet := FooTestResources{
	NamePrefix: "foo",
	Namespace:  "default",
}.Build()
This works because in this specific context you know what parts don't need to be tuned or how the different fields relate to each other in your use case.
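Filled out as a self-contained sketch (all names hypothetical, with simplified stand-ins for the Kubernetes API types), the builder approach above might look like this:

```go
package main

import "fmt"

// Deployment and StatefulSet are simplified stand-ins for the
// corresponding Kubernetes API types.
type Deployment struct{ Name, Namespace string }
type StatefulSet struct{ Name, Namespace string }

// FooTestResources captures the few knobs this test suite cares
// about; everything else is fixed inside Build.
type FooTestResources struct {
	NamePrefix string
	Namespace  string
}

// Build expands the small set of inputs into fully formed objects,
// encoding the naming conventions the tests rely on.
func (r FooTestResources) Build() (Deployment, StatefulSet) {
	return Deployment{Name: fmt.Sprintf("%s-deploy", r.NamePrefix), Namespace: r.Namespace},
		StatefulSet{Name: fmt.Sprintf("%s-sts", r.NamePrefix), Namespace: r.Namespace}
}
```

The builder compresses well here precisely because it is use-case specific: the two inputs determine everything else.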
@ficoos thank you for the thoughtful reply and feedback.
> You just get functions instead of structs.
That is the idea: use function calls to reduce struct verbosity.
> A function that has 5-10 parameters and the reader can't have any idea what any positional argument is.
This would be a trade-off of space/density for actually reading the function signature to figure out what goes where.
> This problem can't be solved generically since you can't compress the information further.
This won't solve all cases, for sure. But there are a great many instances (especially when your struct graph is large) where this approach can help.
Again, thank you for the feedback.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Thank you @ShwethaKumbla, I still want to work on this (soon, I hope).
Closing this for now until I can think of a better (code-generated) way of doing this instead of doing it manually.