cluster-api-provider-vsphere
Improving test coverage in pkg/util
/kind bug
This issue tracks the functions in pkg/util that still need unit tests. Current per-function coverage:
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/kubeclient.go:33: NewKubeClient 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/machines.go:40: GetMachinesInCluster 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/machines.go:65: GetVSphereMachinesInCluster 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/machines.go:88: GetVSphereMachine 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/machines.go:104: GetVSphereClusterFromVSphereMachine 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/machines.go:124: GetMachinePreferredIPAddress 84.6%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/machines.go:150: IsControlPlaneMachine 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/machines.go:157: GetMachineMetadata 87.5%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/machines.go:223: GetOwnerVSphereMachine 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/machines.go:236: getVSphereMachineByName 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/machines.go:263: ConvertProviderIDToUUID 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/machines.go:278: ConvertUUIDToProviderID 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/machines.go:291: MachinesAsString 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/networkutil.go:44: GetNamespaceNetSnatIP 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/networkutil.go:68: GetNCPVersion 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/networkutil.go:84: NCPSupportFW 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/supervisor.go:27: IsSupervisorType 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/testutil.go:49: CreateCluster 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/testutil.go:68: CreateVSphereCluster 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/testutil.go:80: CreateMachine 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/testutil.go:110: CreateVSphereMachine 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/testutil.go:131: createScheme 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/testutil.go:143: CreateClusterContext 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/testutil.go:167: CreateMachineContext 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/util/testutil.go:184: GetBootstrapConfigMapName 0.0%
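The listing above is the output of Go's built-in coverage tooling; assuming a local checkout of the repository, a report in this shape can be regenerated with:

```shell
# Run the pkg/util tests and record a coverage profile.
go test ./pkg/util/... -coverprofile=cover.out

# Print per-function coverage percentages (the format used in the listing above).
go tool cover -func=cover.out
```

`go tool cover -html=cover.out` additionally renders an annotated source view, which helps spot the specific untested branches.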
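Most of the 0% functions are small predicates and lookups that lend themselves to table-driven tests. As a rough sketch of that pattern (the helper below is a hypothetical stand-in, not the real util.IsControlPlaneMachine, which operates on Cluster API Machine objects; the label key mirrors the CAPI convention but is assumed here for illustration):

```go
package main

import "fmt"

// isControlPlaneMachine is a hypothetical stand-in for the real
// util.IsControlPlaneMachine: it reports whether a machine's labels
// mark it as a control-plane member.
func isControlPlaneMachine(labels map[string]string) bool {
	_, ok := labels["cluster.x-k8s.io/control-plane"]
	return ok
}

func main() {
	// Table-driven cases, the shape the missing tests could follow.
	cases := []struct {
		name   string
		labels map[string]string
		want   bool
	}{
		{"control-plane label set", map[string]string{"cluster.x-k8s.io/control-plane": ""}, true},
		{"worker machine", map[string]string{"node-role": "worker"}, false},
		{"nil labels", nil, false},
	}
	for _, c := range cases {
		got := isControlPlaneMachine(c.labels)
		fmt.Printf("%s: got=%v want=%v\n", c.name, got, c.want)
	}
}
```

For the functions that take a Kubernetes client (GetMachinesInCluster, GetVSphereMachine, and friends), the same table-driven shape applies, with the client replaced by a controller-runtime fake client seeded per test case.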
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.