cluster-api-provider-vsphere
Improving test coverage in pkg/services
/kind bug
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/cluster/rule.go:35: IsMandatory 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/cluster/rule.go:39: Disabled 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/cluster/rule.go:46: negate 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/cluster/rule.go:50: VerifyAffinityRule 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/cluster/rule.go:67: listRules 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/cluster/service.go:34: ListHostsFromGroup 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/cluster/vmgroup.go:27: FindVMGroup 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/cluster/vmgroup.go:54: Add 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/cluster/vmgroup.go:72: HasVM 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/cluster/vmgroup.go:83: listVMs 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/context.go:34: String 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/create.go:25: createVM 66.7%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/errors.go:31: Error 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/errors.go:38: isNotFound 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/errors.go:47: isFolderNotFound 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/errors.go:56: isVirtualMachineNotFound 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/errors.go:65: wasNotFoundByBIOSUUID 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/net/net.go:50: GetNetworkStatus 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/net/net.go:97: ErrOnLocalOnlyIPAddr 81.8%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/service.go:53: ReconcileVM 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/service.go:153: DestroyVM 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/service.go:226: reconcileNetworkStatus 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/service.go:235: reconcileMetadata 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/service.go:262: reconcilePowerState 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/service.go:298: reconcileStoragePolicy 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/service.go:374: reconcileUUID 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/service.go:378: getPowerState 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/service.go:396: getMetadata 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/service.go:440: setMetadata 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/service.go:456: getNetworkStatus 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/service.go:474: getBootstrapData 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/service.go:497: reconcileVMGroupInfo 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/service.go:526: reconcileTags 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/util.go:39: sanitizeIPAddrs 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/util.go:60: findVM 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/util.go:101: getTask 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/util.go:116: reconcileInFlightTask 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/util.go:173: reconcileVSphereVMWhenNetworkIsReady 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/util.go:223: reconcileVSphereVMOnTaskCompletion 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/util.go:267: reconcileVSphereVMOnFuncCompletion 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/util.go:290: reconcileVSphereVMOnChannel 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/util.go:332: waitForMacAddresses 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/util.go:359: getMacAddresses 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/util.go:389: waitForIPAddresses 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/vcenter/clone.go:44: Clone 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/vcenter/clone.go:266: newVMFlagInfo 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/vcenter/clone.go:273: getDiskLocators 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/vcenter/clone.go:291: getDiskSpec 95.2%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/vcenter/clone.go:329: getDiskConfigSpec 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/vcenter/clone.go:345: getNetworkSpecs 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:56: DummyNetworkProvider 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:60: HasLoadBalancer 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:64: ProvisionClusterNetwork 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:68: GetClusterNetworkName 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:72: ConfigureVirtualMachine 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:76: GetVMServiceAnnotations 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:80: VerifyNetworkStatus 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:89: DummyLBNetworkProvider 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:93: HasLoadBalancer 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:101: NetOpNetworkProvider 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:107: HasLoadBalancer 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:111: ProvisionClusterNetwork 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:116: getDefaultClusterNetwork 66.7%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:135: getClusterNetwork 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:140: GetClusterNetworkName 75.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:149: GetVMServiceAnnotations 75.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:158: ConfigureVirtualMachine 87.5%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:179: VerifyNetworkStatus 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:197: NsxtNetworkProvider 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:204: HasLoadBalancer 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:209: GetNSXTVirtualNetworkName 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:213: verifyNSXTVirtualNetworkStatus 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:228: VerifyNetworkStatus 75.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:237: ProvisionClusterNetwork 72.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:296: GetClusterNetworkName 83.3%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:309: GetVMServiceAnnotations 75.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/network/network.go:319: ConfigureVirtualMachine 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vimmachine.go:45: FetchVSphereMachine 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vimmachine.go:52: FetchVSphereCluster 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vimmachine.go:68: ReconcileDelete 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vimmachine.go:94: SyncFailureReason 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vimmachine.go:113: ReconcileNormal 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vimmachine.go:176: findVMPre7 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vimmachine.go:190: waitReadyState 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vimmachine.go:218: reconcileProviderID 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vimmachine.go:261: reconcileNetwork 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vimmachine.go:310: createOrUpdateVSPhereVM 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vimmachine.go:395: generateOverrideFunc 91.3%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vimmachine.go:437: overrideNetworkDeviceSpecs 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/control_plane_endpoint.go:50: ReconcileControlPlaneEndpointService 71.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/control_plane_endpoint.go:101: controlPlaneVMServiceName 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/control_plane_endpoint.go:107: clusterRoleVMLabels 80.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/control_plane_endpoint.go:119: newVirtualMachineService 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/control_plane_endpoint.go:132: createVMControlPlaneService 90.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/control_plane_endpoint.go:170: getVMControlPlaneService 87.5%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/control_plane_endpoint.go:188: getVMServiceVIP 83.3%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/control_plane_endpoint.go:206: getAPIEndpointFromVIP 88.9%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/resource_policy.go:37: ReconcileResourcePolicy 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/resource_policy.go:52: newVirtualMachineSetResourcePolicy 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/resource_policy.go:61: getVirtualMachineSetResourcePolicy 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/resource_policy.go:71: createVirtualMachineSetResourcePolicy 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:51: FetchVSphereMachine 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:58: FetchVSphereCluster 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:82: ReconcileDelete 51.6%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:137: SyncFailureReason 0.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:146: ReconcileNormal 84.3%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:253: newVMOperatorVM 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:266: reconcileVMOperatorVM 85.2%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:345: newBootstrapDataConfigMap 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:354: reconcileBootstrapDataConfigMap 83.3%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:390: getGuestInfoMetadata 66.7%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:415: reconcileNetwork 75.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:425: reconcileProviderID 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:440: getVirtualMachinesInCluster 87.5%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:463: addResourcePolicyAnnotations 77.8%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:480: volumeName 100.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:485: addVolume 80.0%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:504: addVolumes 92.9%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:546: getVMLabels 88.9%
sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/vmoperator/vmopmachine.go:573: getTopologyLabels 66.7%
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/lifecycle rotten
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".