cluster-api-provider-vsphere

Investigate why the descriptions of all the tests are not logged

geetikabatra opened this issue on Mar 28 '22 • 8 comments

/kind bug

Currently, there is a lot of unevenness in our test logging: some unit tests log descriptions of what they ran, while others pass completely unnoticed.

Please refer to the following output:
=== RUN   TestIdentity
Running Suite: Identity Suite
=============================
Random Seed: 1648482245
Will run 9 of 9 specs.

•••••••••
--- PASS: TestIdentity (9.39s)
PASS
ok  	sigs.k8s.io/cluster-api-provider-vsphere/pkg/identity	12.629s
=== RUN   TestOptions_GetCredentials
=== RUN   TestOptions_GetCredentials/username_&_password_with_no_special_characters
=== RUN   TestOptions_GetCredentials/username_with_UPN_
--- PASS: TestOptions_GetCredentials (0.00s)
    --- PASS: TestOptions_GetCredentials/username_&_password_with_no_special_characters (0.00s)
    --- PASS: TestOptions_GetCredentials/username_with_UPN_ (0.00s)
PASS
ok  	sigs.k8s.io/cluster-api-provider-vsphere/pkg/manager	1.333s

This makes it ambiguous which output belongs to the pkg/identity tests. The first example only reports that it ran nine specs, with no description of what any of them checked. The second example is more descriptive, but the credentials test it logs is not exactly a pkg/identity test; it checks the functionality of pkg/manager.
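
For context, the sparse output in the first example is typical of how a Ginkgo suite hooks into go test: the entire suite is driven by a single Test function, so the go tooling only ever sees one test. A minimal sketch of the usual bootstrap (Ginkgo v1 style; a hypothetical file, the real pkg/identity suite may differ):

package identity_test

import (
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// The go tooling sees only this single function, so `go test -v`
// prints one "=== RUN   TestIdentity" line. The nine specs are run
// and reported by Ginkgo itself, which by default prints a bare "•"
// per passing spec instead of its description.
func TestIdentity(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Identity Suite")
}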

What steps did you take and what happened:

The above can be observed in the logs after running make test.

What did you expect to happen:

Anything else you would like to add:

Further investigation is required to understand why this behaviour happens and to take measures to rectify it. This is a follow-up to a discussion that took place during the CAPV office hours.

Environment:

  • Cluster-api-provider-vsphere version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

geetikabatra commented on Mar 28 '22

/assign @ditsuke

geetikabatra commented on Mar 28 '22

@geetikabatra: GitHub didn't allow me to assign the following users: ditsuke.

Note that only kubernetes-sigs members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide

In response to this:

/assign @ditsuke

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot commented on Mar 28 '22

@ditsuke Can you comment on this one so that the bot can assign it to you?

geetikabatra commented on Mar 28 '22

/assign

ditsuke commented on Mar 28 '22

@geetikabatra After some investigation, it appears these inconsistencies are due to our mixed usage of vanilla Go testing and Ginkgo suites. As an example:

  • go test -v ./... on my PR for the metadata package (uses ginkgo):

    === RUN   TestMetadata
    Running Suite: Metadata Suite
    =============================
    Random Seed: 1648739658
    Will run 4 of 4 specs

    2022/03/31 20:44:20 tag not found: testTag
    ••••
    Ran 4 of 4 Specs in 1.898 seconds
    SUCCESS! -- 4 Passed | 0 Failed | 0 Pending | 0 Skipped
    --- PASS: TestMetadata (1.90s)
    PASS
    ok      sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/metadata  2.020s
    
  • go test -v ./... on pkg/services/govmomi/net (vanilla Go testing):

    === RUN   TestErrOnLocalOnlyIPAddr
    === RUN   TestErrOnLocalOnlyIPAddr/valid-ipv4
    === RUN   TestErrOnLocalOnlyIPAddr/valid-ipv6
    === RUN   TestErrOnLocalOnlyIPAddr/localhost
        net_test.go:71: failed to validate ip addr=127.0.0.1: loopback
    === RUN   TestErrOnLocalOnlyIPAddr/link-local-unicast-ipv4
        net_test.go:71: failed to validate ip addr=169.254.2.3: link-local-unicast
    === RUN   TestErrOnLocalOnlyIPAddr/link-local-unicast-ipv6
        net_test.go:71: failed to validate ip addr=fe80::250:56ff:feb0:345d: link-local-unicast
    === RUN   TestErrOnLocalOnlyIPAddr/link-local-multicast-ipv4
        net_test.go:71: failed to validate ip addr=224.0.0.252: link-local-mutlicast
    === RUN   TestErrOnLocalOnlyIPAddr/link-local-multicast-ipv6
        net_test.go:71: failed to validate ip addr=FF02:0:0:0:0:0:1:3: link-local-mutlicast
    --- PASS: TestErrOnLocalOnlyIPAddr (0.00s)
        --- PASS: TestErrOnLocalOnlyIPAddr/valid-ipv4 (0.00s)
        --- PASS: TestErrOnLocalOnlyIPAddr/valid-ipv6 (0.00s)
        --- PASS: TestErrOnLocalOnlyIPAddr/localhost (0.00s)
        --- PASS: TestErrOnLocalOnlyIPAddr/link-local-unicast-ipv4 (0.00s)
        --- PASS: TestErrOnLocalOnlyIPAddr/link-local-unicast-ipv6 (0.00s)
        --- PASS: TestErrOnLocalOnlyIPAddr/link-local-multicast-ipv4 (0.00s)
        --- PASS: TestErrOnLocalOnlyIPAddr/link-local-multicast-ipv6 (0.00s)
    PASS
    ok      sigs.k8s.io/cluster-api-provider-vsphere/pkg/services/govmomi/net       0.061s
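
For reference, the per-case lines in the second example come straight from the standard table-driven pattern, where each case is registered as a named subtest via t.Run. A minimal sketch (hypothetical names, not the actual net tests):

package net_test

import "testing"

// Each t.Run call registers a named subtest, so `go test -v` emits a
// separate "=== RUN" and "--- PASS" line per case, plus any t.Logf
// output, with no extra tooling.
func TestTableDrivenSketch(t *testing.T) {
	cases := []struct {
		name string
		ip   string
	}{
		{name: "valid-ipv4", ip: "1.2.3.4"},
		{name: "localhost", ip: "127.0.0.1"},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			// The real test would call the validation function here.
			t.Logf("validating ip addr=%s", tc.ip)
		})
	}
}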
    

ditsuke commented on Mar 31 '22

I believe we should standardize on just one of the two. This topic requires a broader discussion. From the output above, I am more in favour of vanilla Go testing, since it provides more detailed output. @ditsuke Could you also investigate how we can get more information out of Ginkgo? Try looking into the other providers as well as core Cluster API.
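
(One lever worth checking on the Ginkgo side: Ginkgo v1 registers its own flags on the compiled test binary, so spec-by-spec descriptions can be requested at run time without rewriting any suites. A sketch of the invocations, assuming a single package:)

# Forward -ginkgo.v to the test binary via -args:
go test -v ./pkg/identity -args -ginkgo.v

# Or run the suite through the ginkgo CLI directly:
ginkgo -v ./pkg/identity

With -ginkgo.v (or ginkgo -v), each spec's description is printed as it runs instead of a bare "•".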

geetikabatra commented on Apr 6 '22

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented on Jul 5 '22

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented on Aug 4 '22

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot commented on Sep 3 '22

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot commented on Sep 3 '22