
We understand the testing currently performed in the cyberark/conjur-cli repo

Open · andytinkham opened this issue on Jul 19, 2019 · 0 comments

Get a baseline understanding of what currently exists for testing code in the cyberark/conjur-cli repo, to serve as a launchpad for identifying and subsequently closing gaps in our test coverage and increasing our quality.

Areas of interest include:

  • [ ] What is the current state of the existing tests? (Are they all passing? Are tests being skipped or commented out? Are any errors displayed during test execution?)
  • [ ] How frequently are the tests run? Are they automatically triggered? Are all the tests run with the same frequency, or are there different groupings of tests run on different schedules?
  • [ ] What is the code coverage of the existing tests, and in particular, which areas currently have little or no coverage? (See the coverage sketch after this list.)
  • [ ] Does the documentation in Testing Swarm for this repo need to be updated with new information?
  • [ ] Do the tests that exist actually test what they claim to test (as opposed to testing that mock object libraries function correctly)?
  • [ ] Can each test actually fail? What circumstances would cause each test to fail, and what conditions are needed to trigger them? In what situations would a test encounter an unexpected error, and can those situations actually occur?
  • [ ] What test scenarios are missing?
  • [ ] Look at historic bug issues in the repo. For each bug, is there at least one test that would fail if the bug reoccurred? (See the regression-spec sketch after this list.)
  • [ ] Are any components of the classes reusable elsewhere in other tests (maybe even in other repos)? With some work (refactoring, extracting logic, etc.), could more components be made reusable? If so, which?
  • [ ] Are there examples of particularly good test design or coding? Capture each in our testing patterns section of the conjurinc/docs repo
  • [ ] Does static analysis (linting, Code Climate, etc.) currently occur on the code base? How frequently? Are there currently issues raised by the tool(s)? Capture those issues. (See the lint-task sketch after this list.)
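
As a starting point for the coverage item, here is a minimal sketch of how coverage could be measured, assuming the suite is Ruby/RSpec-based and that SimpleCov is an acceptable tool; the filter paths and threshold are illustrative, not taken from the repo:

```ruby
# spec/spec_helper.rb -- SimpleCov must start before application code loads
require 'simplecov'

SimpleCov.start do
  add_filter '/spec/'     # exclude the tests themselves from the report
  add_filter '/features/' # exclude Cucumber features, if present

  # Fail the run if overall coverage drops below a chosen floor
  # (80 is an arbitrary example, not a project standard).
  minimum_coverage 80
end
```

Running `bundle exec rspec` then writes an HTML report under `coverage/`, which makes the untested areas easy to enumerate.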
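
For the historic-bugs item, one possible convention (a sketch only; the issue number, helper, and behavior are all hypothetical) is a dedicated regression spec per bug, named and tagged so the link back to the original issue stays obvious:

```ruby
# spec/regression/issue_1234_spec.rb -- hypothetical issue and behavior
require 'spec_helper'

describe 'regression: issue #1234', issue: 1234 do
  it 'does not crash when the config file is empty' do
    # load_config is a stand-in for whatever code path the original bug
    # report exercised; the assertion should fail if the bug reoccurs.
    expect { load_config('') }.not_to raise_error
  end
end
```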
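
For the static-analysis item, if RuboCop were the chosen linter (an assumption; the repo may use a different tool or none yet), one way to wire it into the build is a Rake task; the task name and patterns below are illustrative:

```ruby
# Rakefile -- adds a `lint` task, assuming rubocop is in the Gemfile
require 'rubocop/rake_task'

RuboCop::RakeTask.new(:lint) do |task|
  task.patterns = ['lib/**/*.rb'] # lint only library code in this sketch
  task.fail_on_error = true       # make CI fail on any open offense
end
```

Running `bundle exec rake lint` locally or in CI then surfaces the open offenses, which can be captured for the list above.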

Estimated Time: 2 weeks
