
Remove unnecessary tests

Open chanced opened this issue 1 year ago • 2 comments

Remove useless tests.

chanced avatar Jul 09 '24 00:07 chanced

Rather than remove them, another option is to change them so they are more useful. I get why high coverage is appealing: it has good marketing value, and open-source projects subsist on popularity. So rather than reduce coverage, we can increase test quality to achieve the same effect. Some working principles to keep in mind:

  • We shouldn't enforce aspects of the current implementation that have no impact on users
    • This makes evolution harder: contributors become afraid to make changes that invalidate tests, and if tests need updating all the time, how do we know when a failure is an actual regression? An example is checking that source() returns a specific value. The Error trait very deliberately defines the return type as Option<&(dyn Error + 'static)>, whose semantics are that no promises are made about the source of the error, only that if one exists it also implements the Error trait, and that although it is borrowed, the reference can be held for as long as you'd like. By checking for a specific value we constrain the implementation beyond what its contract promises.
  • Tests should be targeted
    • Some tests may seem to test for one aspect of the behavior when they actually test another. A good example:
     #[test]
     fn into_owned() {
         let token = Token::from_encoded("foo~0").unwrap().into_owned();
         assert_eq!(token.encoded(), "foo~0");
     }
    
    While this checks an important property of Token, it doesn't check the defining effect of calling into_owned, which is the change in the lifetime. This is also connected to the next point, as it gives us a false sense of security.
  • We should strive for tests to look for bugs that have a realistic chance of happening (i.e. tests should be high-signal)
    • This is mostly to avoid gaining a false sense of security. There is an implicit contract of trust: if a test exists (any test) for a piece of logic, and the test passes, we expect the logic to be correct. That means changes receive less scrutiny during review, which makes regressions more likely to creep in. I don't think there's any super bad example of this currently, but one that maybe falls under the category of "how likely is this to catch any bugs, really?" is:
          let err = InvalidEncodingError { offset: 3 };
          assert_eq!(err.offset(), 3);
    
    Setting aside the fact that maybe we shouldn't implement that method since the field is public anyway (and changing that is a breaking change), its implementation is so trivial we're very unlikely to screw it up so badly this test would catch it. A more interesting test would perhaps be a prop-test (using quickcheck or some other method): after all, what's so special about 3?
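To make the first point concrete, here is a sketch of a test that asserts only what the Error contract promises (that a source exists and implements Error), rather than pinning a specific value. The ParseError type is hypothetical, standing in for one of jsonptr's error types:

```rust
use std::error::Error;
use std::fmt;

// Hypothetical error type standing in for one of jsonptr's errors.
#[derive(Debug)]
struct ParseError {
    source: std::num::ParseIntError,
}

impl fmt::Display for ParseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "failed to parse pointer segment")
    }
}

impl Error for ParseError {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        Some(&self.source)
    }
}

fn has_source(err: &dyn Error) -> bool {
    // Assert only the contract: a source exists, no claim about which value it is.
    err.source().is_some()
}

fn main() {
    let inner = "abc".parse::<u32>().unwrap_err();
    let err = ParseError { source: inner };
    assert!(has_source(&err));
}
```

A test written this way keeps passing even if the error's internal structure changes, as long as the documented contract still holds.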
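For contrast with the into_owned example above, here is a sketch of a test that does exercise the lifetime change. The Token here is a minimal stand-in assumed to wrap a Cow<str>, not jsonptr's actual implementation:

```rust
use std::borrow::Cow;

// Minimal stand-in for jsonptr's Token<'a>; assumed for illustration only.
struct Token<'a> {
    inner: Cow<'a, str>,
}

impl<'a> Token<'a> {
    fn from_encoded(s: &'a str) -> Token<'a> {
        Token { inner: Cow::Borrowed(s) }
    }

    fn encoded(&self) -> &str {
        &self.inner
    }

    // The defining effect: the returned token no longer borrows its input.
    fn into_owned(self) -> Token<'static> {
        Token { inner: Cow::Owned(self.inner.into_owned()) }
    }
}

fn make_owned_token() -> Token<'static> {
    // The backing string lives only inside this function...
    let encoded = String::from("foo~0");
    Token::from_encoded(&encoded).into_owned()
    // ...so this only compiles if into_owned truly severs the borrow.
}

fn main() {
    let token = make_owned_token();
    assert_eq!(token.encoded(), "foo~0");
}
```

Here the borrow checker itself verifies the property under test: if into_owned stopped returning a Token<'static>, make_owned_token would fail to compile.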

asmello avatar Jul 09 '24 18:07 asmello

Yea, I really need to clean these up.

chanced avatar Oct 21 '24 17:10 chanced