Per-field `GenericPackageDescription` golden test
We are making changes to the internal representation of `GenericPackageDescription` in #11277 in order to retain common stanzas. The current approach retains them by adding a `Map ImportName (CondTree ConfVar [Dependency] BuildInfo)` field to `GenericPackageDescription` and merging on demand via pattern synonym accessors.
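The merge-on-demand accessor can be sketched roughly like this. Note this is an illustrative simplification, not the actual #11277 code: the type names below are stand-ins for the real Cabal types (`ImportName`, `CondTree ConfVar [Dependency] BuildInfo`, and so on), and `mergedLibrary` is a hypothetical helper.

```haskell
{-# LANGUAGE PatternSynonyms, ViewPatterns #-}

import Data.Map (Map)
import qualified Data.Map as Map

-- Simplified stand-ins for the real Cabal types.
type ImportName = String
type Stanza     = [String]  -- stand-in for CondTree ConfVar [Dependency] BuildInfo

data GPD = MkGPD
  { gpdCommonStanzas :: Map ImportName Stanza  -- new field retaining common stanzas
  , gpdRawLibrary    :: Maybe Stanza           -- stored without the imports merged in
  }

-- Merge the imported common stanzas into the library only when inspected.
mergedLibrary :: GPD -> Maybe Stanza
mergedLibrary gpd =
  fmap (concat (Map.elems (gpdCommonStanzas gpd)) ++) (gpdRawLibrary gpd)

-- A unidirectional pattern synonym acting as an accessor: matching on
-- 'CondLibrary lib' yields the merged view, while the raw representation
-- (and hence the common stanzas) is retained underneath.
pattern CondLibrary :: Maybe Stanza -> GPD
pattern CondLibrary lib <- (mergedLibrary -> lib)
```

The point of the view-pattern synonym is that existing call sites keep seeing a fully merged `GenericPackageDescription`, while the unmerged common stanzas stay available internally.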
The current golden test suite doesn't handle this change well — we have a single golden file per `GenericPackageDescription` value. Adding fields to this data type creates a huge amount of diff noise, making bugs harder to spot for both me and the reviewers.
To get out of this diff hell, we have split the golden tests of `GenericPackageDescription` to write each `GenericPackageDescription` field into its own file. This way, we can quickly identify which behaviour has changed and for which file. New fields will simply be in new files.
This is a separate PR because the diff is very big, which is caused by the changes to the golden test files. After merging this, #11277's diff would drop from roughly 24kloc to 1.8kloc. This would make the actual behaviour changes more reviewable.
This PR does not modify behaviour or interface
E.g. the PR only touches documentation or tests, performs refactoring, etc.
Include the following checklist in your PR:
- [ ] Patches conform to the coding conventions.
- [ ] Is this a PR that fixes CI? If so, it will need to be backported to older cabal release branches (ask maintainers for directions).
> To get out of this diff hell, we have split the golden tests of `GenericPackageDescription` to write each `GenericPackageDescription` field into its own file. This way, we can quickly identify which behaviour has changed and for which file. New fields will simply be in new files.
I'm not quite convinced about this. Having an entire GPD in a single file is much more convenient than having it scattered across a dozen files. If you wish to future-proof the test suite against future changes to GPD fields, you can serialise a tuple of `(packageDescription, gpdScannedVersion, genPackageFlags, condLibrary, condSubLibraries, condForeignLibs, condExecutables, condTestSuites, condBenchmarks)` but still in the same file. What do you think?
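For illustration, the single-file variant could look roughly like this. This is only a sketch: `gpdToTuple` and `goldenGpd` are hypothetical helper names, and it assumes tasty-golden's `goldenVsString` plus `Show` instances for the field types; the field accessors are the real `GenericPackageDescription` ones listed above.

```haskell
import qualified Data.ByteString.Lazy.Char8 as LBS8
import Test.Tasty (TestTree)
import Test.Tasty.Golden (goldenVsString)
import Distribution.PackageDescription (GenericPackageDescription (..))

-- Serialise every field as one tuple, so a new GPD field only extends the
-- tuple instead of changing the overall golden-file layout.
gpdToTuple gpd =
  ( packageDescription gpd
  , gpdScannedVersion gpd
  , genPackageFlags gpd
  , condLibrary gpd
  , condSubLibraries gpd
  , condForeignLibs gpd
  , condExecutables gpd
  , condTestSuites gpd
  , condBenchmarks gpd
  )

-- Hypothetical golden test writing the whole tuple into a single file.
goldenGpd :: FilePath -> GenericPackageDescription -> TestTree
goldenGpd golden gpd =
  goldenVsString golden golden (pure (LBS8.pack (show (gpdToTuple gpd))))
```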
@Bodigrim That is true, thanks for the comment :). I just did that for this branch and for #11277, which now has a diff of 2.5kloc. Does this PR still have its place, or should I close this one and focus on that one instead, knowing the diff is significantly smaller?
I think it's fine to change this in a separate branch to keep other PRs smaller.
I see, thanks for the review :)
As Artem mentioned in the comment parser PR, if it's useful to rebase and squash the test-related commits into one, let me know and I'll do it :+1:
Rebase and preferably squash please, yes.
@leana8959 We can land this shortly but it would be good to address my two comments to future-proof this.
I ended up changing the `ToExpr` instance of `GenericPackageDescription` to use a pattern match with named fields. What do you think about this?
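For reference, such an instance might look roughly like this with tree-diff's `Rec` constructor. The field list below is abbreviated for illustration (the real instance would cover every field), and it assumes `ToExpr` instances exist for the field types, as they do in the Cabal test suite.

```haskell
import Data.TreeDiff (Expr (Rec), ToExpr (..))
import qualified Data.TreeDiff.OMap as OMap
import Distribution.PackageDescription (GenericPackageDescription (..))

-- Pattern matching with named fields makes each serialised field explicit,
-- so a newly added field is a visible, localised change here rather than a
-- silent shift in positional output.
instance ToExpr GenericPackageDescription where
  toExpr GenericPackageDescription
    { packageDescription = pd
    , genPackageFlags = flags
    , condLibrary = lib
    } =
      Rec "GenericPackageDescription" $
        OMap.fromList
          [ ("packageDescription", toExpr pd)
          , ("genPackageFlags", toExpr flags)
          , ("condLibrary", toExpr lib)
          ]
```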
I squashed everything into one commit so this should be ready for review! The CI failure seems to be caused by an unrelated curl failure.
@jappeace, we have the reviews, we just need someone to mark the earlier comments as resolved if they are.
Just resolved the earlier comments! Sorry that took a while, I thought you wanted someone from the review side to resolve them.
In general we expect that you resolve them if you believe you have addressed them, and request further reviews if you think it might be needed. (I personally often re-request a review from the person who made the review comment.)
One reason for this is that GitHub's UI can otherwise "lose" review comments: a change that removes the line commented on will make the review comment inaccessible, but GitHub will still consider the comment unresolved. In this case you need to (possibly get someone with access to) dismiss the review comment.
Also I see two review comments from @Bodigrim that are still open.
I see, that makes sense. Also oops I didn't scroll up far enough, I resolved everything now.
I disagree with @geekosaur on how comment resolution should work. I prefer it when comments left by me are resolved by me as well. I guess we should decide what we want to do in the Cabal project and write it down in CONTRIBUTING.md.
I saw a question from Matt that could use an explicit answer, not just code update: https://github.com/haskell/cabal/pull/11285#discussion_r2556322214
> I disagree with @geekosaur on how comment resolution should work. I prefer it when comments left by me are resolved by me as well. I guess we should decide what we want to do in the Cabal project and write it down in CONTRIBUTING.md.
It would be most helpful to write it down indeed. My recent attempt to demystify the contribution process in https://github.com/haskell/cabal/pull/11232#issuecomment-3517806269 left me absolutely baffled (and, more specifically, under the impression that a PR author is free to resolve comments themselves).
A great PR, thank you!
I think, as of now, both ways of resolving comments are welcome in cabal PRs. I agree it would be valuable to suggest just one of them in CONTRIBUTING.md, though I have no illusion it will be followed strictly, so we may still have deadlocks or race conditions — but we can resolve those more easily by pointing to CONTRIBUTING.md. (Personally I favour the "all power to the PR author" convention, but not strongly.)
I see the conversations are resolved. @mpickering: are you satisfied with the answers to your comments?
@leana8959: I've heard you are considering revisiting the design of the extract print feature in the light of your implementation experiments to date and the new interest from the cabal community. That's great news! However, given that this precious PR only adds tests, which is again very much appreciated, shall we just merge it right now, not waiting for any of the other efforts you spearhead? If so, please kindly set the merge_me label and it should auto-merge shortly. Thank you again!
> this precious PR only adds tests
It doesn't seem to add any new tests?..