feat: Add tests for - `argocd admin import` command
Fixes #21895
This PR adds unit tests for the `argocd admin import` command.
Checklist:
- [x] Either (a) I've created an enhancement proposal and discussed it with the community, (b) this is a bug fix, or (c) this does not need to be in the release notes.
- [x] The title of the PR states what changed and the related issues number (used for the release note).
- [x] The title of the PR conforms to the Toolchain Guide
- [x] I've included "Closes [ISSUE #]" or "Fixes [ISSUE #]" in the description to automatically close the associated issue.
- [ ] I've updated both the CLI and UI to expose my feature, or I plan to submit a second PR with them.
- [ ] Does this PR require documentation updates?
- [ ] I've updated documentation as required by this PR.
- [x] I have signed off all my commits as required by DCO
- [x] I have written unit and/or e2e tests for my change. PRs without these are unlikely to be merged.
- [x] My build is green (troubleshooting builds).
- [ ] My new feature complies with the feature status guidelines.
- [x] I have added a brief description of why this PR is necessary and/or what this PR solves.
- [ ] Optional. My organization is added to USERS.md.
- [ ] Optional. For bug fixes, I've indicated what older releases this fix should be cherry-picked into (this may or may not happen depending on risk/complexity).
Codecov Report
:x: Patch coverage is 27.75330% with 164 lines in your changes missing coverage. Please review.
:warning: Please upload report for BASE (master@4a71661). Learn more about missing BASE report.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| cmd/argocd/commands/admin/backup.go | 27.75% | 157 Missing and 7 partials :warning: |
Additional details and impacted files
@@ Coverage Diff @@

| | master | #22780 |
|---|---|---|
| Coverage | ? | 60.87% |
| Files | ? | 351 |
| Lines | ? | 60548 |
| Branches | ? | 0 |
| Hits | ? | 36858 |
| Misses | ? | 20762 |
| Partials | ? | 2928 |
@nitishfy , added 5 test cases :) All of them are passing.
I'd suggest learning about these things one at a time. For eg. What is the use of import command, why do we use the prune flag, and what is the application namespace feature? Pick one test case at a time. Try to understand what this feature means, look at the code to see how it is written and then write a test case to validate it. You may be required to even put that logic in a new function and call that function in the test case to validate the scenario.
You're required to add test cases that check these scenarios:
- Resource should be pruned when --prune and missing from backup
- Resource should be created when missing from live
- Resource tracking annotation should not be updated when --ignore-tracking
- Application operation should be removed when --stop-operation
- Update operation should retry on conflict when --override-on-conflict
- Resource should not be pruned if present in backup and backup resource matches --skip-resources-with-label
- Resource should not be pruned if missing from backup, but live matches --skip-resources-with-label
- Test application-namespace feature
- Test applicationset-namespace feature
- Dry run should not modify any resources
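Most of the scenarios above reduce to small decisions that can be table-driven once the logic is extracted into a function. As a minimal sketch of the first scenario (the `shouldPrune` helper and its signature are hypothetical illustrations, not argocd's actual code, which operates on unstructured Kubernetes objects):

```go
package main

import "fmt"

// shouldPrune is a hypothetical helper sketching the prune decision:
// a live resource is deleted only when --prune is set and the resource
// is absent from the backup.
func shouldPrune(pruneFlag, inBackup bool) bool {
	return pruneFlag && !inBackup
}

func main() {
	// Table-driven cases mirroring the prune scenarios in the list above.
	cases := []struct {
		name      string
		pruneFlag bool
		inBackup  bool
		want      bool
	}{
		{"pruned when --prune and missing from backup", true, false, true},
		{"kept when present in backup", true, true, false},
		{"kept when --prune is not set", false, false, false},
	}
	for _, c := range cases {
		if got := shouldPrune(c.pruneFlag, c.inBackup); got != c.want {
			fmt.Println("FAIL:", c.name)
			return
		}
	}
	fmt.Println("all cases pass")
}
```

In the real test you would use `testing.T` with `t.Run` per case; the table shape stays the same.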
Yes, sure, I'll add the flags as well :) Actually, I first wanted to confirm whether the tests I'm creating should be structured like this or whether I should make some changes, which is why I hadn't added the flags yet. Never mind, I'll push a commit for this :) Thanks for your suggestions!
> I'd suggest learning about these things one at a time. For eg. What is the use of import command, why do we use the prune flag, and what is the application namespace feature? Pick one test case at a time. Try to understand what this feature means, look at the code to see how it is written and then write a test case to validate it. You may be required to even put that logic in a new function and call that function in the test case to validate the scenario.
Yes, I've already read the entire import command code, and now I have a good understanding of it.
(Previously, I had some trouble understanding it, but now I'm quite familiar with it.) I'm checking how I can add tests for the flags, like --prune, --stop-operation, etc.
I'll also try to put that logic in a separate func (as suggested) so the flags can be tested in Test_importResource :)
Hey @nitishfy , could you kindly have a look at the tests for the --prune flag?
The tests are passing; should I add any more test cases if required?
Yes @agaudreault , actually I forgot to update the names of the test cases. Pushing the changes for the same π
argocd admin import command unit test flag tracker:

Checked flags:
- Resource should be pruned when --prune and missing from backup
- Resource should be created when missing from live
- Resource should not be pruned if present in backup and backup resource matches --skip-resources-with-label
- Resource should not be pruned if missing from backup, but live matches --skip-resources-with-label
- Test application-namespace feature
- Test applicationset-namespace feature
- Application operation should be removed when --stop-operation
- Update operation should retry on conflict when --override-on-conflict

Unchecked flags:
- Resource tracking annotation should not be updated when --ignore-tracking
- Dry run should not modify any resources
This PR is in my list to review - I'll review this today.
Sure, thanks @nitishfy π
- Also, I'm making the required changes for the `--override-on-conflict` flag; I'll try to push them today.
- I was checking the `--ignore-tracking` flag; I guess tests for it are already present, so I think it can be skipped?
- As for the `--dry-run` flag, while checking the `NewImportCommand()` func I saw that --dry-run simply logs what would be performed; AFAIK it just uses fmt.Println statements, so it won't affect the resources anyway. I think the best way to test it is to check that nothing is changed, but my concern is that a print statement won't change anything in the resources, so do we really need a test for this flag?
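One way to make the dry-run scenario worth testing is to assert on side effects rather than on log output: run the import against a fake client that counts writes and check the count stays zero. This is a minimal sketch under assumed names (`fakeClient`, `importResources` are hypothetical, not argocd's actual API):

```go
package main

import "fmt"

// fakeClient counts write operations so a test can assert that a dry run
// performed none. All names here are hypothetical stand-ins.
type fakeClient struct{ writes int }

func (c *fakeClient) Create(name string) { c.writes++ }
func (c *fakeClient) Delete(name string) { c.writes++ }

// importResources sketches the dry-run branch: log the planned action
// instead of calling the client.
func importResources(c *fakeClient, toCreate []string, dryRun bool) {
	for _, name := range toCreate {
		if dryRun {
			fmt.Printf("Creating %s (dry run)\n", name)
			continue
		}
		c.Create(name)
	}
}

func main() {
	c := &fakeClient{}
	importResources(c, []string{"app-a", "app-b"}, true)
	fmt.Println("writes after dry run:", c.writes) // expect 0
	importResources(c, []string{"app-a", "app-b"}, false)
	fmt.Println("writes after real run:", c.writes) // expect 2
}
```

With this shape, the test guards against a future regression where a write sneaks into the dry-run path, which is exactly what "dry run should not modify any resources" means.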
Pushed the tests for the --override-on-conflict flag :)
kindly review, and LMK if any changes are required :)
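For reference, the retry-on-conflict behaviour under test can be sketched in isolation like this (the `updateWithRetry` helper and `errConflict` are hypothetical; the real code would detect a Kubernetes 409 Conflict, typically via client-go's error helpers):

```go
package main

import (
	"errors"
	"fmt"
)

// errConflict stands in for a Kubernetes 409 Conflict error.
var errConflict = errors.New("conflict")

// updateWithRetry sketches what --override-on-conflict could do: retry the
// update a few times when the server reports a conflict, instead of
// failing immediately. Non-conflict errors are returned right away.
func updateWithRetry(update func() error, maxRetries int) error {
	var err error
	for i := 0; i <= maxRetries; i++ {
		if err = update(); !errors.Is(err, errConflict) {
			return err // success, or an error we don't retry
		}
	}
	return err
}

func main() {
	attempts := 0
	// Fails with a conflict twice, then succeeds on the third attempt.
	err := updateWithRetry(func() error {
		attempts++
		if attempts < 3 {
			return errConflict
		}
		return nil
	}, 5)
	fmt.Println(err == nil, attempts)
}
```

A unit test can then count the attempts to prove the retry actually happened, which is hard to observe from the outside otherwise.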
@nitishfy , just 1 test is still running; once it passes, I'll mark the PR as ready :)
PR is ready for review @nitishfy , all checks are passing now :)
@nitishfy , should I refactor the NewImportCommand() func in backup.go, create multiple functions, and then call them inside backup_test.go?
should I create a separate PR for the refactor?
> should I create a separate PR for the refactor?
Not necessary, please do it in the same PR. Otherwise we'll have to wait for another PR to get merged before jumping on it.
> @nitishfy , should I refactor the NewImportCommand() func, in `backup.go`? And create multiple functions and then try to call it inside the `backup_test.go`?
Yes, that's what I meant. Refactor the code to break into functions and call those functions in the test.
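The shape of that refactor is to move the logic out of the cobra `Run` closure into a plain function the test can call directly. A minimal sketch, assuming an illustrative signature (`Options`, `executeImport`, and the string-slice inputs are hypothetical; the real function would take parsed Kubernetes objects and clients):

```go
package main

import "fmt"

// Options is a hypothetical bag of the flags NewImportCommand parses;
// the real command has many more fields.
type Options struct {
	Prune  bool
	DryRun bool
}

// executeImport is the kind of extracted function the review asks for:
// the cobra Run closure only parses flags and calls it, so tests can
// exercise it directly with crafted inputs.
func executeImport(opts Options, backup, live []string) (created, pruned []string) {
	inBackup := map[string]bool{}
	for _, r := range backup {
		inBackup[r] = true
	}
	inLive := map[string]bool{}
	for _, r := range live {
		inLive[r] = true
	}
	// Resources in the backup but not in the cluster get created.
	for _, r := range backup {
		if !inLive[r] {
			created = append(created, r)
		}
	}
	// With --prune, live resources absent from the backup get deleted.
	if opts.Prune {
		for _, r := range live {
			if !inBackup[r] {
				pruned = append(pruned, r)
			}
		}
	}
	return created, pruned
}

func main() {
	created, pruned := executeImport(Options{Prune: true},
		[]string{"a", "b"}, []string{"b", "c"})
	fmt.Println(created, pruned) // [a] [c]
}
```

Once the command is structured this way, each scenario in the reviewer's list becomes one call to `executeImport` with different inputs and flags.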
Got it, I'll try to make the changes most probably before tomorrow's call. Also, just wanted to ask, that after I pulled some changes, 5 checks are failing. I don't think I made any changes in the codebase, the tests are all passing. Should I try to check what's the issue, or maybe we can handle it later?
> Got it, I'll try to make the changes most probably before tomorrow's call. Also, just wanted to ask, that after I pulled some changes, 5 checks are failing. I don't think I made any changes in the codebase, the tests are all passing. Should I try to check what's the issue, or maybe we can handle it later?
This is happening because of the deletion of gcr.io/heptio-images/ks-guestbook-demo image that is required for e2e tests. It's not something that happened because of your changes. The master branch e2e tests are failing instead because the image has been deleted. Look here: https://cloud-native.slack.com/archives/C020XM04CUW/p1747231505846999
@nitishfy , I've pushed the tests for the createPruneObject func; I'll push tests for setDynamicClient very soon.
Also, I'm checking how we can break the NewImportCommand func down further so those functions can be used for testing. Is it required to break it down more, or is this fine? I guess I should create some more funcs, right?
Hi @nitishfy , apologies for the delay, I've pushed a commit for the setDynamicClient() func, kindly have a look and LMK if any changes are required.
Thanks :)
Hi @nitishfy , just a friendly follow up regarding this PR :)
@agaudreault , a friendly follow up on this PR :)
Hi @nitishfy @agaudreault , wanted to update that I didn't get time to work on this PR. I'll push the changes in some time (most probably by this Saturday or Sunday).
Hi @nitishfy @agaudreault , I've pushed a commit according to the suggestions.
Since I didn't have any reviews on how to proceed, I refactored the Run func: I broke it down into an executeImport() func, which I unit tested in the Test_executeImport func.
Kindly have a look and let me know if I'm going in the correct direction.
Thanks
Hi @nitishfy @agaudreault , I've pushed a commit in which I've refactored the NewImportCommand() by adding the required executeImport() func, as mentioned in this comment.
Could you kindly check whether this is the desired behaviour you are expecting? I've asked for a review multiple times on this. Kindly have a look. Thanks
Can you fix the failing tests?
Sure, I'll push the commits today to fix the tests :)
Hi @nitishfy , all checks are passing now. Previously there was 1 failing check, but after I pulled the latest changes, it's fixed :) Kindly have a look
Thanks :)