sfdx-hardis
Possibility to split hardis config into several files
Current situation: Config file names and variations are hardcoded and can be extended with additional branch- or user-named files. An external file can be loaded via the "external" setting.
Challenge: When pre- and post-deployment scripts are actively used, developers constantly run into merge conflicts in the config file.
Solution: Introduce a config setting of type Array. When provided, the local files listed in the array are read and merged into the main config. If an included file and the main config contain overlapping keys of type Array, their items are merged; otherwise an exception is thrown. The format could be:
- New setting as an array of strings

```yaml
imports:
  - file1.yaml
  - file2.yaml
```
- New setting inspired by Symfony. The benefit of using objects is that they can be extended in the future, e.g. with flags to control overwriting, conflicts, strictness, error handling, or whatever else

```yaml
imports:
  - { resource: 'some_config.yaml' }
  - { resource: 'another_config.yaml' }
```
- Tweaking the existing "external" setting so that a. not only a string but also an array can be provided (using the format from option 1 or 2) b. each string is interpreted as a local file (if such a file exists) or otherwise as a URL, as before
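The proposed merge behaviour could be sketched like this. This is a Python illustration only (sfdx-hardis itself is Node-based), and `merge_configs` is a hypothetical name, not an existing function:

```python
def merge_configs(main: dict, imported: dict) -> dict:
    """Merge an imported config file's keys into the main config.

    Overlapping keys of type Array are concatenated; any other
    overlapping key raises an exception, as proposed above.
    """
    result = dict(main)
    for key, value in imported.items():
        if key not in result:
            result[key] = value
        elif isinstance(result[key], list) and isinstance(value, list):
            # Array keys: items from the imported file are appended
            result[key] = result[key] + value
        else:
            raise ValueError(f"Conflicting non-array key in imported config: {key}")
    return result
```

With this semantics, a `commandsPostDeploy` list split across several files simply concatenates, while a scalar key defined twice fails fast instead of silently overwriting.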
Benefit: Several additional yaml files can be created to e.g.
- maintain post-deployment scripts or installed packages separately, keeping one's own list of hardis config flags clean, short, and readable
- split long lists of post-deployment scripts by team/domain/year to avoid merge conflicts and keep them shorter
@nvuillam what do you think about this idea in general? And if you like it, which format option would you prefer?
@step307 Do you have an example of pre-post commands that are updated so often?
Note: I plan to add pre-post deployment scripts in the Pipeline Settings UI, so depending on your reply, the answer might be more to use another way to define them (one pre-post script per file? add them directly in Pull Request descriptions? ...) than multiplying the config files, but I'm open to every idea once I understand the use case :)
@nvuillam we do not update post-deployment commands. But new ones are often added by different developers. Examples I currently have:
- A new field is added as a replacement for another one, which is being deprecated. Data should be migrated from the old field to the new one.
- Some user story requires records to be added/updated. This is mostly about "configurational" object records which are standard and used somewhat like CustomMetadata, e.g. Workplan/step/whatever/templates.
- Possible steps to delete flows, as they are currently not easily deletable via destructiveChanges.
So in these scenarios the pre/post-deployment steps are not just stable status-quo configuration, but rather a tool for migrating data/configurations/etc. related to user stories. As an additional side benefit, such migrations can be used in sandbox refresh services to bring empty sandboxes to a more prod-like state containing those required DB records.
I also got this alternative idea with one file per script: a new setting "folderPostDeploy" or just "postDeploymentMigrations", or whatever. The folder contains *.apex files, each of which can be executed either as a post-deployment script with

```yaml
id: name/path of the file
command: sf apex run --file=[THE FILE].apex
skipIfError: true
runOnlyOnceByOrg: true
context: process-deployment-only
```
or via a completely new feature.
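For illustration, one such entry could then end up looking like this in the config (the `commandsPostDeploy` key and the file name `scripts/post-deployment/US-1234.apex` are assumptions for the example, not confirmed sfdx-hardis behaviour):

```yaml
commandsPostDeploy:
  - id: scripts/post-deployment/US-1234.apex
    command: sf apex run --file scripts/post-deployment/US-1234.apex
    skipIfError: true
    runOnlyOnceByOrg: true
    context: process-deployment-only
```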
I would not use any pull request information, because deployment to prod or other environments can be trickier, more manual, and can involve more branches and PRs. So I would only use the repo itself as the source of truth.
Benefit:
- conflicts are completely excluded, as each story creates a new file and does not require changing the configuration
- data migrations are not configurations, so it is nice to keep them separate

Downside:
- only anonymous-apex scripts are possible, i.e. no hardis/sf cli commands can be run this way
@step307 so if I summarize well:
- these post deployment commands are more related to individual user stories (Pull Requests) than integ / uat / preprod / main branches
- you would need to run them in each org right after processing the deployment in that org
Did i understand the requirement ?
@nvuillam yes, your understanding is correct. I see the requirement to run them after deployment, but I can also imagine that other teams or situations might want pre-deployment migrations as well.
I have ideas about that, like writing them directly in the pull request description, but it will take some time to implement :)
As this has now hit several developers with conflicts they cannot easily resolve, I've done the following:
- Developers only add new files e.g. under "scripts/post-deployment"
- A helper script picks up the list of files and modifies the "config.sfdx-hardis.yml" before deployment
This basically allows me, as release manager, to decide IF I want to run the scripts right now or not, in case I need to act manually.
@step307 I plan to allow defining pre-post commands directly in Pull Request descriptions before DevOps Dreamin on November 20 :)
Meanwhile, your workaround seems ok ^^
Note: I added a UI on pipeline settings for pre-post commands ^^