feat: migrate from lerna to changesets
Description
This removes Lerna and replaces it with Changesets for handling package versioning and publishing (releases).
How and where has this been tested?
Please tag yourself on the tests you've marked complete to confirm the tests have been run by someone other than the author.
Validation steps
- Please read the Intro to Using Changesets documentation.
- Please read the description of the Changesets GitHub bot.
- Please read the documentation on adding a changeset.
- Finally, this blog post is handy and gives more information about using Changesets.
- I added two commits (a change to Well and a changeset) to demonstrate what this looks like when the GitHub bot detects a changeset. I'll drop those two commits before merging.
It's important to know that moving to Changesets shifts version determination toward a more intentional choice by contributors. Some could argue that the onus was already on the contributor, since we used Conventional Commit messages to give Lerna a way to infer the severity of a version increase. Now, instead of relying on Conventional Commit messages, contributors will be asked to provide changesets as part of their PR process.
The workflow for this will look something like:
- The contributor makes their changes and commits them.
- Locally, the contributor runs `yarn changeset` and is asked in the CLI to choose which package(s) should be part of the changeset.
- After the package(s) have been selected, the CLI will ask if the version increment should be a `major`, `minor`, or `patch`. Hitting `enter` in the CLI without making a choice will skip options, with `patch` being the final option.
- Next, the CLI will ask for a summary of the changes. This summary will be used in the `changelog.md` for the respective package(s). We're also using the `@changesets/changelog-github` package to provide additional GitHub-related context (pull request number + link, contributor information), and this info will show in the `changelog.md` for the package(s) as well.
- Stage the change, commit the change, and push to the remote branch.
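For illustration, the result of `yarn changeset` is a small markdown file committed under `.changeset/` with an auto-generated name. A minimal sketch, assuming a `patch`-level change to the Well package (the summary text is hypothetical):

```md
---
"@spectrum-web-components/well": patch
---

Corrected the border color custom property used by the Well component.
```

When a release consumes this file, the file is removed and its summary is folded into the package's changelog.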
Types of changes
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [x] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Chore (minor updates related to the tooling or maintenance of the repository, does not impact compiled assets)
Checklist
- [x] I have signed the Adobe Open Source CLA.
- [x] My code follows the code style of this project.
- [x] If my change required a change to the documentation, I have updated the documentation in this pull request.
- [x] I have read the CONTRIBUTING document.
- [x] I have added tests to cover my changes.
- [x] All new and existing tests passed.
- [x] I have reviewed the Accessibility Practices for this feature, see: Aria Practices
Best practices
This repository uses conventional commit syntax for each commit message; note that the GitHub UI does not use this by default, so be cautious when accepting suggested changes. Avoid the "Update branch" button on the pull request and opt instead for rebasing your branch against `main`.
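For example, a typical rebase flow looks like this (a sketch; adjust the remote name to your setup):

```sh
git fetch origin
git rebase origin/main
# Rebasing rewrites history, so a force push is required;
# --force-with-lease refuses to overwrite commits you haven't seen.
git push --force-with-lease
```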
Branch preview
Review the following VRT differences
When a visual regression test fails (or has previously failed while working on this branch), its results can be found at the following URLs:
- Spectrum | Light | Medium | LTR
- Spectrum | Dark | Large | RTL
- Express | Light | Medium | LTR
- Express | Dark | Large | RTL
- Spectrum-two | Light | Medium | LTR
- Spectrum-two | Dark | Large | RTL
- High Contrast Mode | Medium | LTR
If the changes are expected, update the `current_golden_images_cache` hash in the CircleCI config to accept the new images. Instructions are included in that file.
If the changes are unexpected, you can investigate the cause of the differences and update the code accordingly.
Tachometer results
Currently, no packages are changed by this PR...
Lighthouse scores
| Category | Latest (report) | Main (report) | Branch (report) |
|---|---|---|---|
| Performance | 0.99 | 0.98 | 0.98 |
| Accessibility | 1 | 1 | 1 |
| Best Practices | 1 | 1 | 1 |
| SEO | 1 | 0.92 | 0.92 |
| PWA | 1 | 1 | 1 |
What is this?
Lighthouse scores comparing the documentation site built from the PR ("Branch") to that of the production documentation site ("Latest") and the build currently on main ("Main"). Higher scores are better, but note that the SEO scores on Netlify URLs are artificially constrained to 0.92.
Transfer Size
| Category | Latest | Main | Branch |
|---|---|---|---|
| Total | 243.445 kB | 229.348 kB | 229.312 kB 🏆 |
| Scripts | 60.407 kB | 54.19 kB 🏆 | 54.233 kB |
| Stylesheet | 46.994 kB | 40.638 kB | 40.627 kB 🏆 |
| Document | 6.267 kB | 5.493 kB | 5.486 kB 🏆 |
| Font | 126.933 kB | 126.614 kB 🏆 | 126.615 kB |
Request Count
| Category | Latest | Main | Branch |
|---|---|---|---|
| Total | 52 | 52 | 52 |
| Scripts | 41 | 41 | 41 |
| Stylesheet | 5 | 5 | 5 |
| Document | 1 | 1 | 1 |
| Font | 2 | 2 | 2 |
I left one question here, @blunteshwar. An additional thought: do you need to account for the fixed version numbers that you publish at? Spectrum CSS doesn't use fixed versioning, so I didn't look into this, but you may need to!
Otherwise, the rest of the changes here look good. Are you going to be using the bot and the GitHub Action?
Yes, we are probably going to use the bot and the GitHub Action.
As far as fixed versioning is concerned, I am taking care of that in `.changeset/config.json` by leveraging `"fixed": [["@spectrum-web-components/*"]]`.
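For reference, a minimal sketch of the relevant `.changeset/config.json` entries (the `changelog` repo slug is an assumption, and other keys are omitted):

```jsonc
{
    // All @spectrum-web-components packages move in lockstep at one version.
    "fixed": [["@spectrum-web-components/*"]],
    // Adds PR links and contributor info to each changelog entry.
    "changelog": [
        "@changesets/changelog-github",
        { "repo": "adobe/spectrum-web-components" }
    ]
}
```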
@blunteshwar Once you resolve the changes requested by @castastrophe, let's run a shallow release from this branch to make sure the build scripts are working with Changesets! @castastrophe Do you think there is another, parallel way to mock-test this change?
No, I think that approach makes the most sense.
Pull Request Test Coverage Report for Build 13672721201
Details
- 0 of 0 changed or added relevant lines in 0 files are covered.
- No unchanged relevant lines lost coverage.
- Overall coverage remained the same at 97.966%
| Totals | |
|---|---|
| Change from base Build 13660087416: | 0.0% |
| Covered Lines: | 33662 |
| Relevant Lines: | 34164 |
💛 - Coveralls
β οΈ No Changeset found
Latest commit: d2a11aa3987998db88443dab85735859636778ff
Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.
This PR includes no changesets
When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types
Click here to learn what changesets are, and how to add one.
Click here if you're a maintainer who wants to add a changeset to this PR
@blunteshwar can we add some JSDoc descriptions at the top of the task files to explain what each task is used for and what its output is?
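Something along these lines would do; the task name, path, and described output are hypothetical:

```js
/**
 * tasks/build-preview.js (hypothetical example)
 *
 * Builds the documentation preview site for the current branch so
 * reviewers can inspect rendered components before merge.
 *
 * Output: a static site written to `dist/preview/`.
 */
```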
Also, sorry for being late to the review, but can we still separate the ignored packages and the `getWorkspacePackages` function into their own file? Perhaps in a utility folder inside `tasks`? I think of this more like a utility pattern (see the sketch below):
- For the ignored packages, include only the ones used in all tasks.
- Then we can import and extend that list to include the one-off packages.
- Then we pass the `extendedIgnoredPackages` to the function.
- Include a note as to why that one-off package is necessary to skip in that specific task, so we understand the difference.
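A minimal sketch of the suggested pattern, assuming hypothetical file paths, package names, and call sites:

```js
// tasks/utilities/workspace-packages.js (hypothetical path)

// Packages that *every* task should skip; one-off exclusions are
// passed in per task rather than added here.
export const ignoredPackages = ['@spectrum-web-components/bundle'];

/**
 * Returns the workspace package names a task should operate on,
 * filtering out the shared ignore list plus any task-specific additions.
 */
export function getWorkspacePackages(allPackages, extendedIgnoredPackages = []) {
    const ignored = new Set([...ignoredPackages, ...extendedIgnoredPackages]);
    return allPackages.filter((name) => !ignored.has(name));
}
```

A task would then extend the shared list at its own call site, documenting the one-off exclusion where it happens:

```js
// tasks/some-task.js (hypothetical)
import { getWorkspacePackages } from './utilities/workspace-packages.js';

// `workspaceNames` would come from the task's own workspace discovery.
const workspaceNames = [
    '@spectrum-web-components/button',
    '@spectrum-web-components/one-off-package',
];

// One-off: skipped only in this task because <reason documented here>.
const packages = getWorkspacePackages(workspaceNames, [
    '@spectrum-web-components/one-off-package',
]);
```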
Thank you for the review and the suggestion! I appreciate the intent behind separating common ignored packages and the getWorkspacePackages function into their own utility file. However, I believe the current approach offers more clarity and aligns better with the needs of our workflow. Here's why:
- Task-specific context: Each script has unique requirements for its ignored packages due to its specific purpose and scope. By defining the ignored packages directly in each script, we keep the logic self-contained, making it easier for future developers to understand why certain packages are excluded without having to cross-reference multiple files. This ensures that the context and reasoning are immediately visible.
- Flexibility for one-off packages: Many of the ignored packages differ between tasks, and the one-off packages often require specific reasoning or comments to justify their exclusion. Including all common and one-off packages in a utility file would reduce flexibility and may result in duplication when extending the common ignored packages for individual scripts.
- Keeping everything task-specific reduces the risk of accidental changes to shared configurations that could impact multiple tasks. This ensures that any updates or fixes are isolated to the relevant script without unintended side effects.
Cool! Thanks for doing this. Let's dry run this after 27th Jan and then we can stage this for the next release.
Related Jira ticket: SWC-643