feat: migrate test suite from Mocha to Vitest
## Summary
Completes a comprehensive migration of the entire test suite from Mocha to Vitest, modernizing our testing infrastructure across both the language server and svelte2tsx packages.
## Changes Made
### Core Testing Infrastructure
- Replaced Mocha with Vitest: Migrated all test files from Mocha to Vitest
- Snapshot Testing: Converted JSON fixture files to Vitest's native snapshot system, eliminating individual JSON files
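For illustration, a converted test ends up looking roughly like the sketch below. This is not the actual test code from this PR; it uses svelte2tsx's public `svelte2tsx` export with default options and an invented input component:

```ts
import { describe, expect, it } from 'vitest';
import { svelte2tsx } from 'svelte2tsx';

describe('svelte2tsx output', () => {
    it('matches the recorded output for a simple component', () => {
        const { code } = svelte2tsx('<script>let count = 1;</script>\n<p>{count}</p>');
        // The expected output lives in Vitest's __snapshots__ file and is refreshed
        // with `vitest -u`, instead of in a hand-maintained JSON fixture.
        expect(code).toMatchSnapshot();
    });
});
```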
### Multi-Version Svelte Support
- Dual Testing Setup: Maintained compatibility testing for both Svelte 4 and Svelte 5 using Vitest's project configuration. I attempted to set up Vitest projects with an alias for `svelte`, but language-tools ignored the alias; this needs more research.
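The idea I was experimenting with looks roughly like the sketch below. It is not the config from this PR; it assumes a Vitest version that supports `test.projects` (older versions use a `vitest.workspace.ts` with `defineWorkspace`), a second Svelte copy installed under an npm alias such as `"svelte-5": "npm:svelte@^5"`, and an invented test glob:

```ts
// vitest.config.ts -- minimal sketch of the "projects + alias" idea, not the PR's actual config.
import { defineConfig } from 'vitest/config';

export default defineConfig({
    test: {
        projects: [
            {
                // Default project: `svelte` resolves to the regular Svelte 4 install.
                test: { name: 'svelte-4', include: ['test/**/*.test.ts'] }
            },
            {
                // Second project: redirect `svelte` imports to the aliased Svelte 5 copy.
                resolve: { alias: { svelte: 'svelte-5' } },
                test: { name: 'svelte-5', include: ['test/**/*.test.ts'] }
            }
        ]
    }
});
```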
### Test Modernization
- Helper Utilities: Created shared test helpers in `test-helpers.ts` for common setup patterns
- Async Handling: Improved async test patterns using Vitest's native promise support
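As a sketch of what these two bullets mean in practice (the helper, fixture path, and fixture name are all hypothetical stand-ins, not the PR's actual code):

```ts
import { readFile } from 'node:fs/promises';
import path from 'node:path';
import { expect, it } from 'vitest';

// The kind of small shared setup helper meant above, shown inline so the sketch
// is self-contained; the real helpers in the PR may look different.
function getFixturePath(...segments: string[]): string {
    // Resolve relative to the package root where Vitest is run.
    return path.resolve('test', 'fixtures', ...segments);
}

// Vitest awaits a returned promise natively, so async/await replaces Mocha's
// done-callback style.
it('loads a fixture asynchronously', async () => {
    const content = await readFile(getFixturePath('simple.svelte'), 'utf-8');
    expect(content).toContain('<script');
});
```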
## Compromises & Technical Decisions
- Sequential Performance Tests: Some performance-sensitive tests are marked as `it.sequential()` to prevent timing conflicts in CI (see the sketch after this list)
- Snapshot Migration: Moved from individual JSON expectation files to Vitest snapshots for better maintainability
- Dependency Updates: Removed cross-env and Mocha-related dependencies, added Vitest
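The sequential pattern mentioned above looks roughly like this; the operation and the timing budget are invented for illustration:

```ts
import { describe, expect, it } from 'vitest';

// Stand-in for the real performance-sensitive work.
async function runExpensiveOperation(): Promise<void> {
    await new Promise((resolve) => setTimeout(resolve, 50));
}

describe('performance', () => {
    // `it.sequential` opts this test out of concurrent execution, so its timing
    // is not skewed by other tests running in parallel.
    it.sequential('stays within the timing budget', async () => {
        const start = performance.now();
        await runExpensiveOperation();
        expect(performance.now() - start).toBeLessThan(5000);
    });
});
```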
## Testing & Validation
- ✅ All tests pass under Vitest with (I believe) identical behavior to Mocha
- ✅ CI pipeline validates both Svelte 4 and Svelte 5 compatibility
- ✅ Snapshot tests ensure consistent diagnostic and completion behavior
- ✅ Performance tests maintain timing accuracy with sequential execution
Continuing the work done in https://github.com/sveltejs/language-tools/pull/2711
Most of the changes come from converting the existing expectation files to Vitest snapshots.
I tooled around with trying to get the Svelte 4 and Svelte 5 tests to run simultaneously using an alias and Vitest projects, but language-tools didn't seem to want to play nice with the alias. I can spend more time digging into that if desired.
Happy to clean up/make changes as necessary, just let me know what I can do to help get this across the finish line.
I don't think merging every expected JSON snapshot into one giant file is a good idea. It makes it very hard to trace the git diff, difficult to map the file back to its original input, and it'll easily cause git conflicts. I am also not sure about the "Dual Testing Setup" part, at least that the svelte5 install should be in devDependencies. The test timeout customisation shouldn't be removed either.
> I don't think merging every expected JSON snapshot into one giant file is a good idea.
OK, I can organize these into files that (mostly) align with the files we had before, using `toMatchFileSnapshot`.
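For reference, the `toMatchFileSnapshot` approach I have in mind looks roughly like this; the paths, naming, and use of `svelte2tsx` here are illustrative, not the PR's actual tests:

```ts
import { expect, it } from 'vitest';
import { svelte2tsx } from 'svelte2tsx';

it('writes the expected output next to its fixture', async () => {
    const { code } = svelte2tsx('<script>let count = 1;</script>\n<p>{count}</p>');
    // Each fixture gets its own snapshot file, roughly mirroring the old
    // per-fixture expected files instead of one giant inline snapshot.
    await expect(code).toMatchFileSnapshot('./fixtures/simple-component/expected.ts');
});
```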
> I am also not sure about the "Dual Testing Setup" part, at least that the svelte5 install should be in devDependencies.
This is an artifact from when I was trying to run the tests for both versions in the same CI job and context using Vitest projects. I still think this can be made to work, but I was having trouble with it. I'll remove the artifacts, my bad.
> The test timeout customisation shouldn't be removed either.
Will fix this as well and take another pass on unnecessary changes that slipped through.
> The test timeout customisation shouldn't be removed either.
> Will fix this as well and take another pass on unnecessary changes that slipped through.
I wasn't able to exactly reproduce the same behavior, as I couldn't find a way to dynamically set and get the timeout inside a test the way Mocha allows (if you're aware of one, let me know), but I did my best to recreate the same regression-proofing with `expect`. Let me know if you'd like to see another approach here.
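Roughly, the substitution I mean is the following; the operation and the thresholds are invented for illustration, and since Vitest's per-test timeout is static, the duration assertion carries the regression check:

```ts
import { expect, it } from 'vitest';

// Stand-in for the slow operation whose timeout used to be tuned at runtime via
// Mocha's `this.timeout(ms)` / read back via `this.timeout()`.
async function runSlowOperation(): Promise<void> {
    await new Promise((resolve) => setTimeout(resolve, 100));
}

// Vitest only accepts a static per-test timeout (third argument), so the closest
// equivalent is a fixed timeout plus an explicit duration assertion.
it('finishes the slow operation within the budget', async () => {
    const start = performance.now();
    await runSlowOperation();
    expect(performance.now() - start).toBeLessThan(8_000);
}, 10_000);
```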
The other issues have been addressed, I believe.
Edit: benchmark output is a bit odd in Vitest right now. I wrote some benchmarks but need to spend more time understanding how to make the output useful, so I will have to follow up on the benchmarks separately.
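For context, the benchmarks use Vitest's `bench` API, along these lines (a sketch with an invented input, not the actual benchmarks I wrote):

```ts
import { bench, describe } from 'vitest';
import { svelte2tsx } from 'svelte2tsx';

describe('svelte2tsx conversion', () => {
    // Run with `vitest bench`; the output formatting is what the note above refers to.
    bench('converts a small component', () => {
        svelte2tsx('<script>let count = 1;</script>\n<p>{count}</p>');
    });
});
```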
Thank you for continuing this effort! As for the snapshot changes, could you just revert to our custom setup for that part? I believe that setup is independent of the test runner. Would keep the diff smaller, and we can always revisit later if we want to (I agree with @jasonlyu123 that having giant files or snapshots in separate folders isn't ideal)
> Thank you for continuing this effort! As for the snapshot changes, could you just revert to our custom setup for that part? I believe that setup is independent of the test runner. Would keep the diff smaller, and we can always revisit later if we want to (I agree with @jasonlyu123 that having giant files or snapshots in separate folders isn't ideal)
Can do. I thought I'd followed up on that feedback by using Vitest snapshots to break them up into similarly scoped filenames, but for the sake of a smaller diff I will revert to the existing snapshot solution later today when I have a chance.
> Thank you for continuing this effort! As for the snapshot changes, could you just revert to our custom setup for that part? I believe that setup is independent of the test runner. Would keep the diff smaller, and we can always revisit later if we want to (I agree with @jasonlyu123 that having giant files or snapshots in separate folders isn't ideal)
FWIW, I did switch to individual file-based Vitest snapshots in https://github.com/sveltejs/language-tools/pull/2815/commits/d7d8e1c38c1fd9b3961a23ed8c6005f0dc61abac
I am about to start refactoring to target the previous snapshot solution, but wanted to point that out in case the Vitest snapshots would be acceptable. It is a large diff here, but future diffs should be useful and readable with this approach, and it removes some of the noise around the "expectedv2.json" filenames.
Very sorry about the noise around CI on this one. I made a bit of a mess writing scripts to help me test and generate snapshots for Svelte 4 vs Svelte 5. Going to review this again, but I believe I have fully restored the previous snapshot functionality.
> Thank you for continuing this effort! As for the snapshot changes, could you just revert to our custom setup for that part? I believe that setup is independent of the test runner. Would keep the diff smaller, and we can always revisit later if we want to (I agree with @jasonlyu123 that having giant files or snapshots in separate folders isn't ideal)
I think I have thoroughly addressed this and reverted the snapshots. Some of the remaining changes do appear to be cosmetic/formatting and perhaps technically unnecessary, but from what I can tell they would land eventually anyway under the current Prettier standards, so unless they really need to be avoided maybe we can accept the PR as-is. If my assessment is flawed and I need to revisit the snapshots, let me know and I will do that.
There's a lot of changes in the expected_... files, which makes me worried - why did a pure change in the test setup cause these changes? That feels wrong to me.
I have a bit more time to contribute over the holidays, so I will be dusting this off, figuring out where the "expected_" changes are coming from, and updating the tests to align with those changes. If there's any other feedback you have on this one, let me know.