nvim-metals
Implement something similar to the VS Code Testing API
Describe the feature
Recently in Metals we added support for test discovery so that we could support the VS Code Testing API. This unlocks a much easier experience for users who want fine-grained control over running specific suites or tests.
Potential ways to implement
It'd be really neat to explore implementing this. There seem to be multiple ways we could go about it. One would be to mimic the look of the explorer, relying on a UI to visualize all your suites so you can "click" on the one you want to run. I'd imagine this looking somewhat similar to packer's side window. Even the way it uses extmarks to update the status of a dependency being installed could be mimicked here to update the state of a test failing or succeeding. Most of the work is actually done via DAP, so the side panel would basically just be the glue.
Another approach would be less UI driven: we could still use the `metals.discover-test-suites` command to get a list of tests back, and the user could then use a picker to choose the suite or package they want to run. This option would be easier, but it might make it harder to visualize the results of the tests.
No matter which approach we take, the flow would look roughly like this:
- First you need to ensure that `initializationOptions.testExplorerProvider` is set to `true` during `initialize`.
- Also add a `test-user-interface` setting to the user configuration so they can turn this off if they want and instead fall back to code lenses.
- Then the process starts off with a `workspace/executeCommand` being sent to Metals with `metals.discover-test-suites` as the command.
The trace for the above looks like this:
[Trace - 07:30:35 PM] Received request 'workspace/executeCommand - (10)'
Params: {
"command": "metals.discover-test-suites"
}
[Trace - 07:30:35 PM] Sending response 'workspace/executeCommand - (10)'. Processing request took 4ms
Result: [
{
"targetName": "Sanity.test",
"targetUri": "file:/Users/ckipp/Documents/scala-workspace/sanity/Sanity/test/?id\u003dSanity.test",
"discovered": [
{
"prefix": "example",
"children": [
{
"fullyQualifiedName": "example.HelloSuite",
"className": "HelloSuite",
"location": {
"uri": "file:///Users/ckipp/Documents/scala-workspace/sanity/Sanity/test/src/example/HelloSuite.scala",
"range": {
"start": {
"line": 2,
"character": 6
},
"end": {
"line": 2,
"character": 16
}
}
},
"kind": "suite"
}
],
"kind": "package"
}
]
}
]
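As a rough sketch of the picker-based approach, something like the following could request the discovered suites from Metals and hand them to `vim.ui.select`. The command name and response shape come from the trace above; everything else (function names, the flattening logic) is hypothetical glue, not how nvim-metals actually implements it.

```lua
-- Sketch: ask Metals for the discovered test suites and let the user
-- pick one. The "metals.discover-test-suites" command and the result
-- shape are taken from the trace above; the rest is illustrative.
local function pick_test_suite(on_choice)
  local params = { command = "metals.discover-test-suites" }
  vim.lsp.buf_request(0, "workspace/executeCommand", params, function(err, result)
    if err or not result then
      vim.notify("Unable to discover test suites", vim.log.levels.ERROR)
      return
    end
    -- Flatten targets -> packages -> suites into picker entries
    local suites = {}
    for _, target in ipairs(result) do
      for _, pkg in ipairs(target.discovered or {}) do
        for _, child in ipairs(pkg.children or {}) do
          table.insert(suites, {
            name = child.fullyQualifiedName,
            target = target.targetUri,
          })
        end
      end
    end
    vim.ui.select(suites, {
      prompt = "Test suite to run:",
      format_item = function(item)
        return item.name
      end,
    }, on_choice)
  end)
end
```

A telescope-based picker would slot in the same place as `vim.ui.select` here.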
- Then, when the user selects a specific test, another command is sent to the server, and Metals responds with the information we need for `nvim-dap`.
[Trace - 09:01:41 AM] Received request 'workspace/executeCommand - (9)'
Params: {
"command": "debug-adapter-start",
"arguments": [
{
"targets": [
{
"uri": "file:/Users/ckipp/Documents/scala-workspace/sanity/Sanity/test/?id\u003dSanity.test"
}
],
"dataKind": "scala-test-suites",
"data": [
"example.HelloSuite"
]
}
]
}
[Trace - 09:01:41 AM] Sending response 'workspace/executeCommand - (9)'. Processing request took 94ms
Result: {
"name": "example.HelloSuite",
"uri": "tcp://127.0.0.1:52270"
}
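A minimal sketch of what the hand-off to `nvim-dap` could look like, given the `{ name, uri }` response above. The `tcp://host:port` parsing and the adapter/configuration shape are assumptions for illustration; nvim-metals wires this up differently internally.

```lua
-- Sketch: connect nvim-dap to the debug adapter Metals just started.
-- The response uri has the form tcp://127.0.0.1:<port> (see trace above).
local dap = require("dap")

local function attach_to_test_session(response)
  local host, port = response.uri:match("tcp://(.+):(%d+)")
  -- Hypothetical adapter definition pointing at the running server;
  -- shown only to illustrate the shape of a "server" adapter.
  dap.adapters.scala = function(callback)
    callback({ type = "server", host = host, port = tonumber(port) })
  end
  dap.run({
    type = "scala",
    request = "launch",
    name = response.name,
  })
end
```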
- At this point we have everything we need to hand off to `nvim-dap`. There would be some communication between the DAP server and client, and what we want to capture at the end is the `testResult`.
- In order to capture that test result we'll need to add an entry into the `nvim-dap` listeners table for the `testResult` event.
The `testResult` trace looks like this:
[Trace][10:33:29 AM] Sent notification:
{
"type": "event",
"seq": 24,
"event": "testResult",
"body": {
"category": "testResult",
"data": {
"suiteName": "example.HelloSuite",
"duration": 103,
"tests": [
{
"kind": "failed",
"testName": "example.HelloSuite.numbers",
"duration": 103,
"error": "* example.HelloSuite.numbers - munit.ComparisonFailException: /Users/ckipp/Documents/scala-workspace/sanity/Sanity/test/src/example/HelloSuite.scala:7\n6: val expected \u003d 43\n\u001b[7m7: assertEquals(obtained, expected)\u001b[0m\n8: }\nvalues are not the same\n\u001b[1m\u003d\u003e Obtained\u001b[0m\n42\n\u001b[1m\u003d\u003e Diff\u001b[0m (\u001b[91m- obtained\u001b[0m, \u001b[92m+ expected\u001b[0m)\n\u001b[91m-42\u001b[0m\n\u001b[92m+43\u001b[0m"
},
{
"kind": "passed",
"testName": "example.HelloSuite.hello",
"duration": 103
}
]
},
"type": "testResult"
}
}
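Hooking that event could look something like this. nvim-dap dispatches custom events to listeners keyed as `event_<eventName>`, so a `testResult` listener is registered under `event_testResult`; the listener name and the reporting logic are placeholders.

```lua
-- Sketch: capture the custom testResult DAP event shown in the trace
-- above and print a one-line summary per test.
local dap = require("dap")

dap.listeners.after["event_testResult"]["nvim-metals-tests"] = function(_, body)
  local data = body.data
  for _, test in ipairs(data.tests or {}) do
    local status = (test.kind == "passed") and "✓" or "✗"
    print(string.format("%s %s (%dms)", status, test.testName, test.duration or 0))
  end
end
```

Instead of printing, this is the spot where a side panel or extmarks would be updated with pass/fail state.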
The entire flow, diagrammed out, will more or less be this:
┌───────────┐ ┌──────┐ ┌────────┐ ┌──────────┐
│nvim-metals│ │Metals│ │nvim-dap│ │dap-server│
└─────┬─────┘ └──┬───┘ └───┬────┘ └────┬─────┘
│workspace/executeCommand (metals.discover-test-suites)│ │ │
│──────────────────────────────────────────────────────> │ │
│ │ │ │
│ workspace/executeCommand (response, test suites) │ │ │
│<────────────────────────────────────────────────────── │ │
│ │ │ │
│ workspace/executeCommand (debug-adapter-start) │ │ │
│──────────────────────────────────────────────────────> │ │
│ │ │ │
│ workspace/executeCommand (response, name and uri) │ │ │
│<────────────────────────────────────────────────────── │ │
│ │ │ │
│ dap.start() │ │ │
│─────────────────────────────────────────────────────────────────────────> │
│ │ │ │
│ │ │ │
│ │ │ ───────────────────>
│ │ │ │
│ │ │ │
│ │ │ <───────────────────
│ │ │ │
│ forward the testResult event │ │ │
│<───────────────────────────────────────────────────────────────────────── │
┌─────┴─────┐ ┌──┴───┐ ┌───┴────┐ ┌────┴─────┐
│nvim-metals│ │Metals│ │nvim-dap│ │dap-server│
└───────────┘ └──────┘ └────────┘ └──────────┘
There are also some other plugins we can look at for inspiration, listed in mfussenegger/nvim-dap#414. For now it's probably easiest to just try to implement this straight in nvim-metals to see what all is needed, and then possibly strip it out if it can be generalized enough to be usable for others as well.
I don't know if this is the appropriate issue to comment on, but it would be nice if we could grep test cases (especially with telescope???). Another thing I miss from using IntelliJ is the ability to rerun the last test.
> I don't know if this is the appropriate issue to comment on, but it would be nice if we could grep test cases (especially with telescope???).
Huh, this is actually a great idea. Up until this point I've been fixated on trying to mimic the way the Testing API works as a tree, but I guess there's no reason to do that if we don't want to. Being able to filter through your tests in telescope is totally doable. If I'm not mistaken, Metals already has everything we'd need for this; we'd just need to add some plumbing. I may play around with this. Thanks for the idea @lenguyenthanh!
> I don't know if this is the appropriate issue to comment on, but it would be nice if we could grep test cases (especially with telescope???). Another thing I miss from using IntelliJ is the ability to rerun the last test.
I'm not fully sure what your workflow looks like, but this is actually already possible with DAP. For example, let's say you just ran `FooSpec.scala` and then make some changes; you can then do a `require("dap").run_last()`, which will run the last configuration and run `FooSpec.scala` again.
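For convenience, that can be bound to a key; the mapping and description here are just one possible choice.

```lua
-- Sketch: re-run the last DAP configuration (e.g. the test you just
-- ran) after making changes. The <leader>dl mapping is arbitrary.
vim.keymap.set("n", "<leader>dl", function()
  require("dap").run_last()
end, { desc = "Re-run last test/debug session" })
```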
> I'm not fully sure what your workflow looks like, but this is actually already possible with DAP. For example, let's say you just ran `FooSpec.scala` and then make some changes; you can then do a `require("dap").run_last()`, which will run the last configuration and run `FooSpec.scala` again.
Thanks @ckipp01, I haven't used DAP yet. I'll try it soon.
Hey @ckipp01!
I'm taking a look into the `metals-update-test-explorer` client command, since neither the `metals.discover-test-suites` nor the `metals.discover-tests` server commands seem to work for me.
I see that Metals sends me the data about test suites and then about all the test cases located in those suites. So it is smart enough to find all the test cases 🚀 But I can't find a way to tell Metals to run only a specific test case rather than the whole class. Do you know if Metals supports running only a specific test case?
> But I can't find a way to tell Metals to run only a specific test case rather than the whole class. Do you know if Metals supports running only a specific test case?
Hey @antosha417! So the short answer is yes, Metals can tell you about individual tests and which ones can be run. The tricky part is that it's sort of modelled off how the debug/test API of VS Code works, so it needs to be fully built from scratch to implement in Metals. All the commands you mention above can be used for this in various ways. That's what originally prompted me to create this issue, although the graph I created above is a bit out of date now. If you dig through https://scalameta.org/metals/docs/integrations/new-editor you can see a few examples and some minimal descriptions. I know there are some full graphs showing the flow somewhere, but they might be buried in the PR descriptions/conversations.
I'm not 100% sure how to proceed tbh. I originally wanted to do a tree-like thing (I even started on it), but I know that I won't personally be happy with it since I don't really use tree views. If I were to work on this again, I'd probably explore using those commands to get the tests for a specific URI, and then use extmarks to mark the lines that hold the tests and trigger them based off that, or code lenses. Something that wouldn't require a tree. However, I haven't looked into this in a while.
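The extmark idea could be sketched like this. The namespace name, the `{ line, name }` input shape, and the virtual-text styling are all placeholders; the positions would actually come from Metals test discovery.

```lua
-- Sketch: mark lines that hold tests so they can be triggered without
-- a tree view. `tests` is assumed to be a list of { line, name } pairs
-- (0-indexed lines), e.g. derived from the discovery response.
local ns = vim.api.nvim_create_namespace("metals-tests")

local function mark_tests(bufnr, tests)
  vim.api.nvim_buf_clear_namespace(bufnr, ns, 0, -1)
  for _, test in ipairs(tests) do
    vim.api.nvim_buf_set_extmark(bufnr, ns, test.line, 0, {
      virt_text = { { "▶ " .. test.name, "Comment" } },
      virt_text_pos = "eol",
    })
  end
end
```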
Dropping https://github.com/nvim-neotest/neotest in here which I just discovered. This might work nicely. When I get some time I plan on digging into what an adapter could look like for nvim-metals.
`neotest` uses `tree-sitter` to discover tests though; isn't that a completely different strategy from what you were aiming for?
> https://github.com/nvim-neotest/neotest
Ahhh, dang, that's a bummer actually. I read too quickly, and when I looked at the interface needed I thought it was generic:

function NeotestAdapter.discover_positions(file_path) end

I didn't look as closely as I should have at the `@return`. I thought this was generic and that you could just use whatever to spit out a list of positions. With the DAP integration we already have, we can get a list of all the positions of tests, so I thought we could use that inside of `discover_positions`, but that seems to not be the case 🤔
Maybe it is generic 🤔 The plugins I've looked at so far are tree-sitter based, though, and most of the helpers are for tree-sitter. I guess you can do whatever you want in that interface as long as you convert the result into a `neotest.Tree`.
I think it can be generic. Treesitter just seems to be the most common way of doing this so far. Other than that, you just have to fulfill a specific interface/`Tree` for the adapter. I don't think `neotest` actually cares how you do it, just the interface/structure of your adapter.
Just wanted to let you know I've added a neotest-scala adapter for `neotest`. It uses `treesitter` to discover positions and the sbt/bloop CLI to run tests. In the case of debugging it uses `nvim-dap` to start the debug session, but it still leverages `treesitter` to discover test positions and pass them to the debug adapter (without using the test discovery endpoint from Metals).
Now that there is a neotest extension and we also have a built-in test explorer via https://github.com/scalameta/nvim-metals/pull/474, I think we should be good to close this. We can definitely keep improving these, but the core is there 🚀