
Feature request: support for Google Test

[Open] alfaix opened this issue 1 year ago • 1 comment

Hello!

First of all, thanks for all the work put into this plugin, it's great and I never want to run tests in a separate tmux pane ever again :)

On to the issue: I'm working on creating an adapter for the Google Test framework for C++. It is not supported by vim-test, as far as I know, due to architectural difficulties. There is a separate plugin that ports that support, but the functionality is limited.

The plugin that I'm writing can be found in my repo. It works, but many features are still WIP, so I decided to open this issue in case somebody else wants to work on this (which would be much appreciated!), and to bring up some issues and suggestions.

So far I've only hit a single issue: test discovery breaks down when opening a file outside the current project. If I'm working on /home/me/myproject/foo.cpp and go-to-definition in /usr/include/something.h, the whole filesystem tree gets parsed, which breaks Neovim in a number of ways, from exceeding the open-file ulimit to outright freezing while it tries to parse every test file it finds. The discovery mechanic seems way too eager to discover :) Similar behavior has already been mentioned here and is probably being worked on, but if I can help, I'd love to.

Furthermore, if it's okay, I would also like to suggest a couple of minor improvements. If you think they are a good fit for the plugin, I think I can add them myself.

  1. Support canceling a run from build_spec. If an adapter returns nil from build_spec, an error happens. With Google Test, the adapter has to find the executable to run. Sometimes that may require user input, and I would like to give the user an opportunity to cancel during that input (e.g., "enter path to the executable, empty to cancel"). Otherwise the user has to press <C-C> and see errors that have nothing to do with their input.
  2. Consider supporting errors in a different file. Use case: a test calls a function in another file, and that function throws an error. Do we want to put a diagnostic at the line where the error happened? Currently the only way to report this would be to display the error in the test's short summary. However, printing that error in a different file could result in hundreds of tests reporting the same error in that one file, so maybe it's best left as is.
  3. Keeping previous results (in a text file somewhere) would be helpful. I think pytest does this best, creating a /tmp/pytest-of-<username>/pytest-<counter> directory per run. I implemented something similar myself for Google Test (code here); perhaps it would be generally useful? I sometimes check old test runs to see when it all went wrong.
  4. Providing an interface for an adapter to store persistent state on disk would be nice. Adapters may want to keep some small state. Of course, they can store it themselves under stdpath('data'), but it would be nice to keep all test state in a centralized fashion. My particular adapter wants to store a simple JSON map associating test files with the executables they are compiled into; see the sketch after this list.
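
A minimal sketch of what an adapter can already do under stdpath('data') (the helper names and file name here are illustrative, not an existing or proposed neotest API):

```lua
-- Illustrative helpers: persist a { [test_file] = executable } map as JSON.
local state_path = vim.fn.stdpath("data") .. "/neotest-gtest-state.json"

local function load_state()
  local file = io.open(state_path, "r")
  if not file then
    return {}
  end
  local contents = file:read("*a")
  file:close()
  local ok, decoded = pcall(vim.json.decode, contents)
  return ok and decoded or {}
end

local function save_state(state)
  local file = assert(io.open(state_path, "w"))
  file:write(vim.json.encode(state))
  file:close()
end
```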

Finally, I need some guidance on parametrized tests: is there a definitive way to work with them? E.g., @pytest.mark.parametrize in pytest or TEST_P in Google Test. This is really multiple tests masquerading as one, and I'm not sure how to report them: should they be different nodes in the tree? Or should it be a single test, with the adapter reporting that all errors happened in that one test whenever it's run?

Sorry for jamming all this into a single issue; if you think any of these should be worked on, I'll create separate ones.

alfaix · Jul 07 '22 14:07

Hey, thanks for the report/suggestions! Great to hear you're writing an adapter :grin: GTest is one that has always evaded support in vim-ultest as well: https://github.com/rcarriga/vim-ultest/issues/104#issuecomment-1046296065.

test discovery breaks down when opening a file outside the current project. If I'm working on /home/me/myproject/foo.cpp and go-to-definition in /usr/include/something.h, the whole filesystem tree gets parsed

That shouldn't happen AFAIK; the only case where it would is if the adapter says the file is a test file. Is that what's happening? If not, grabbing the logs should help diagnose it.
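
For reference, a minimal sketch of an adapter keeping discovery inside the project by being strict in is_test_file (the root markers and the _test.cpp suffix are illustrative assumptions, not requirements):

```lua
local lib = require("neotest.lib")

local adapter = { name = "neotest-gtest-sketch" }

-- Resolve the project root from common C++ markers (illustrative choices).
adapter.root = lib.files.match_root_pattern("compile_commands.json", ".git")

-- Claim only files under the detected root, so jumping into
-- /usr/include/something.h never drags discovery outside the project.
function adapter.is_test_file(file_path)
  local root = adapter.root(file_path)
  if not root or not vim.startswith(file_path, root) then
    return false
  end
  return vim.endswith(file_path, "_test.cpp")
end
```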

  1. Support canceling a run from build_spec.

This makes sense, though nil isn't the right cancel signal: right now, if an adapter returns nil for a dir, file, or namespace, neotest tries to break the position up into its children and run those instead (this is how neotest-vim-test works). Your user would then be prompted once for each test in a suite if they ran the suite and cancelled. Instead, I've added error handling around build_spec so that if an error is thrown, the user is notified via vim.notify with the error message. So you can throw "No executable provided" or something along those lines.
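
To illustrate, a minimal sketch of a build_spec that cancels this way (the prompt and the executable lookup are assumptions for this example, not part of the adapter API):

```lua
function adapter.build_spec(args)
  -- Hypothetical lookup: prompt when the executable can't be inferred.
  local executable = vim.fn.input("Path to test executable (empty to cancel): ")
  if executable == "" then
    -- A thrown error is caught by neotest and surfaced via vim.notify.
    error("No executable provided")
  end
  return {
    command = { executable, "--gtest_color=no" },
  }
end
```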

  2. Consider supporting errors in a different file.

Yep, that's something I've thought about (though adapters would probably still want to provide error messages for the test file, as there's no guarantee the other file is open/visible). I'd add a path field to neotest.Error, and the diagnostics consumer would then need to be updated to handle it. Happy to take PRs for that :smile:
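
As a sketch, a result using the proposed field might look like this (path is the addition under discussion, not something neotest.Error carries today; the id and file paths are made up):

```lua
-- `results` is the table an adapter returns from its results() callback.
local results = {}
results["gtest::MySuite::MyTest"] = {
  status = "failed",
  errors = {
    -- today: a message and a line within the test file itself
    { message = "EXPECT_EQ failed: expected 1, got 2", line = 41 },
    -- proposed: also name the file where the error actually occurred
    { message = "assertion failed in helper", line = 12, path = "/home/me/myproject/helper.cpp" },
  },
}
```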

However, printing that error in a different file could result in hundreds of tests reporting the same error in that one file, so maybe it's best left as is.

The diagnostics consumer could handle that with a limit per line/buffer? I don't think that'd be a blocker anyway.

  3. Keeping previous results would be helpful (in a text file somewhere).

Interesting idea. I'd probably keep that as functionality solely in the output consumer, which is quite simple, so I don't mind making it a bit more complex. I'm thinking something like a version/run_count argument that works like a Python list index, e.g. -1 gives the last run, -2 the second last, 0 the first, etc. That's completely off the top of my head though, so I'm totally open to ideas on the interface.
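
Purely as a strawman for that interface (the run argument is hypothetical; only the plain open() call exists today):

```lua
local neotest = require("neotest")
neotest.output.open({})            -- today: output for the most recent run
neotest.output.open({ run = -1 })  -- hypothetical: the last run, explicitly
neotest.output.open({ run = -2 })  -- hypothetical: the run before that
neotest.output.open({ run = 0 })   -- hypothetical: the first recorded run
```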

  4. Providing an interface for an adapter to store a persistent state to disk would be nice.

I'm not sure there's much point in neotest providing a centralised location rather than adapters just using stdpath("data"), as you say. From an interface perspective this would have to be arbitrary data, so I don't see the advantage.

Finally, I need some guidance on parametrized tests: is there a definitive way to work with them? E.g., @pytest.mark.parametrize in pytest or TEST_P in Google Test. This is really multiple tests masquerading as one, and I'm not sure how to report them: should they be different nodes in the tree? Or should it be a single test, with the adapter reporting that all errors happened in that one test whenever it's run?

So this has been raised for neotest-go as well. neotest-pytest just aggregates each parametrized instance into the single result (this is one of the reasons a single result can have multiple errors) and then displays all outputs together for the "short" output. The neotest-go author discussed attempting to parse the parameters with treesitter, since they are usually static (vscode-go does this to some extent: https://github.com/nvim-neotest/neotest-go/pull/1#issuecomment-1152874868). However, this is getting into territory where treesitter may be a bit limited, and you might instead want to use the test runner's discovery if possible (or LSP). I may do that for pytest, as neotest-pytest already integrates with it and so can find that information, but it could be a lot slower (pytest discovery isn't even close to treesitter speed).
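
For example, the aggregation approach might look roughly like this in an adapter's results handling (all names are illustrative; instances stands for whatever the adapter parsed per parameter set):

```lua
-- Fold every parametrized instance of a TEST_P into one neotest result,
-- collecting the errors of each failing instance into a single list.
local function aggregate(instances)
  local status, errors = "passed", {}
  for _, instance in ipairs(instances) do
    if instance.status == "failed" then
      status = "failed"
      vim.list_extend(errors, instance.errors or {})
    end
  end
  return { status = status, errors = errors }
end

-- e.g. results["MySuite.MyParamTest"] = aggregate(parsed_instances)
```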

rcarriga · Jul 11 '22 21:07