nvim-go
Async `go test` run into Vim's loclist with the help of the `-json` flag
- using the `go test -json` output format to fill Vim's location list asynchronously
- unit tests for relative path calculation, package root calculation, and a few JSON outputs to test the actual parsing
- fixing situations where sub-tests weren't run (Go's table test cases)
- integrated into the current notification system
- small updates to the current documentation
- future todo: maybe separate the use of `-json` into a dedicated flag, etc.
👍🏻 The functionality looks quite good. However, it will take me a few more days to review because the code changes are huge :)
Thanks! And thanks for the great plugin. Minimalistic was exactly what I was looking for 👍
I did try to keep feature-related changes in their own files as much as possible, only changing those existing files I had to. That's the reason there might be some parts that could be common for the whole project but aren't yet.
I understand what this feature provides. However, it's like a brand-new implementation of the `go test` command. Is it possible to combine it with the previous implementation?
Async could be the default behaviour, while loclist output is yet another choice apart from the popup window.
I understand what you mean. However, I had a few considerations in mind when I started writing the PR:
- I don't want to parse `go test` output the old way, i.e. without the help of the `-json` flag, and for that reason I ended up binding the loclist behaviour to the Go tools' `-json` flag. I did consider adding a new config flag like you have for the lint, i.e. telling where the test output should be sent. That's something we should do if we change the behaviour. (I have already thought about an option to use both the loclist and virtual text for test results. They aren't exclusive; you could use both at the same time if you wanted.)
- I decided to let the end user set the `-json` flag herself, also because then we would get feedback on whether people want to use tools like `gotestsum`, `gotestfmt`, etc. Both consume `-json` output, so we would need the Unix `tee` command to still be able to parse test failures into the loclist. That's something I thought would be easier to leave to the user at this point. We have to start somewhere to get end-user feedback.
- I don't know if a popup (or modal window) is the right fit for long-lasting async tasks. The loclist is better, and it's the original Vim way. How should the lifecycle of the popup work? What if the user closes it in the middle of the job?
However, I'm still open to suggestions, but we should decide whether it's OK to force the `-json` flag on when it isn't set and the plugin's config says that the loclist should be used for test results instead of the popup.
I understand that you don't want to have overlapping features in the codebase, and that's OK for me too. However, if we're going all-in on async for `go test`, maybe we should consider using the same approach for linting as well? If so, the refactoring would be even larger, because all external tools (`go test`, linters, etc.) could share most of the code.
Any update on this?
I can return to working on this once I get answers to my previous questions. Just so you know, I have been using my fork ever since. It works perfectly, and it's needed for repos that have long-running test suites.
The original idea was to first bring in the async functionality as a new, separate code path, then merge the sync functionality with the linter's, and make async the default for tests after receiving user feedback. But as I said, I'm open to suggestions.
@lainio Thanks for your response. I have tried your fork and this is what I expected! I agree that the loclist is the better way, since failed test cases and results remain in the loclist and the user can check and jump to the relevant lines while addressing them.
For configuration to switch the behaviour, I think adding a new config option like `test_output` for choosing between popup and loclist is more intuitive and makes the behaviour easier to understand; we can also support other values for it when we want to add other output styles in the future.
Could someone from the repo's maintainers tell me how to move forward?
As I have said in my previous comments, if we decide to change the whole test-run system to async, it's a big deal for the UX. Modal popups/dialogs/windows don't work well while an async/background task is running. The loclist and virtual text are perfect for that, though maybe only together 🤔
How should the modal, end-result popup window work in an async test run? Think about a case where your test run takes >10 seconds. Should everything be blocked during that time, or should the popup be brought up when the results are ready? (And how disturbing could that be?)
For that reason, `test_output=popup` is almost impossible from the UX point of view with async/background test runs, unless someone comes up with a spec for how it should work.
FTR, I'm eager to hear any suggestions for the UX design (including configs) and for how to merge the code. What's good about the current PR is that it mostly only adds new code, with minimal merge conflicts.