Code completion when editing a macro feels slow
Now that I have had a chance to play with macros, I have noticed that when editing a macro itself, it feels pretty sluggish.
It feels to me like the compilation of the macro is blocking regular code completion and other things. This means any time you edit the macro all errors and code completion features take a few seconds to work.
Would it be possible to offload the compilation to a different isolate? Or is something else going on?
"A few seconds" seems too much. I wonder if it actually crashes and restarts. Do you have anything in ~/.dartServer/.analysis-driver/ex? I have not tried myself yet.
Yes, running macros is a part of linking summaries, and the analyzer is single threaded. The current design of the analyzer does not allow us to have multiple analysis operations running.
I do have some files under there but I am not seeing new ones appear as I continue to make edits. I am also not seeing vscode complain about analyzer crashes.
What specifically do you edit, so that you observe the slowness?
> What specifically do you edit, so that you observe the slowness?
Really any edit within a macro implementation. So for example, when you start writing a macro, I used the quick fix to add the override for the interface I was implementing. Then I go into the body and type `builder.`. It takes a while to get any autocompletions for this.
Similarly, once I finished that line, if I add another `builder.` line after that, it takes a while again.
FWIW, I was not able to reproduce it so far, with the previous version of JsonSerializable, copied into my test package.
> FWIW, I was not able to reproduce it so far, with the previous version of JsonSerializable, copied into my test package.
Hmm ok, I was just editing the observable macro in the language repo (under working/macros/example).
Note that it is important to make edits you haven't made before, I noticed that going back to previous states of the file seems to be much faster (I think something is caching results)
True, we do cache both the compiled macro implementation (in the form of kernels) and the resulting linked element models. And I was trying with new code.
I don't see how it could cause multi-second delays. If compilation of a macro takes, I don't know, about 100 ms, and running it then maybe 30 ms, that is all the delay you should observe.
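The two-level caching described above can be pictured as a content-keyed store: the compiled macro kernel is looked up by a hash of the macro sources, so reverting a file to a previously seen state is a cache hit and skips recompilation. Here is a rough Python sketch of that idea (illustrative only, not the analyzer's actual code; `compile_fn` stands in for the expensive kernel compilation):

```python
import hashlib

class KernelCache:
    """Cache compiled macro kernels keyed by a hash of the macro sources."""

    def __init__(self, compile_fn):
        self._compile = compile_fn  # stands in for the expensive kernel compilation
        self._store = {}            # content hash -> compiled kernel
        self.misses = 0

    def get(self, sources: str):
        key = hashlib.sha256(sources.encode()).hexdigest()
        if key not in self._store:
            self.misses += 1                           # slow path: compile and remember
            self._store[key] = self._compile(sources)
        return self._store[key]                        # fast path for any previously seen state

cache = KernelCache(lambda src: f"kernel of {len(src)} bytes")
cache.get("macro v1")   # miss: compiles
cache.get("macro v2")   # miss: new content
cache.get("macro v1")   # hit: going back to an earlier state is cheap
```

This matches the observation above that returning to previous states of the file feels much faster: only genuinely new content pays the compilation cost.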
FWIW, I ran a performance measurement yesterday, mostly to see how much "beautifying" code will cost.
https://dart-review.googlesource.com/c/sdk/+/347500/2/pkg/analyzer/test/src/dart/resolution/macro_test.dart#570
The timings eventually slide under 700 ms for running on 100 files, each with a unique `UserX` class.
Note that we use `EmptyByteStore()`, so there is no caching between runs.
We run the series of 100 files multiple times, each time compiling to kernel once, and then using it 100 times.
And these 700 ms include both kernel compilation and running over 100 files.
OK, I measured it, compilation takes 120 ms.
So, you have to pay 120 + 10, under 150 ms, not multiple seconds.
Eventually, when everything is hot.
```
Hot
[i: 20]
[KernelCompilationService][timer: 113 ms]
[time: 642 ms]

Cold
[KernelCompilationService][timer: 178 ms]
[i: 0]
[time: 1216 ms]
```
So, my suspicion is that it is a crash and restart.
Ok, I do have other macros open which do fail to run; I wonder if it is related to that. I think if in general the expectation is in the low 100s of ms, that should not be noticeable (or at least would be fine from a UX perspective, even if it is somewhat noticeable).
Fwiw, with the latest fixes to avoid the crashes I am getting better behavior for sure, but it is still definitely >1s, and I can hear my fan on my computer working hard 🤣 . My guess is we have N separate compilations happening here, one for each keystroke?
No, there should be exactly one compilation; we have a lock on `KernelCompilationService`.
And in general we have a single execution flow in the analyzer.
This is still surprisingly slow. Maybe too old computer? :-)
> No, there should be exactly one compilation, we have a lock on `KernelCompilationService`.
Can we queue up a compilation for each edit, though? So it is actually doing back-to-back compiles for each keystroke?
> Maybe too old computer? :-)
It is not crazy-fast per core, idk the exact specs (remote machine). It has 128 cores, it can just only use one! lol
> Can we queue up a compilation for each edit possibly though? So it is actually doing back to back compiles for each keystroke?
We accumulate file change requests, which wait until the current analysis operation is done. Then we apply all changes to files and take the next analysis operation, which might cause a kernel compilation. But it is just one compilation for all changes.
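The accumulate-then-apply behavior described above can be sketched as follows (an illustrative Python model, not the analyzer's implementation): keystrokes only queue changes, and all queued changes share a single recompilation when the next analysis operation starts.

```python
from collections import deque

class ChangeCoalescer:
    """Accumulate file-change requests while an analysis operation runs,
    then apply them all at once so they share a single recompilation."""

    def __init__(self):
        self._pending = deque()
        self.compilations = 0

    def file_changed(self, path):
        self._pending.append(path)  # keystrokes just queue up here

    def run_next_operation(self):
        if not self._pending:
            return set()
        # Apply every queued change before starting the next operation...
        changed = set(self._pending)
        self._pending.clear()
        # ...so all of them are covered by one kernel compilation.
        self.compilations += 1
        return changed

c = ChangeCoalescer()
for _ in range(10):          # ten keystrokes arrive while analysis is busy
    c.file_changed("macro.dart")
c.run_next_operation()       # one compilation, not ten
```

So even a burst of rapid edits should cost one compilation per analysis operation, not one per keystroke.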
> It is not crazy-fast per core, idk the exact specs (remote machine). It has 128 cores it can just only use one! lol
Ah, it might have slow cores though. That was my experience recently - my Mac is faster per core.
It is slower per core than my mac for sure - but not that much slower 🤣 .
~~I had an unused import (of an entire 3rd party library) in a file with a macro that I was editing.~~ ~~Now that I have removed this import LSP refresh is a lot faster, went down from 8-10s to 0.5s (I'm on AMD 7950x3d)~~ ~~It looks like importing a ton of code will significantly slow down the LSP.~~
I'm not sure what helped and why it is slow again. I'm back to square one where each code edit takes 10s for LSP to do its thing. It's like that in both VSCode and Neovim.
Is there anything I can check/provide to help debug this?
I did make one optimization late last week that should help some. We spent some time trying to look into where the time is going here, and much of it was in file system access (which actually has a layer of indirection through a socket connection).
I think that the real key here will be getting incremental compilation set up; currently it does a whole-world recompile of macros each time they change. Especially given that the analyzer already knows when files change, it could do these incremental edits efficiently. But I think there is a fair bit of plumbing that needs to happen for that to work.
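For illustration, here is a minimal Python model of the difference between a whole-world recompile and the incremental scheme proposed above. It is a sketch only: a real implementation would also have to track dependencies between files, which this deliberately ignores.

```python
class IncrementalCompiler:
    """Recompile only files marked dirty, instead of the whole world."""

    def __init__(self, files):
        self._compiled = {}
        self._dirty = set(files)  # everything is dirty on the first run
        self.units_compiled = 0

    def mark_changed(self, path):
        self._dirty.add(path)     # the analyzer already knows which files changed

    def compile(self):
        for path in self._dirty:
            self._compiled[path] = f"kernel:{path}"  # stand-in for real compilation
            self.units_compiled += 1
        self._dirty.clear()
        return self._compiled

ic = IncrementalCompiler(["a.dart", "b.dart", "c.dart"])
ic.compile()                 # first build compiles all 3 files
ic.mark_changed("b.dart")
ic.compile()                 # a subsequent edit recompiles just 1
```

The win grows with the size of the import graph, which is consistent with the earlier observation that importing a large third-party library made edits much slower.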
A different way we could do this: when we move to pubspec-specified macros
https://github.com/dart-lang/language/issues/3728
we could also switch to building macros only outside the CFE/analyzer, i.e. a package would have a "build macros I use" step (which could, for example, run on `pub get`) that builds them and stores them somewhere under `.dart_tool`.
As part of that change, "rebuild the macros here" would be something that you have to explicitly trigger if you are editing a macro, rather than it happening as you type.
This seems like a better user experience--compiling, running, analyzing and giving feedback from all users across the current package is not just slow, it's also noisy :)
Might be confusing for users: I changed the macro, but nothing updated.
If taken to the extreme, we should not do analysis as you type either.
Want to see errors and warnings? Run `dart analyze` on the command line :-P
Yes, there's certainly potential for confusion.
UX is hard ;)
Even working on small examples, I've been forced to create a smaller analysis context that excludes users of the macro, because the recompiles are slowing things down too much; I guess we'll want something better before launch.
Agreed with "P2" though, let's solve all the other problems first :)
One thing I really like about macros is being able to have the augmentation file for an example open, while I am writing the macro. Yes, it is somewhat slow to update today, but it is nice nonetheless and a good feature.
The part that is problematic here, imo, is that the analyzer blocks all other analysis on an asynchronous task (the compilation of the macro). If it could instead handle the tasks which are not blocked on that macro running, such as autocomplete in the macro library itself, that would be ideal.
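The idea of not blocking unrelated requests on the compilation task can be sketched with ordinary concurrency primitives. This is an illustrative Python/asyncio model, not the analyzer's architecture (the analyzer is single-threaded with one execution flow, as noted above, which is exactly why this would require design work): a completion request that does not depend on the macro's output is answered while a slow compilation is still in flight.

```python
import asyncio

async def compile_macro():
    """Stand-in for the slow, offloaded macro compilation."""
    await asyncio.sleep(0.2)  # pretend compilation takes 200 ms
    return "kernel"

async def complete(prefix):
    """Completion request that does not depend on the macro's output."""
    return [s for s in ("builder", "buildContext") if s.startswith(prefix)]

async def main():
    # Kick off the compilation without awaiting it...
    compilation = asyncio.create_task(compile_macro())
    # ...so unrelated requests are served while it runs.
    suggestions = await complete("build")
    kernel = await compilation
    return suggestions, kernel

suggestions, kernel = asyncio.run(main())
```

In this model the completion result is available long before the compilation finishes; requests that genuinely need the macro's output would still have to await it.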
That is theoretically possible, but as Konstantin mentioned elsewhere would require some work to change the way we're currently analyzing code. I don't know how big that effort would be, so I don't know the extent to which that work would impact other efforts. We'd need to sketch out a design first so that we can make some estimates.
Yep that is fine, and as mentioned earlier I would treat this as a P2, no need to immediately fix as long as we think there is a path to making it better.