Tracking issue for performance improvements
I'd like to make a series of performance improvements to Expert and thought it'd be good to get a list of major pain points/issues:
- [ ] Completion: this should be near-instant (<200ms) even on slow machines and larger projects; currently on my slow laptop it's 300-600ms.
- [ ] Startup time:
  - [ ] A lot of time is spent applying namespaces when rebuilding the engine (it rebuilds on every start for me; not sure if it should).
  - [ ] After startup, the first call to `Modulestore.build()` in ElixirSense (to serve the first completion) takes around 20s; subsequent calls are much more reasonable.
I haven't noticed anything else being frustratingly slow, but my usage is limited, and sometimes it's hard to tell whether a feature is broken or just extraordinarily slow (like go-to-definition).
For completions, you might research how Next LS did it.
Completions there were never slow and had no startup time.
However, there were some features I never implemented that ElixirSense does, like behaviour callbacks and types.
Ahh, good point of reference, thanks. Completion is slow because ElixirSense is set up in a way that is slow and does a bunch of things that are very slow. I've experimented with fixing that and adding caching and got a 4-6x improvement, but it's still doing a bunch of redundant work.
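For anyone curious what "adding caching" can look like here, below is a minimal sketch of the idea: memoize the expensive completion-environment build, keyed by document URI and version, so repeated completion requests in an unchanged file skip the rebuild. The module and function names are hypothetical, not Expert's actual API, and the real experiment may have invalidated differently.

```elixir
defmodule CompletionCache do
  # Hypothetical sketch: cache the slow environment build per
  # {uri, version}. `build_env` stands in for the expensive
  # ElixirSense-style work and is only called on a cache miss.

  @table :completion_env_cache

  def start do
    :ets.new(@table, [:named_table, :public, read_concurrency: true])
  end

  def env_for(uri, version, build_env) do
    case :ets.lookup(@table, {uri, version}) do
      [{_key, env}] ->
        env

      [] ->
        env = build_env.(uri)
        # Drop stale versions of the same document before inserting.
        :ets.match_delete(@table, {{uri, :_}, :_})
        :ets.insert(@table, {{uri, version}, env})
        env
    end
  end
end
```

Keying by document version means an edit naturally invalidates the entry, which is one plausible scheme for a language server; it doesn't help with the redundant work inside a single build, which is the remaining problem mentioned above.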
From a quick skim, Next LS is much simpler than ElixirSense and might be a better starting point for a completion rewrite.
I'll do some experiments and keep you all informed.
For context, Next LS's implementation uses IEx's completion code (which I think ElixirSense was also originally based on) and uses Spitfire to build the environment (imported functions, aliased modules, etc.).
I took a look at Next LS, and I agree: it's much, much better. The performance is basically perfect out of the box. Not surprising, given the code is doing so much less and already has basically all the optimisations I want to make to ElixirSense.
I think using the Next LS design as a base and porting the extra ElixirSense features over would be the way to go.
Given the plans to integrate Spitfire, maybe it's easier if we do that first and then do completion, basically starting from your Next LS code.
In my own limited testing, interestingly, errors are updated much more quickly with Expert than with Next LS, and Next LS uses a huge amount of CPU (it maxes out all my cores for 5s) after I make a change, whereas Expert uses almost none.
Have you benchmarked Spitfire? Is it possible it's very resource hungry compared to the parser we have?
In Expert the diagnostics are reported faster because it compiles as you type, and I believe it's compiling just that one file.
Next LS compiles on save using the normal compiler.
Spitfire is just used to parse the AST for document symbols and the completion environment.
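The single-file, compile-as-you-type approach can be sketched roughly like this. This is a hedged illustration, not Expert's actual code: the module name is made up, and `Code.with_diagnostics/1` requires Elixir >= 1.15.

```elixir
defmodule SingleFileDiagnostics do
  # Sketch: compile one document's source in isolation and capture
  # compiler warnings/errors as structured diagnostics, the way a
  # language server can re-check just the edited file on each change.
  # Names here are hypothetical, not Expert's implementation.
  def check(source, file) do
    Code.with_diagnostics(fn ->
      try do
        # Code.compile_string/2 raises on compile errors, so we
        # rescue and report them as an error result instead.
        {:ok, Code.compile_string(source, file)}
      rescue
        err -> {:error, err}
      end
    end)
  end
end
```

Because only the edited file is compiled, the cost per keystroke stays small, which would be consistent with the low CPU usage observed for Expert compared with a full on-save compile.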