CudaText
LSP features are not working on other half after "split tab" action.
- install and configure cuda_lsp addon
- open some file to test LSP features
- ensure they are working fine
- use "split tab: split horizontally" action in Command palette
- try to use any LSP features on other half
- fail
The Python IntelliSense plugin works OK in the other half, so it is an LSP plugin issue, not CudaText's. I need to check that it uses "ed" properly. The "ed" object is always the focused editor.
Related APIs: ed.get_prop / ed.set_prop with PROP_HANDLE_SELF, PROP_HANDLE_PRIMARY, PROP_HANDLE_SECONDARY.
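For example (just a sketch, assuming the standard cudatext plugin API; this is not code from the plugin), these props let a plugin tell which half the focused "ed" belongs to:

from cudatext import *

def is_secondary(e):
    # the secondary half has its own handle, different from the tab's primary editor
    return e.get_prop(PROP_HANDLE_SELF) != e.get_prop(PROP_HANDLE_PRIMARY)

def primary_of(e):
    # Editor(handle) wraps an existing editor handle
    return Editor(e.get_prop(PROP_HANDLE_PRIMARY))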
I searched the LSP plugin code but failed to find the reason. I only found that the internal LSP-msg-handler is not called at all when we are in the secondary editor. This msg-handler calls the ed.complete_alt() API, and only for the primary editor.
handler is here:
file language.py, def _on_lsp_msg.
The problem is in the on_complete event:
@command
def on_complete(self, ed_self):
    doc = self.book.get_doc(ed_self)
    if doc and doc.lang:
        return doc.lang.on_complete(doc)
book.get_doc will return None, because there is a dictionary called docs where the keys are filenames and the values are docs, each with an ed member (the editor associated with that doc).
Since the editor is not the same in the other half of the split, the lookup returns None instead of the doc.
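Roughly like this (illustrative sketch only, simplified names, not the plugin's real code):

docs = {}   # filename -> doc; doc.ed is the Editor the doc was opened with

def get_doc(ed_self):
    doc = docs.get(ed_self.get_filename())
    # the doc was created with the primary editor, so for the secondary half
    # ed_self is a different Editor object/handle and the check fails
    if doc and doc.ed.get_prop(PROP_HANDLE_SELF) == ed_self.get_prop(PROP_HANDLE_SELF):
        return doc
    return None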
the simple solution I came up with looks like this:
diff --git a/lsp.py b/lsp.py
index c7660d9..3f93953 100644
--- a/lsp.py
+++ b/lsp.py
@@ -306,6 +306,7 @@ class Command:
             lang.update_tree(doc)
 
     def on_focus(self, ed_self):
+        self._do_on_open(ed_self)
         doc = self.book.get_doc(ed_self)
         if doc and doc.lang and doc.lang.tree_enabled:
             doc.lang.update_tree(doc)
So every time we click on the other half of the split, it triggers _do_on_open, where a doc is created (new_doc) for this editor and replaces the old doc for this file in the docs dictionary.
BUT...
Then I see a memory leak when switching focus from half to half, and I don't know why.
I think it's because of text_document.dict() in the did_open function of client.py (replacing it with None stops the leak).

But I'm not sure I know how to debug this correctly. Maybe it's not the fault of text_document.dict() and I'm looking in the wrong place? Maybe it sends the notification (textDocument/didOpen) correctly and then leaks on the response?
puzzled.
do you have experience with debugging memory leaks like these?
PS: we can think of another way to fix this Cuda_LSP split tab issue as well.
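For example (just an idea, untested): resolve the secondary editor to its primary before the doc lookup, so both halves map to the same doc and nothing extra gets opened:

def editor_for_lookup(ed_self):
    # if ed_self is the secondary half of a split tab, use the tab's primary editor
    h_self = ed_self.get_prop(PROP_HANDLE_SELF)
    h_primary = ed_self.get_prop(PROP_HANDLE_PRIMARY)
    return Editor(h_primary) if h_self != h_primary else ed_self

def on_complete(self, ed_self):
    doc = self.book.get_doc(editor_for_lookup(ed_self))
    if doc and doc.lang:
        return doc.lang.on_complete(doc)

(This assumes both halves show the same text.)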
do you have experience with debugging memory leaks like these?
It's strange that a Python script leaks; all leaks should be handled by the Python GC, no? If they are not, we need to write del <object>.
https://habr.com/ru/post/417215/
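To find where the memory goes, tracemalloc from the stdlib usually helps (a generic sketch, not tied to the plugin; run the snapshot code e.g. from two temporary commands around a few focus switches):

import tracemalloc

tracemalloc.start(10)                      # keep 10 frames per allocation
snap1 = tracemalloc.take_snapshot()
# ... switch focus between the halves several times ...
snap2 = tracemalloc.take_snapshot()
for stat in snap2.compare_to(snap1, 'lineno')[:10]:
    print(stat)                            # top lines that accumulated memory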
It's the programmer's (mine) fault, not Python GC's. Because my fix actually looks like a hack: we are doing on_open inside on_focus and not calling on_close anywhere afterwards.
Must think of a different way to handle this situation.
(Also we must not forget about the Differ plugin: it opens the second half with a different filename/filepath to compare, so there will be two records in the docs dictionary instead of one.)
for Differ plugin:
Editor.get_prop(PROP_EDITORS_LINKED, ''): enables sharing of the same text by the primary/secondary editors in a file tab.
because my fix actually looks like a hack: we are doing on_open inside on_focus and not calling on_close anywhere afterwards.
For me, your fix looks normal. The API doesn't tell us to call any on_close.
It grows by +150-300 KB every time you switch focus between the halves. Do you see that on Linux?
The Linux htop app cannot show a change as small as +200 KB.

RES column? 63004 bytes?
VIRT column: 510M (includes LSP server mem)
"VIRT represents how much memory the program is able to access at the present moment. RES stands for the resident size, which is an accurate representation of how much actual physical memory a process is consuming." https://stackoverflow.com/questions/6036154/detect-memory-leak-with-htop
aha, sorry. ok, RES memory grows after on_focus on each half! by +20...+100.
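To double-check the numbers without htop, the plugin could print its own RSS around on_focus; a rough sketch (Linux only, reads /proc/self/status):

def rss_kb():
    # resident set size (VmRSS) of the current process, in kB
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])
    return 0

# e.g. print(rss_kb()) at the start and end of on_focus and compare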
So we cannot solve it? You didn't try more, @veksha?
not tried.