pygls
Any example on how to use semantic tokens?
When I try to use @server.feature(TEXT_DOCUMENT_SEMANTIC_TOKENS) I see in the logs that it registers it as a provider, but nothing ever reaches this function.
When I try to use @server.feature(TEXT_DOCUMENT_SEMANTIC_TOKENS_FULL) or either of the other two options, I don't see it being registered in the server capabilities at all.
Can you provide an example of how to make it work?
I haven't used it yet. There might be a bug, but not sure when I will have time to take a closer look.
Did you find the solution in the meantime?
not yet..
I've had a play around with the example json language server in this repo and I've managed to get something to work, though it appears that there may be a bug or two in pygls. Here is the result with semantic tokens vs. the result without.

In the json-extension example server code I added the following, hardcoded to work with the test.json file in the screenshots above:
```python
from pygls.lsp.methods import (TEXT_DOCUMENT_SEMANTIC_TOKENS,
                               TEXT_DOCUMENT_SEMANTIC_TOKENS_FULL)
from pygls.lsp.types import (SemanticTokens, SemanticTokensLegend,
                             SemanticTokensOptions, SemanticTokensParams)


@json_server.feature(
    TEXT_DOCUMENT_SEMANTIC_TOKENS,
    SemanticTokensOptions(
        legend=SemanticTokensLegend(
            tokenTypes=["class", "keyword"],
            tokenModifiers=[]
        )
    )
)
def semantic_tokens(ls):
    """Used to signal to the client that we support semantic tokens."""


@json_server.feature(TEXT_DOCUMENT_SEMANTIC_TOKENS_FULL)
def semantic_tokens_full(ls, params: SemanticTokensParams):
    """A 'full' semantic tokens request."""
    return SemanticTokens(data=[
        0, 1, 7, 1, 0,
        0, 9, 5, 0, 0,
        0, 6, 6, 1, 0,
        0, 8, 8, 0, 0
    ])
```
Here's a quick breakdown of the above:
- Tokens are sent to the client as one long list of numbers, where each group of 5 numbers describes a single token.
- The first 3 numbers give the token's line number, start character and length, with the line and start character encoded relative to the previous token.
- The final 2 numbers specify the token's type and modifiers; the client uses them to index into the arrays given in the SemanticTokensLegend.

See the LSP Specification for full details.
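To make the delta encoding concrete, here is a small standalone sketch that decodes the hard-coded token data above into absolute `(line, character, length, type)` tuples. The `decode` helper is purely my own illustration (it is not part of pygls or the LSP libraries); it just applies the relative-encoding rules described above.

```python
# The legend from the example above: index 0 -> "class", index 1 -> "keyword".
token_types = ["class", "keyword"]

# The same flat list returned by the 'full' handler, 5 numbers per token:
# (delta_line, delta_start_char, length, token_type_index, token_modifiers)
data = [
    0, 1, 7, 1, 0,
    0, 9, 5, 0, 0,
    0, 6, 6, 1, 0,
    0, 8, 8, 0, 0,
]


def decode(data, token_types):
    """Turn LSP-style delta-encoded token data into absolute positions."""
    tokens = []
    line, char = 0, 0
    for i in range(0, len(data), 5):
        delta_line, delta_char, length, type_idx, _modifiers = data[i:i + 5]
        line += delta_line
        # The start character is relative to the previous token's start only
        # when both tokens are on the same line; otherwise it is absolute.
        char = char + delta_char if delta_line == 0 else delta_char
        tokens.append((line, char, length, token_types[type_idx]))
    return tokens


for token in decode(data, token_types):
    print(token)
# All four tokens sit on line 0, starting at characters 1, 10, 16 and 24,
# alternating between the "keyword" and "class" types.
```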
There do, however, seem to be a few bugs in how pygls fills out the ServerCapabilities field for semantic tokens, so in order to make the example above work I had to make the following changes in pygls itself.
```diff
diff --git a/pygls/capabilities.py b/pygls/capabilities.py
index c7191dd..7165d57 100644
--- a/pygls/capabilities.py
+++ b/pygls/capabilities.py
@@ -22,7 +22,7 @@ from pygls.lsp.methods import (CODE_ACTION, CODE_LENS, COMPLETION, DECLARATION,
                               TEXT_DOCUMENT_CALL_HIERARCHY_PREPARE, TEXT_DOCUMENT_DID_CLOSE,
                               TEXT_DOCUMENT_DID_OPEN, TEXT_DOCUMENT_DID_SAVE,
                               TEXT_DOCUMENT_LINKED_EDITING_RANGE, TEXT_DOCUMENT_MONIKER,
-                              TEXT_DOCUMENT_SEMANTIC_TOKENS, TEXT_DOCUMENT_WILL_SAVE,
+                              TEXT_DOCUMENT_SEMANTIC_TOKENS, TEXT_DOCUMENT_SEMANTIC_TOKENS_FULL, TEXT_DOCUMENT_SEMANTIC_TOKENS_FULL_DELTA, TEXT_DOCUMENT_SEMANTIC_TOKENS_RANGE, TEXT_DOCUMENT_WILL_SAVE,
                               TEXT_DOCUMENT_WILL_SAVE_WAIT_UNTIL, TYPE_DEFINITION,
                               WORKSPACE_DID_CREATE_FILES, WORKSPACE_DID_DELETE_FILES,
                               WORKSPACE_DID_RENAME_FILES, WORKSPACE_SYMBOL,
@@ -237,6 +237,13 @@ class ServerCapabilitiesBuilder:
                 ),
             ))
         if value is not None:
+            value.range = TEXT_DOCUMENT_SEMANTIC_TOKENS_RANGE in self.features
+
+            if TEXT_DOCUMENT_SEMANTIC_TOKENS_FULL_DELTA in self.features:
+                value.full = {"delta": True}
+            else:
+                value.full = TEXT_DOCUMENT_SEMANTIC_TOKENS_FULL in self.features
+
             self.server_cap.semantic_tokens_provider = value
         return self
```
```diff
diff --git a/pygls/lsp/__init__.py b/pygls/lsp/__init__.py
index 905f830..9405f53 100644
--- a/pygls/lsp/__init__.py
+++ b/pygls/lsp/__init__.py
@@ -107,6 +107,11 @@ LSP_METHODS_MAP = {
         MonikerParams,
         Optional[List[Moniker]],
     ),
+    TEXT_DOCUMENT_SEMANTIC_TOKENS: (
+        Union[SemanticTokensOptions, SemanticTokensRegistrationOptions],
+        SemanticTokensParams,
+        Union[SemanticTokensPartialResult, Optional[SemanticTokens]],
+    ),
     TEXT_DOCUMENT_SEMANTIC_TOKENS_FULL: (
         Union[SemanticTokensOptions, SemanticTokensRegistrationOptions],
         SemanticTokensParams,
```
I'm not entirely sure, however, if the example above is how pygls intends the semantic tokens feature to be used, but I'm happy to open a PR with these changes if that would be useful :smile:

@alcarney It would be helpful to me, as I ran into the same missing capability when trying to use it. I think this feature is from LSP 3.16, so I don't know if it will require anything extra to support.
So taking another look at this, there's actually a way to make this work today without having to change any of the pygls internals. All the machinery for semantic tokens is already there, but the bug around the server capabilities prevents it from being enabled. Here is where the server capabilities get stored; if we subclass the LanguageServerProtocol class, we can replace the server_capabilities attribute with a @property and do whatever processing we like on it, for example:
```python
from pygls.lsp import ServerCapabilities
from pygls.lsp.methods import TEXT_DOCUMENT_SEMANTIC_TOKENS_FULL
from pygls.protocol import LanguageServerProtocol


class Patched(LanguageServerProtocol):
    """A patched version of the language server protocol, allowing us to tweak
    how the `semantic_tokens_provider` field is computed."""

    def __init__(self, *args, **kwargs):
        self._server_capabilities = ServerCapabilities()
        super().__init__(*args, **kwargs)

    @property
    def server_capabilities(self):
        return self._server_capabilities

    @server_capabilities.setter
    def server_capabilities(self, value: ServerCapabilities):
        if TEXT_DOCUMENT_SEMANTIC_TOKENS_FULL in self.fm.features:
            opts = self.fm.feature_options.get(TEXT_DOCUMENT_SEMANTIC_TOKENS_FULL, None)
            if opts:
                value.semantic_tokens_provider = opts

        self._server_capabilities = value
```
Also, make sure that if you have a custom language server class, you pass any additional args down to the base class:

```python
class JsonLanguageServer(LanguageServer):
    ...

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
```
But with that in place, my example from above can be rewritten as:

```python
# Be sure to pass your patched protocol class to the language server's constructor
json_server = JsonLanguageServer(protocol_cls=Patched)


# No need for the dummy 'TEXT_DOCUMENT_SEMANTIC_TOKENS' feature anymore
@json_server.feature(
    TEXT_DOCUMENT_SEMANTIC_TOKENS_FULL,
    SemanticTokensOptions(
        legend=SemanticTokensLegend(
            tokenTypes=["class", "keyword"],
            tokenModifiers=[]
        ),
        full=True
    )
)
def semantic_tokens_full(ls, params: SemanticTokensParams):
    """A 'full' semantic tokens request."""
    return SemanticTokens(data=[
        0, 1, 7, 1, 0,
        0, 9, 5, 0, 0,
        0, 6, 6, 1, 0,
        0, 8, 8, 0, 0
    ])
```
Note that I still intend to open a PR with the server capabilities fix, but at least with this workaround you can try semantic tokens out without waiting for a new pygls release :smile:
@alcarney Thanks! I'll be using that. It saves having to patch pygls inside the site-packages directory.

@alcarney Thanks for showing a workaround. It would be really helpful if you could open a PR with the fix. Sorry guys, I'm pretty busy, but I will merge the PR ASAP once it's there.
Did #213 fix this? Or where are we with this?
The json-extension now includes an example on semantic tokens, so it's probably safe to close this.
Oh right, great, thanks.