
Not clear how to run this on Windows for JupyterLab 3 or 4

wysisoft opened this issue 1 year ago • 1 comment

I am not clear on how to run this inside JupyterLab 3 or 4 and actually get the %%sparksql magic to work. I have both JupyterLab 4.2.4 and 3.6.7 running, without errors in the output or the tmp log file, but I get "cell magic not found" when I try %%sparksql.

wysisoft avatar Aug 21 '24 04:08 wysisoft

The current version works with JupyterLab 4. I just released slightly improved logging for loading errors. In general you need sql-language-server installed globally with npm (`npm install -g`, Node 20 recommended), node available on your notebook kernel's PATH, and for Python you need jupyterlab-lsp>=5.0.0, jupyter-lsp>=2.2.0, and python-lsp-server[all].
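A quick sanity check for the "node on your kernel's PATH" requirement above: you can ask the kernel itself where (or whether) it can resolve the node executable. This is just a minimal stdlib sketch, not part of the extension:

```python
import shutil

def find_node():
    """Return the full path to the node executable visible to this kernel, or None."""
    return shutil.which("node")

path = find_node()
if path:
    print("node found at:", path)
else:
    print("node is NOT on this kernel's PATH - the LSP server cannot start")
```

If this prints `None` inside the notebook but node works in your terminal, the kernel was likely launched with a different PATH (a common cause on Windows).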

cccs-nik avatar Dec 03 '24 22:12 cccs-nik

Same here. UsageError: Cell magic `%%sparksql` not found.


Paradox137 avatar Mar 04 '25 21:03 Paradox137

I admittedly work exclusively on Linux, but I'll give it a try on Windows and see how it goes. Your Jupyter logs have a good clue. I'll get back to you soon, @Paradox137.

cccs-nik avatar Mar 05 '25 21:03 cccs-nik

I made some changes in version 1.4.5 which should make it possible to use it on Windows now. Let me know if it works for you or if you run into other issues, @Paradox137.

cccs-nik avatar Mar 07 '25 21:03 cccs-nik

@cccs-nik Hi, the servers work now, but I always get a Py4JJavaError on calls, and syntax isn't highlighted.


Tag me if you need more log information. I really don't know what the problem could be; I've tried many solutions from the net and nothing helps.

Paradox137 avatar Mar 10 '25 11:03 Paradox137

Maybe I need to downgrade my Python version again, but I don't see "RuntimeError: Python in worker has different version..." in the logs.
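Before downgrading blindly, it can help to print the Python version the notebook kernel (the Spark driver) is actually running and compare it to the workers' version (controlled by the real Spark environment variables `PYSPARK_PYTHON` / `PYSPARK_DRIVER_PYTHON`). A minimal stdlib sketch; treating Python 3.8 through 3.11 as the compatible range for PySpark 3.5.x is my assumption:

```python
import sys

def driver_python_version():
    """Return the (major, minor) version of the Python running this kernel."""
    return sys.version_info[:2]

def compatible_with_pyspark_35(version):
    # Assumption: PySpark 3.5.x supports Python 3.8 through 3.11.
    return (3, 8) <= version <= (3, 11)

ver = driver_python_version()
print("Driver Python:", ".".join(map(str, ver)),
      "- compatible with PySpark 3.5.x:", compatible_with_pyspark_35(ver))
```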

Paradox137 avatar Mar 10 '25 11:03 Paradox137

@cccs-nik any help here?

Paradox137 avatar Mar 15 '25 07:03 Paradox137

@Paradox137 Hey, sorry, I was a little busy last week. I can't really tell from your exception what the issue is; the error message looks a bit generic, and I'm not sure whether you have better logs somewhere else. First, I would make sure you have Java and Python versions compatible with PySpark 3.5.5: that would be Python 3.8 to 3.11 and Java 8/11/17, if I'm not mistaken. You probably also need to pass multiLine=True to your JSON read, given that your schema shows _corrupt_record: df = spark.read.json("sample1.json", multiLine=True). Syntax highlighting should be working, given the LSP servers window you screenshotted.
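To see why multiLine matters here: Spark's default JSON reader expects one complete JSON object per physical line (JSON Lines), so a pretty-printed file parses as several broken fragments that end up in _corrupt_record, while multiLine=True parses the whole document. A stdlib-only sketch of the two modes (no Spark required; the sample record is illustrative):

```python
import json

# A pretty-printed JSON document spanning multiple lines.
pretty = '{\n  "name": "Alice",\n  "age": 30\n}'

def parse_as_json_lines(text):
    """Parse each line as its own JSON value (Spark's default behavior)."""
    records, corrupt = [], []
    for line in text.splitlines():
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            corrupt.append(line)  # Spark would surface these via _corrupt_record
    return records, corrupt

def parse_as_document(text):
    """Parse the whole text as one JSON value (what multiLine=True enables)."""
    return json.loads(text)

recs, bad = parse_as_json_lines(pretty)
print(len(recs), "valid line-records,", len(bad), "corrupt lines")  # 0 valid, 4 corrupt
print(parse_as_document(pretty))  # {'name': 'Alice', 'age': 30}
```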

cccs-nik avatar Mar 17 '25 16:03 cccs-nik