
Results 110 comments of Darek

@JenniferOH This code seems to work:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .config("spark.sql.legacy.createHiveTableByDefault", "false")
  .master("local[*, 4]")
  .appName("Spark")
  .getOrCreate()
```

However, this repo is dead, so I would suggest posting your questions on Stack Overflow.
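For context, here is a minimal sketch of what that flag changes, assuming the `spark` session built above and a hypothetical table name `demo_tbl`: with `spark.sql.legacy.createHiveTableByDefault` set to `"false"`, a plain `CREATE TABLE` without a `USING` clause is created as a Spark datasource table rather than a Hive SerDe table.

```scala
// Minimal sketch, assuming the `spark` session from the snippet above.
// `demo_tbl` is a hypothetical table name used only for illustration.
spark.sql("CREATE TABLE demo_tbl (id INT, name STRING)")

// The generated DDL should describe a datasource table (e.g. `USING parquet`)
// rather than a Hive SerDe table, because the legacy flag is set to "false".
spark.sql("SHOW CREATE TABLE demo_tbl").show(truncate = false)
```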

@mathbunnyru Since we decided to close the PR, this ticket should be closed as well. Thx

[databricks/spark-deep-learning](https://github.com/databricks/spark-deep-learning) only contains the HorovodRunner code for local CI and the API docs. Databricks Runtime for Machine Learning is required to run it. @kietly I don't think this request is relevant to...

Why do you think tornado and nest-asyncio fix the issue? Do later versions of tornado and nest-asyncio work as well? Thx

This is a jupyterlab [issue](https://github.com/jupyterlab/jupyterlab/issues/11934#issuecomment-1209235465) that will be fixed in the future. If you need it sooner, please submit a jupyterlab PR based on this [issue](https://github.com/jupyterlab/jupyterlab/issues/11934).

There seems to be a [PR](https://github.com/jupyter/jupyter_client/pull/823) for this issue already.

I think this question belongs on SO, not GH. This [link](https://stackoverflow.com/questions/34515598/apache-spark-shell-error-import-jars) should answer your question.
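For reference, a minimal sketch of the usual approach (the jar path and app name below are hypothetical placeholders): either pass `--jars /path/to/your-library.jar` to `spark-shell`, or set `spark.jars` when building the session so the jar's classes can be imported.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch; "/path/to/your-library.jar" is a hypothetical placeholder.
// `spark.jars` adds the jar to the driver and executor classpaths.
val spark = SparkSession.builder
  .appName("WithExtraJar")
  .master("local[*]")
  .config("spark.jars", "/path/to/your-library.jar")
  .getOrCreate()
```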

@eromoe Scala notebooks are no longer supported in this stack. If you need Scala support, please use [almond](https://github.com/almond-sh/almond). Thx

Do you think some users expect the PDF export functionality in the `minimal-notebook` image?

@benz0li I would suggest creating a PR and seeing what the community's reaction to this change is.