Xiang Zhang
Are you running the pre-built image directly, or did you build it yourself? Here is a screenshot from my run, and the hue directory is present:
ok
Thanks for your interest. Different Linux distributions do indeed differ somewhat in day-to-day usage, but I need to confirm whether a switch is actually necessary. In my own usage I mainly upload files to HDFS, produce and consume Kafka messages, and submit Spark and Flink jobs, and these operations are distribution-independent (i.e., the commands are identical on CentOS and Ubuntu), so I'd like to know where exactly the inconvenience is for you. Also, this image is built on top of the official Hue image, which is based on Ubuntu; switching to CentOS would make the Hue integration difficult (the Hue image build is rather complex).
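For reference, a minimal sketch of the operations mentioned above; the file paths, broker address, topic name, and jar names are illustrative placeholders, not values taken from this image:

```sh
# Upload a local file to HDFS (local and remote paths are placeholders)
hdfs dfs -put ./data.csv /tmp/data.csv

# Produce and consume Kafka messages (broker address and topic are assumptions)
kafka-console-producer.sh --bootstrap-server localhost:9092 --topic demo
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic demo --from-beginning

# Submit a Spark job to YARN (jar path is a placeholder)
spark-submit --master yarn --deploy-mode cluster ./my-app.jar

# Submit a Flink job (jar path is a placeholder)
flink run ./my-job.jar
```

These commands read the same whether the base OS of the container is Ubuntu or CentOS, which is the point made above.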
Sorry for the late reply. Do you mean you need Solr included in this image?
Sorry, I have never used Solr before. You are very welcome to add it yourself.
This issue doesn't look related to the image itself. You can refer to: https://stackoverflow.com/questions/60209172/docker-cannot-delete-intermediate-images-from-a-broken-pull
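As a rough illustration of the kind of cleanup discussed in that thread (not steps specific to this repository), removing dangling layers left by an interrupted pull usually looks like:

```sh
# List dangling (untagged) images left over from a broken pull
docker images -f dangling=true

# Remove all dangling images; add -a to also remove unused tagged images
docker image prune

# Or remove them explicitly by ID
docker rmi $(docker images -q -f dangling=true)
```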
Thanks for pointing this out.
Does XLearning currently support Kerberos authentication?
> Try this branch which works for me.
> [zjffdu@47a50c0](https://github.com/zjffdu/zeppelin/commit/47a50c0ae73c9fc8c5441572cee0d9164106a60f)
>
> And set spark.app.name to be empty in interpreter setting page first.

@zjffdu Is this branch enough to achieve...
@zjffdu I believe the commit I mentioned is updating `SparkInterpreterLauncher`:

```java
if (!sparkProperties.containsKey("spark.app.name")
    || StringUtils.isBlank(sparkProperties.getProperty("spark.app.name"))) {
  sparkProperties.setProperty("spark.app.name", context.getInterpreterGroupId());
}
```

Here `spark.app.name` is set to `interpreterGroupId`, which is generated as...