There are too many Spark sessions and the session timeout parameter is not effective
Code of Conduct
- [X] I agree to follow this project's Code of Conduct
Search before asking
- [X] I have searched in the issues and found no similar issues.
Describe the bug
When using Kyuubi to start a Spark SQL engine on a K8s cluster, the number of internal sessions keeps increasing. The documentation describes the parameter `kyuubi.engine.user.isolated.spark.session.idle.timeout`, whose default value is PT6H. After changing it to PT30M, there are still sessions that stay open longer than 30 minutes and are not closed, even though their SQL has finished executing. Is one of these sessions created per SQL statement, or how are they created? There are not that many SQL statements running in my cluster, yet there are many sessions, and I don't know where they come from.
Affects Version(s)
1.9.1
Kyuubi Server Log Output
No response
Kyuubi Engine Log Output
No response
Kyuubi Server Configurations
No response
Kyuubi Engine Configurations
No response
Additional context
No response
Are you willing to submit PR?
- [ ] Yes. I would be willing to submit a PR with guidance from the Kyuubi community to fix.
- [ ] No. I cannot submit a PR at this time.
You should read the docs carefully. The docs of `kyuubi.engine.user.isolated.spark.session.idle.timeout` say:

> If `kyuubi.engine.user.isolated.spark.session` is false ...

and the default value of `kyuubi.engine.user.isolated.spark.session` is true. Further, the docs of `kyuubi.engine.user.isolated.spark.session` say:

> ... if the engine is running in a group or server share level ... Note that, it does not affect if the share level is connection or user.

while the screenshot shows you are using the USER share level.

For your case, you should probably change `kyuubi.session.idle.timeout` instead.
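For a USER share level deployment, a minimal sketch of the suggested setting in `kyuubi-defaults.conf` might look like this (the PT30M value is just an illustration matching the reporter's goal, not a recommendation):

```properties
# Close an idle session after 30 minutes (applies at
# CONNECTION/USER share level; the default is PT6H)
kyuubi.session.idle.timeout=PT30M
```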
Thank you very much. Can I understand it this way? At the USER share level, `kyuubi.session.idle.timeout` controls the session timeout, while at the GROUP or SERVER share level, `kyuubi.engine.user.isolated.spark.session.idle.timeout` controls it, regardless of whether the engine is Spark, Flink, or Trino.
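Assuming that summary is correct, a hedged sketch of the two configurations side by side in `kyuubi-defaults.conf` (values are illustrative only):

```properties
# USER (or CONNECTION) share level: session lifetime is governed by
kyuubi.session.idle.timeout=PT30M

# GROUP or SERVER share level: per the docs quoted above, the isolated
# session idle timeout takes effect when user isolation is disabled
kyuubi.engine.user.isolated.spark.session=false
kyuubi.engine.user.isolated.spark.session.idle.timeout=PT30M
```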
Can `kyuubi.session.idle.timeout` be passed in the REST request that creates a session, so that it applies only to that particular session?
@holiday-zj Not yet. It's a server-side configuration that the client cannot override, but the behavior can be discussed; on a quick thought, we could make changes to support that.
Do we support resetting the idle timeout? It seems `lastAccessTime` and `lastIdleTime` are only refreshed when certain operations are executed.