[Bug]: AMS hangs after a few days in a Kubernetes deployment
What happened?
AMS hangs after running for a few days in a Kubernetes environment.
Affects Versions
master
What engines are you seeing the problem on?
AMS
How to reproduce
- Deploy AMS into a Kubernetes environment.
- Set the AMS heap to -Xms8196m -Xmx8196m.
- Set the Kubernetes deployment resource limits to 4 cores and 16GB of memory.
- After a few days, the pod's memory usage keeps increasing; once it exceeds the Kubernetes deployment limit, AMS hangs and the pod is restarted.
Relevant log output
N/A
Anything else
Related jmap output:
jmap heap:
Native memory summary:
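For reference, the dumps above can be gathered with the standard JDK tools; `<pid>` below is a placeholder for the AMS process ID, and the JVM must have been started with `-XX:NativeMemoryTracking=summary` for the NMT summary to be available:

```shell
# Heap configuration and usage of the running AMS process
# (on JDK 9+, use `jcmd <pid> GC.heap_info` instead of `jmap -heap`).
jmap -heap <pid>

# Native memory summary (NMT); requires the JVM to have been started
# with -XX:NativeMemoryTracking=summary (or =detail).
jcmd <pid> VM.native_memory summary
```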
Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
Code of Conduct
- [X] I agree to follow this project's Code of Conduct
The memory usage of the Class Metadata (1GB) and Internal (2GB) sections significantly exceeds expectations.
- Internal (2GB): In NMT, the Internal section includes memory allocated via ByteBuffer.allocateDirect, i.e. off-heap memory. The excessive overhead in this section may be due to the project's use of Netty.
- Metadata (1GB): The high Class Metadata consumption could be a result of the JVM's default setting, UseCompressedClassPointers. For more information, see: https://mp.weixin.qq.com/s?__biz=MzkyNTMwMjI2Mw==&mid=2247489471&idx=1&sn=11fb0d30c8bcea50b1bda0b1cd2a6595&chksm=c1c9fb27f6be72314b28d163146ecaa11b298ac9856625cfdc8909d464be6d8b220ef615fd69&scene=178&cur_album_id=2192236085064302593#rd
@zhoujinsong Should we add these two parameters to the JVM arguments of AMS to cap off-heap memory usage?
- `-XX:MaxDirectMemorySize`
- `-XX:MaxMetaspaceSize`
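A minimal sketch of what that could look like in the AMS launch options. The `JAVA_OPTS` variable name and the `2g`/`512m` values are assumptions for illustration only, not recommended defaults; they would need to be sized against the 16GB container limit:

```shell
# Cap off-heap allocations so the container limit is not exceeded silently.
# -XX:MaxDirectMemorySize bounds ByteBuffer.allocateDirect usage (it
# defaults to the -Xmx value when unset); -XX:MaxMetaspaceSize bounds
# class metadata. All values here are illustrative.
JAVA_OPTS="$JAVA_OPTS \
  -Xms8196m -Xmx8196m \
  -XX:MaxDirectMemorySize=2g \
  -XX:MaxMetaspaceSize=512m"
```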