
High memory pressure for Elasticsearch versions using JDK 20+

Open jonathan-buttner opened this issue 1 year ago • 32 comments

Elasticsearch Version

8.7.1 and above, 7.17.10 and above

Installed Plugins

No response

Java Version

JDK 20 or above.

The issue depends on the JDK version rather than the ES version.

For reference, these are the bundled JDK versions (see Dependencies and versions section in Elasticsearch docs per stack version):

Stack  - JDK
-----    ---
7.17.10 - 20.0.1+9
7.17.11 - 20.0.1+9
7.17.12 - 20.0.2+9
7.17.13 - 20.0.2+9
7.17.14 - 21+35
7.17.15 - 21.0.1+12
8.7.1   - 20.0.1+9
8.8.2   - 20.0.1+9
8.9.0   - 20.0.2+9
8.9.1   - 20.0.2+9
8.9.2   - 20.0.2+9
8.10.0  - 20.0.2+9
8.10.1  - 20.0.2+9
8.10.2  - 20.0.2+9
8.10.3  - 21+35
8.10.4  - 21+35
8.11.0  - 21.0.1+12
8.11.1  - 21.0.1+12

The last stack versions that allow the escape hatch workaround of re-enabling the disabled JVM setting (-XX:+UnlockDiagnosticVMOptions -XX:+G1UsePreventiveGC - see Problem Description below) are:

  • Elasticsearch v8.10.2 for the 8.x branch
  • Elasticsearch v7.17.13 for the 7.x branch

since later versions bundle JDK 21 or newer, which has the setting removed from the JVM entirely. Adding or leaving the JVM setting in place and starting or upgrading to 8.10.3+ will cause the JVM to fail to start due to the unknown setting in the later JDK versions.
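
For reference, a minimal sketch of that escape hatch on a version that still bundles JDK 20 (8.7.1-8.10.2 or 7.17.10-7.17.13) is a file under config/jvm.options.d; the file name below is an assumption, any *.options file in that directory is picked up:

# config/jvm.options.d/preventive-gc.options -- hypothetical file name
# Re-enables preventive GC. Only valid while the node runs the bundled JDK 20.x;
# JDK 21+ rejects the flag and the node will not start.
-XX:+UnlockDiagnosticVMOptions
-XX:+G1UsePreventiveGC

Remove the file before upgrading to a release that bundles JDK 21+, otherwise the node will fail to start because of the now-unknown flag.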

Switching to a non-bundled JDK < 20 (when possible) should also work as a workaround.
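
As a sketch of that workaround, an external JDK can be selected with the ES_JAVA_HOME environment variable before starting the node (the path below is a placeholder; only JDK versions listed in the support matrix for your stack version are supported):

# point Elasticsearch at a separately installed JDK instead of the bundled one
export ES_JAVA_HOME=/path/to/jdk-17
./bin/elasticsearch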

OS Version

N/A

Problem Description

In 8.7.1 the bundled JDK was changed to JDK 20.0.1 in this PR: https://github.com/elastic/elasticsearch/pull/95373

When retrieving large documents from Elasticsearch we see high memory pressure on the data node returning the documents.

There seems to be a distinct difference in how allocated memory is cleaned up between JDK 19 and JDK 20.

The graphs below show memory usage when allocating many ~5 MB byte arrays to transfer a ~400 MB PyTorch model from a data node to an ML node. The PyTorch model is chunked and stored in separate documents that are retrieved one at a time by the ML node. When repeatedly allocating these large arrays, we see memory pressure increase distinctly on JDK 20.0.1. The graphs show memory usage over time while repeatedly starting and stopping the PyTorch model.

Memory pressure for 8.7.0

image

Memory pressure for 8.7.1

image

Here's one from VisualVM monitoring heap usage on a local deployment

image

Memory pressure for 8.7.1 when using the -XX:+UnlockDiagnosticVMOptions -XX:+G1UsePreventiveGC options

If the data node is started with these JVM options enabled, we see memory usage closer to what it looks like on JDK 19

image

Steps to Reproduce

The issue can be reproduced easily in Cloud, but I'll describe the steps for running Elasticsearch locally too.

Setup

Cloud

In Cloud, deploy a cluster with:

  • 2 zones and 4 GB data nodes
  • 2 zones and 4 GB ML nodes
  • Enable monitoring so you can see the heap usage

Locally

Run two nodes (1 data node and 1 ML node).

  • Download and install Elasticsearch 8.7.1: https://www.elastic.co/downloads/past-releases/elasticsearch-8-7-1
  • Download and install Kibana 8.7.1: https://www.elastic.co/downloads/past-releases/kibana-8-7-1
  • Download and install eland: https://github.com/elastic/eland

An easy way to run two nodes is simply to decompress the bundle in two places.
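
For example, a rough sketch on Linux (the archive name is an assumption; adjust for your platform and version):

# extract the bundle twice so each node gets its own home, config and data directory
tar -xzf elasticsearch-8.7.1-linux-x86_64.tar.gz
cp -r elasticsearch-8.7.1 es-data-node
mv elasticsearch-8.7.1 es-ml-node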

Configuration

  • Create a file under config/jvm.options.d and add the following JVM options for both the data node and ML node
-Xms4g
-Xmx4g
  • Add the following settings to the data node's config/elasticsearch.yml file
node.roles: ["master", "data", "data_content", "ingest", "data_hot", "data_warm", "data_cold", "data_frozen", "transform"]
xpack.security.enabled: true
xpack.license.self_generated.type: "trial"
  • Add the following settings to the ML node's config/elasticsearch.yml file
node.roles: ["ml"]
xpack.security.enabled: true
xpack.license.self_generated.type: "trial"
  • Reset the elastic password on the data node

From bin

./elasticsearch-reset-password -i -u elastic --url http://localhost:9200
  • Create a service token for kibana

From bin

./elasticsearch-service-tokens create elastic/kibana <token name>
  • Add this token to kibana's kibana.yml file
elasticsearch.serviceAccountToken: "<token>"

and ensure that the elasticsearch.username and elasticsearch.password settings are not set (comment them out or remove them)

  • Start both elasticsearch nodes and kibana
  • Connect VisualVM to the data node and observe memory usage over time (see also the command-line check below)
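
As a quick sanity check that both nodes joined the cluster, and to watch heap from the command line, a sketch (adjust credentials and URL to your setup):

curl -u elastic:<password> "http://localhost:9200/_cat/nodes?v&h=name,node.role,heap.percent,heap.max"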

Reproducing the bug in cloud and locally

  • Upload a PyTorch model of around 400 MB

Locally

docker run -it --rm --network host elastic/eland \
    eland_import_hub_model \
      --url http://elastic:[email protected]:9200/ \
      --hub-model-id sentence-transformers/all-distilroberta-v1 \
      --clear-previous

For cloud

docker run -it --rm elastic/eland \
    eland_import_hub_model \
      --url https://elastic:<cloud password>@<cloud es url>:9243 \
      --hub-model-id sentence-transformers/all-distilroberta-v1 \
      --clear-previous
  • Repeatedly start and stop the uploaded model (20 - 30 times); a scripted alternative is sketched after this list
    • Navigate to Machine Learning -> Trained models
    • Click the start and stop buttons repeatedly
  • Observe memory usage over time
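
As an alternative to clicking through Kibana, the start/stop cycle can be scripted against the trained models API. This is only a sketch: the model id below is the one eland typically derives from the hub model id, so adjust it to whatever GET _ml/trained_models reports, and replace the placeholder credentials/URL.

MODEL_ID="sentence-transformers__all-distilroberta-v1"   # assumed eland-generated id
ES="http://elastic:<password>@localhost:9200"
for i in $(seq 1 30); do
  # start the deployment, wait until it is started, then stop it again
  curl -s -X POST "$ES/_ml/trained_models/$MODEL_ID/deployment/_start?wait_for=started" > /dev/null
  sleep 5
  curl -s -X POST "$ES/_ml/trained_models/$MODEL_ID/deployment/_stop" > /dev/null
  sleep 5
done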

Logs (if relevant)

No response

jonathan-buttner avatar Sep 14 '23 18:09 jonathan-buttner

One key issue is that the particular task that @jonathan-buttner is executing is one that searches over 10s to 100s of docs, each about 1MB in size.

They grab the source (about 1MB in size), stream to a separate process and then search for the next doc.

So, within a second, many 1MB byte buffers are being created and dereferenced.

While we should be better about this (e.g. https://github.com/elastic/elasticsearch/issues/99590), the GC collection not happening until we are at critical mass is troubling.

The real-memory circuit breaker does protect us mostly, but GC needs to be better here.
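
For anyone watching this on their own cluster, the parent (real-memory) breaker mentioned above can be monitored via the node stats API; a sketch, with placeholder credentials:

# the "tripped" counter under breakers.parent increases each time the real-memory circuit breaker fires
curl -s -u elastic:<password> "http://localhost:9200/_nodes/stats/breaker?filter_path=nodes.*.name,nodes.*.breakers.parent"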

benwtrent avatar Sep 14 '23 19:09 benwtrent

Pinging @elastic/es-search (Team:Search)

elasticsearchmachine avatar Sep 14 '23 19:09 elasticsearchmachine

Pinging @elastic/es-core-infra (Team:Core/Infra)

elasticsearchmachine avatar Sep 14 '23 19:09 elasticsearchmachine

Ping @elastic/ml-core

benwtrent avatar Sep 14 '23 19:09 benwtrent

Just a note on the workaround of re-enabling preventive GC: it has been removed in JDK 21 (which is due to be released next week, and which we will be updating to shortly thereafter).

rjernst avatar Sep 14 '23 20:09 rjernst

Pinging @elastic/ml-core (Team:ML)

elasticsearchmachine avatar Sep 18 '23 09:09 elasticsearchmachine

Correct me if I'm wrong, but Java 21 exhibits better behaviour here (fewer or no circuit breaker exceptions) than Java 20.0.x.

ChrisHegarty avatar Sep 22 '23 15:09 ChrisHegarty

Correct me if I'm wrong, but Java 21 exhibits better behaviour here (fewer or no circuit breaker exceptions) than Java 20.0.x.

In my local testing I wasn't able to generate a circuit breaker exception (CBE), but memory pressure didn't look as good as on Java 19

image

jonathan-buttner avatar Sep 26 '23 12:09 jonathan-buttner

I just tested in cloud using the latest 8.11.0-snapshot and I can get CBEs to occur when deploying an ML model around 400 MB. I used an older version of eland which does not include a recent fix that stores the model in 1 MB chunks.

jonathan-buttner avatar Sep 26 '23 14:09 jonathan-buttner

Just a note to confirm that the last stack versions that allow the escape hatch workaround of re-enabling the disabled JVM setting (-XX:+UnlockDiagnosticVMOptions -XX:+G1UsePreventiveGC) are:

  • Elasticsearch v8.10.2 for the 8.x branch
  • Elasticsearch v7.17.13 for the 7.x branch

since later versions bundle JDK 21 or newer, which has the setting removed from the JVM entirely. Adding or leaving the JVM setting in place and starting or upgrading to 8.10.3+ will cause the JVM to fail to start due to the unknown setting in the later JDK versions.

Bundled JDK versions (see Dependencies and versions section in Elasticsearch docs per stack version):

Stack  - JDK
-----    ---
7.17.9  - 19.0.2+7
7.17.10 - 20.0.1+9
7.17.11 - 20.0.1+9
7.17.12 - 20.0.2+9
7.17.13 - 20.0.2+9
7.17.14 - 21+35
8.8.2   - 20.0.1+9
8.9.0   - 20.0.2+9
8.9.1   - 20.0.2+9
8.10.0  - 20.0.2+9
8.10.1  - 20.0.2+9
8.10.2  - 20.0.2+9
8.10.3  - 21+35
8.10.4  - 21+35
8.11.0  - 21.0.1+12

geekpete avatar Oct 19 '23 04:10 geekpete

Noting this appears to affect versions starting with v7.17.10 (whose bundled JDK still allows re-applying the -XX:+G1UsePreventiveGC override). Starting with v7.17.14, the bundled JDK 21 no longer allows that override. Where possible, users can switch to non-bundled JDKs.

stefnestor avatar Nov 09 '23 18:11 stefnestor

Right, so we backported the newer JDK to the still-supported 7.17 branch and should probably include that in the stack-to-JDK version table above?

geekpete avatar Nov 09 '23 22:11 geekpete

The fix that worked around this problem for the situation where it was originally seen is https://github.com/elastic/eland/pull/605. That reduced the size of the chunked PyTorch model documents from 4MB to 1MB.

Obviously people are seeing the same underlying problem with different types of data than chunked PyTorch models. However, if any of these situations involve documents that are 2MB or bigger and it's possible to reduce the size of these documents below 2MB then doing so may help to avoid the problem. The reason I am guessing 2MB is the cutoff point is that in heap dumps from nodes that have crashed due to this problem we have observed large numbers of strange unreferenced int arrays that are all exactly 2MB in size. These may be related to something that was changed in Java 20 garbage collection. The fact that reducing chunk size from 4MB to 1MB in Eland made these mysterious arrays go away for that particular use case is what makes me guess that memory chunks bigger than 2MB are the problematic ones. This is only a guess though - I could be wrong but thought I'd mention it just in case it's useful to somebody.

droberts195 avatar Nov 10 '23 16:11 droberts195

Awareness of this issue has been raised to OpenJDK. Specifically a comment summarising and referring to this GH issue has been added to several of the JDK issues, e.g. see https://bugs.openjdk.org/browse/JDK-8297639

ChrisHegarty avatar Nov 13 '23 12:11 ChrisHegarty

hi @ldematte. The "High Memory Pressure due to a GC JVM setting change" known issue seems to be missing from the 8.11.1 release notes, no?

ewolfman avatar Nov 27 '23 12:11 ewolfman

You are right @ewolfman: due to bad timing (updating docs at the same time 8.11.1 underwent an emergency release), the known issue was not picked up by the 8.11.1 release notes. Thanks for spotting this; I have backported it manually.

ldematte avatar Nov 27 '23 16:11 ldematte

What is the workaround for the 8.11.1 release?

The workaround mentioned here is limited to versions < 8.10.3.

The v8.11.1 support matrix shows compatibility with just:

  • Oracle/OpenJDK/Temurin 17
  • Oracle/OpenJDK/Temurin 21

We are upgrading to 8.11.1 for another (security) issue, but because of this issue we need to know the summary of impact, the workaround, and the expected release that will fix it. Can you provide this information?

predogma avatar Dec 05 '23 17:12 predogma

@predogma no timeline on any universal fix. Multiple paths are being explored:

  • Improving the circuit breaker and how it calculates when it should circuit break
  • Using less memory and creating less garbage in certain hot-paths (e.g. _search, _bulk)

The workaround, if you hit this issue, is to use JDK 17.

We have something that can help coming in 8.12 & 8.11.2:

  • https://github.com/elastic/elasticsearch/pull/102396

benwtrent avatar Dec 05 '23 19:12 benwtrent

hi @ldematte,

You are right @ewolfman: due to bad timing (updating docs at the same time 8.11.1 underwent an emergency release), the known issue was not picked up by the 8.11.1 release notes. Thanks for spotting this; I have backported it manually.

Same question about the documentation for 8.11.3?

ewolfman avatar Jan 02 '24 08:01 ewolfman

Hi @ewolfman, I think it is the responsibility of the release manager to write release notes for the versions they release, so if the release notes are incorrect or missing something we should contact them. That said, 8.11.2 and 8.11.3 include https://github.com/elastic/elasticsearch/pull/102396, which should greatly mitigate the issue; I'm wondering if we still need to mention the issue in the release notes (unless we observe problems again with these versions, of course).

ldematte avatar Jan 02 '24 09:01 ldematte

Thanks @ldematte.

Please note that the known issue does appear in the 8.11.2 release notes, and no fix is mentioned in 8.11.3. So it is difficult to understand whether the issue still exists in 8.11.3 or not. If it was fixed, I suggest mentioning that in the 8.11.3 release notes. If it was not fixed, I think the warning should still be there. For example, this issue is one that prevents us from upgrading from version 7.17.7 to the latest 8.x, and this is why it is important to understand what the status is.

ewolfman avatar Jan 02 '24 09:01 ewolfman

Hi @ewolfman, I agree with you that it should be either one or the other (either keep mentioning the known issue in 8.11.3 or mention that the issue is fixed). It seems like a mistake to not have one or the other in the release notes for 8.11.3

ldematte avatar Jan 02 '24 09:01 ldematte

Hi @ldematte

Thanks for your help on this issue. May I have a quick confirmation please? Given https://github.com/elastic/elasticsearch/issues/99592#issuecomment-1873758600,

That said, 8.11.2 and 8.11.3 include https://github.com/elastic/elasticsearch/pull/102396, which should greatly mitigate the issue; I'm wondering if we still need to mention the issue in the release notes (unless we observe problems again with these versions, of course).

Can we safely say that "this issue is considered to be fixed in 8.11.2+ and 8.12.0+ versions" (*1)? We want to make it clear whether "greatly mitigate" plus "removal from the release notes" can be considered "fixed", or whether the mitigation might solve or reduce the problem for some usages/use cases but not others, depending on the specific pattern (i.e. partially fixed).

(*1) By "+" we mean "8.11.2 and onwards" and "8.12.0 and onwards".


cc @geekpete

kunisen avatar Jan 25 '24 01:01 kunisen

Is it also fixed in 7.17.17?

While not listed as explicitly fixed (& obv this issue is still in an 'Open' state), the 7.17.17 release notes drop this "Known issue" when compared to the 7.17.16 release notes.

Was this purposeful to indicate the issue no longer affecting version 7.17.17?

mix4242 avatar Jan 29 '24 14:01 mix4242

Is it also fixed in 7.17.17?

While not listed as explicitly fixed (& obv this issue is still in an 'Open' state), the 7.17.17 release notes drop this "Known issue" when compared to the 7.17.16 release notes.

Was this purposeful to indicate the issue no longer affecting version 7.17.17?

Also looking for clarification on this as it's not clear if the issue is resolved or if the "Known issue" was just left off from the notes.

Robert-Saiter avatar Feb 09 '24 17:02 Robert-Saiter

I tried the repro but could not get it to hit heap pressure in the same-sized test env on 8.10.2; the heap stayed well below max no matter how many times I stopped/started the ML model. My aim was to first confirm a repro on an earlier affected version (I picked 8.10.2), then upgrade to the latest stack version and test the repro again to see if it still hit the issue.

@jonathan-buttner are you able to re-check whether your repro still affects the latest stack version? That might help confirm or deny any delivered fix.

geekpete avatar Feb 14 '24 03:02 geekpete

@geekpete

Were you testing locally or in cloud? In my experience it was much easier to reproduce in cloud. I'll spin up an 8.10.2 cloud environment and check again. The other thing to note is that ML has addressed the issue by reducing the chunk size that we store. That doesn't mean the problem is "fixed", just that it isn't reproducible using the ML mechanism. That ML fix should have gone into 8.10.3 and 8.11.0:

https://github.com/elastic/elasticsearch/pull/99677

jonathan-buttner avatar Feb 14 '24 13:02 jonathan-buttner

Yeah, I was testing in cloud. I guess another workload that could generate similarly large amounts of short-lived objects is aggregations? Ad hoc searches across a long retention period/many indices is another scenario I saw hit this for another user.

geekpete avatar Feb 14 '24 13:02 geekpete

I was able to get this to happen in cloud on 8.10.0

2x 4gb ES nodes 2x 4gb ML nodes

When starting the deployment, try 2 allocations and 2 threads.

image

image

image

image

jonathan-buttner avatar Feb 14 '24 14:02 jonathan-buttner

I guess another workload that could generate similarly large amounts of short-lived objects is aggregations? Ad hoc searches across a long retention period/many indices is another scenario I saw hit this for another user.

Hmm, I'm not sure. The test with ELSER essentially stores around ~100 documents that are roughly 4 MB each. Maybe try creating some dummy data and writing a script to repeatedly search for it; a rough sketch follows.
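
A sketch of such a script, assuming a local node and placeholder credentials (the index name, document count and payload size are arbitrary choices for illustration):

# create a ~4 MB text payload and index ~100 copies of it
head -c 3000000 /dev/urandom | base64 | tr -d '\n' > blob.txt
printf '{"blob":"' > doc.json; cat blob.txt >> doc.json; printf '"}' >> doc.json
for i in $(seq 1 100); do
  curl -s -u elastic:<password> -H 'Content-Type: application/json' \
    -X PUT "http://localhost:9200/bigdocs/_doc/$i" --data-binary @doc.json > /dev/null
done

# repeatedly fetch the large documents to recreate the many-large-allocations pattern
for round in $(seq 1 30); do
  for i in $(seq 1 100); do
    curl -s -u elastic:<password> "http://localhost:9200/bigdocs/_doc/$i" > /dev/null
  done
done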

Another option could be downloading ELSER on 8.10.0 (or any version before the 1 MB chunk fix that went into 8.10.3 and 8.11.0) and then upgrading to the latest stack version. That way you'll have the data stored in the large documents and can test via the same process.

jonathan-buttner avatar Feb 14 '24 14:02 jonathan-buttner