[CI] MultiClusterSpecIT class failing
Build Scans:
- elasticsearch-intake #11478 / 8.17.0_bwc-snapshots
- elasticsearch-intake #11478 / 9.0.0_bwc-snapshots
- elasticsearch-pull-request #37255 / 9.0.0_bwc-snapshots
- elasticsearch-intake #11477 / 8.17.0_bwc-snapshots
- elasticsearch-intake #11475 / 9.0.0_bwc-snapshots
- elasticsearch-pull-request #37250 / 8.17.0_bwc-snapshots
- elasticsearch-intake #11472 / 9.0.0_bwc-snapshots
- elasticsearch-pull-request #37237 / 8.17.0_bwc-snapshots
- elasticsearch-pull-request #37237 / 9.0.0_bwc-snapshots
- elasticsearch-pull-request #37149 / 8.17.0_bwc-snapshots
Reproduction Line:
./gradlew ":x-pack:plugin:esql:qa:server:multi-clusters:v8.17.0#bwcTest" -Dtests.class="org.elasticsearch.xpack.esql.ccq.MultiClusterSpecIT" -Dtests.method="test {enrich.ShadowingWithAliasLimit0}" -Dtests.seed=BC2B9151E0C19B77 -Dtests.bwc=true -Dtests.locale=se -Dtests.timezone=America/St_Lucia -Druntime.java=22
Applicable branches: main
Reproduces locally?: N/A
Failure History: See dashboard
Failure Message:
org.elasticsearch.client.ResponseException: method [HEAD], host [http://[::1]:42973], URI [/airports], status line [HTTP/1.1 503 Service Unavailable]
Issue Reasons:
- [main] 28 failures in class org.elasticsearch.xpack.esql.ccq.MultiClusterSpecIT (2.9% fail rate in 970 executions)
- [main] 7 failures in step 8.17.0_bwc-snapshots (3.3% fail rate in 213 executions)
- [main] 10 failures in step 9.0.0_bwc-snapshots (2.2% fail rate in 448 executions)
- [main] 11 failures in step 8.16.0_bwc-snapshots (5.2% fail rate in 211 executions)
- [main] 4 failures in pipeline elasticsearch-intake (3.4% fail rate in 119 executions)
- [main] 19 failures in pipeline elasticsearch-pull-request (5.4% fail rate in 355 executions)
Note: This issue was created using new test triage automation. Please report issues or feedback to es-delivery.
This has been muted on branch main
Mute Reasons:
- [main] 24 failures in class org.elasticsearch.xpack.esql.ccq.MultiClusterSpecIT (2.5% fail rate in 961 executions)
- [main] 8 failures in step 9.0.0_bwc-snapshots (1.8% fail rate in 443 executions)
- [main] 5 failures in step 8.17.0_bwc-snapshots (2.4% fail rate in 209 executions)
- [main] 11 failures in step 8.16.0_bwc-snapshots (5.2% fail rate in 211 executions)
- [main] 2 failures in pipeline elasticsearch-intake (1.7% fail rate in 116 executions)
- [main] 18 failures in pipeline elasticsearch-pull-request (5.1% fail rate in 353 executions)
Build Scans:
- elasticsearch-intake #11475 / 9.0.0_bwc-snapshots
- elasticsearch-pull-request #37250 / 8.17.0_bwc-snapshots
- elasticsearch-intake #11472 / 9.0.0_bwc-snapshots
- elasticsearch-pull-request #37237 / 8.17.0_bwc-snapshots
- elasticsearch-pull-request #37237 / 9.0.0_bwc-snapshots
- elasticsearch-pull-request #37149 / 8.17.0_bwc-snapshots
- elasticsearch-pull-request #37149 / 9.0.0_bwc-snapshots
- elasticsearch-pull-request #37077 / 9.0.0_bwc-snapshots
- elasticsearch-pull-request #37063 / 8.17.0_bwc-snapshots
- elasticsearch-pull-request #37049 / 8.17.0_bwc-snapshots
Pinging @elastic/es-analytical-engine (Team:Analytics)
This looks environmental.
The whole test suite was muted, so I think we need to take another look at how we can make it less flaky, assuming there's no other reason why it fails more often now than it did before.
There are several different errors across the test runs. Most are "connection refused" or "node not connected" failures; I wonder why we're suddenly seeing so many disconnects.
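For context, a common way to make transient failures like the 503 above less disruptive in test setup code is to retry with exponential backoff. Below is a generic sketch of that pattern, not the actual fix from the linked PR; the helper name and parameters are illustrative:

```python
import time

def with_retry(op, max_attempts=5, base_delay=0.05):
    """Retry a transient operation with exponential backoff.

    Illustrative helper only; not the actual change applied to
    MultiClusterSpecIT.
    """
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * 2 ** attempt)

# Example: a request that fails twice (e.g. 503 Service Unavailable)
# while the cluster is still coming up, then succeeds.
calls = {"n": 0}

def flaky_head_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("503 Service Unavailable")
    return 200

status = with_retry(flaky_head_request)
print(status, calls["n"])  # 200 3
```

This doesn't address the root cause of the disconnects, but it typically reduces flakiness when the failures are short-lived startup races.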
This has been fixed and unmuted in https://github.com/elastic/elasticsearch/pull/115218