[BUG] OpenSearch does not respect "discovery.type=single-node" setting during index creation
Describe the bug
OpenSearch was launched using the following docker-compose.yml (derived from https://opensearch.org/docs/latest/install-and-configure/install-opensearch/docker/#sample-docker-composeyml) in a privileged LXC container on Debian 12:
version: '3'
services:
  opensearch: # This is also the hostname of the container within the Docker network (i.e. https://opensearch:9200)
    image: opensearchproject/opensearch:latest # Specifying the latest available image - modify if you want a specific version
    container_name: opensearch
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true # Disable JVM heap memory swapping
      - plugins.security.system_indices.enabled=false
      - "OPENSEARCH_JAVA_OPTS=-Xms4096m -Xmx4096m" # Set min and max JVM heap sizes to at least 50% of system RAM
      - OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_INITIAL_ADMIN_PASSWORD} # Sets the demo admin user password when using demo configuration, required for OpenSearch 2.12 and later
    ulimits:
      memlock:
        soft: -1 # Set memlock to unlimited (no soft or hard limit)
        hard: -1
      nofile:
        soft: 65536 # Maximum number of open files for the opensearch user - set to at least 65536
        hard: 65536
    volumes:
      - /root/data:/usr/share/opensearch/data # Bind-mounts the host directory /root/data as the container's data directory
    ports:
      - 9200:9200 # REST API
      - 9600:9600 # Performance Analyzer
    networks:
      - opensearch-net # All of the containers will join the same Docker bridge network
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:latest # Make sure the version of opensearch-dashboards matches the version of opensearch installed on other nodes
    container_name: opensearch-dashboards
    ports:
      - 5601:5601 # Map host port 5601 to container port 5601
    expose:
      - "5601" # Expose port 5601 for web access to OpenSearch Dashboards
    environment:
      OPENSEARCH_HOSTS: '["https://opensearch:9200"]' # Define the OpenSearch nodes that OpenSearch Dashboards will query
    networks:
      - opensearch-net
networks:
  opensearch-net:
My understanding is that in a single-node installation the replica count should default to 0 for all indices, but this doesn't happen:
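For reference, the affected indices can be listed with something like this (the system indices are hidden, so wildcard expansion must include hidden indices):

GET _cat/indices?v&h=index,health,pri,rep&health=yellow&expand_wildcards=all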
Related component
Indexing
To Reproduce
- Launch from the docker-compose.yml
- Check the Index Management/Indexes page
- There are unhealthy (yellow) indices listed there
Expected behavior
All indices should be healthy (in the green state) by default
Additional Details
Host/Environment (please complete the following information):
- OS: Debian 12
- Version: 2.16
@jazzl0ver, thanks for filing the issue. Are you referring to the security audit logs index not becoming green on a single-node cluster?
Adding the security label to validate whether we can change the replica count of the security audit index to zero for single-node clusters.
@opensearch-project/admin can we please move this issue to opensearch-project/security ?
@dhwanilpatel it's not only about the security audit index. ISM history is also affected.
[Triage] Thank you for filing this issue @jazzl0ver. I think it makes sense to add a setting to specify the number of replicas on index creation around here: https://github.com/opensearch-project/security/blob/main/src/main/java/org/opensearch/security/auditlog/sink/InternalOpenSearchSink.java#L84-L86
Similarly, the ISM history index has a setting that lets you control the number of replicas.
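If that's the setting in question, something like the following should work (assuming plugins.index_state_management.history.number_of_replicas is the setting name and that it is dynamic; it would only apply to newly created history indices):

PUT _cluster/settings
{
  "persistent": {
    "plugins.index_state_management.history.number_of_replicas": 0
  }
}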
Another affected index family is .opendistro-alerting-alert-history-*:
https://github.com/opensearch-project/alerting/blob/6239d7b665a98efcee7a1001e12a215d9ed53be4/alerting/src/main/kotlin/org/opensearch/alerting/alerts/AlertIndices.kt#L376-L378
Please fix this one as well.
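Their health and replica counts can be checked with something like this (the indices are hidden, hence expand_wildcards):

GET _cat/indices/.opendistro-alerting-alert-history-*?v&h=index,health,pri,rep&expand_wildcards=all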
Another affected index family - .opendistro-anomaly-results-history-*
> Another affected index family - .opendistro-anomaly-results-history-*
This index is fixed now: https://github.com/opensearch-project/anomaly-detection/pull/1362
This also happens for .opendistro-ism-config and .opendistro-job-scheduler-lock.
However, only if plugins.security.system_indices.enabled: true
OpenSearch version: 2.19.0
Hi there.
Same problem here :(.
_cat/shards
...
systemd-2025.03.10 0 p STARTED 11823 31.5mb 100.96.3.8 ofd-data-0
.kibana_1923286954_developer_1 0 p STARTED 53 100.1kb 100.96.3.8 ofd-data-0
.ql-datasources 0 p STARTED 0 208b 100.96.3.8 ofd-data-0
.opendistro-ism-config 0 p STARTED 100.96.3.8 ofd-data-0
.opendistro-ism-config 0 r UNASSIGNED
.opendistro_security 0 p STARTED 9 60.2kb 100.96.3.8 ofd-data-0
.kibana_1 0 p STARTED 53 49.9kb 100.96.3.8 ofd-data-0
kmsg-2025.03.10 0 p STARTED 7418 1mb 100.96.3.8 ofd-data-0
nginx-2025.03.10 0 p STARTED 356 501.3kb 100.96.3.8 ofd-data-0
.opensearch-observability 0 p STARTED 0 208b 100.96.3.8 ofd-data-0
.plugins-ml-config 0 p STARTED 1 4kb 100.96.3.8 ofd-data-0
.opensearch-sap-log-types-config 0 p STARTED 100.96.3.8 ofd-data-0
containers-2025.03.10 0 p STARTED 102131 32.2mb 100.96.3.8 ofd-data-0
.opendistro-job-scheduler-lock 0 p STARTED 4 31.4kb 100.96.3.8 ofd-data-0
.opendistro-job-scheduler-lock 0 r UNASSIGNED
...
Indices .opendistro-ism-config and .opendistro-job-scheduler-lock are in the yellow state on a freshly installed multi-role node (single-node cluster).
The strange thing is that the problem does not exist in the upgrade scenario (v2.15.0 -> v2.19.1).
Edit:
In our provisioning script we inject an index template for both ISM indices, .opendistro-ism-config and .opendistro-job-scheduler-lock, BUT it seems that the templates are ignored. The provisioning script runs after the security-admin script.
GET _index_template/opendistro-ism-config
{
  "index_templates": [
    {
      "name": "opendistro-ism-config",
      "index_template": {
        "index_patterns": [
          ".opendistro-ism-config"
        ],
        "template": {
          "settings": {
            "index": {
              "number_of_shards": "1",
              "number_of_replicas": "0"   <<<
            }
          }
        },
        "composed_of": []
      }
    }
  ]
}
GET /.opendistro-ism-config/_settings
{
  ".opendistro-ism-config": {
    "settings": {
      "index": {
        "replication": {
          "type": "DOCUMENT"
        },
        "hidden": "true",
        "number_of_shards": "1",
        "provided_name": ".opendistro-ism-config",
        "creation_date": "1741682853784",
        "number_of_replicas": "1",   <<<
        "uuid": "SgvSMCzDTkO2wWL3ZJsklw",
        "version": {
          "created": "136407927"
        }
      }
    }
  }
}
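One possible explanation (just a guess, not confirmed from the plugin code): index templates only provide defaults, and settings passed explicitly in the create-index request take precedence, so if the plugin passes number_of_replicas itself when creating the index, the template value is bypassed. A minimal demo of that precedence, with made-up names:

PUT _index_template/demo-template
{
  "index_patterns": ["demo-*"],
  "template": {
    "settings": { "number_of_replicas": 0 }
  }
}

PUT /demo-1
{
  "settings": { "number_of_replicas": 1 }
}

GET /demo-1/_settings?filter_path=*.settings.index.number_of_replicas

The last request returns "number_of_replicas": "1" - the explicit request setting wins over the template's 0.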
Of course, when system index protection is active (plugins.security.system_indices.enabled: true), it isn't possible to change the index settings on the fly, so the workaround of changing the index settings in a shared environment isn't possible.
Does anybody have a clue when this unpleasant bug will be fixed? It seems that it started with 2.16 and still exists in 2.19...
Hi, we are facing the same issue, currently at 3.0.0
Facing the issue with .opendistro-job-scheduler-lock. Is there a known workaround?
> Is there a known workaround?
> Of course, when system index protection is active (plugins.security.system_indices.enabled: true), it isn't possible to change the index settings on the fly (see above).
So the workaround is to temporarily turn off system index protection via the mentioned switch, restart the OpenSearch node, and then change the index settings (see the sketch below). After the change the indices will have only one primary shard (zero replicas), and you can turn system index protection back on and restart the node to apply it.
Unpleasant, but it works as a workaround...
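Something like this (index names as in this thread; adjust to whatever is yellow in your cluster):

# 1. In opensearch.yml (or the compose environment), temporarily disable
#    system index protection, then restart the node:
#      plugins.security.system_indices.enabled: false

# 2. Drop the replica count on the affected indices:
PUT /.opendistro-ism-config,.opendistro-job-scheduler-lock/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}

# 3. Set plugins.security.system_indices.enabled: true again and restart the node.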