puppet-elasticsearch
The parameters secrets and init_defaults defined in an instance lead to a loop
- Module version: 6.3.3
- Puppet version: 4.10 and 5.5
- OS and version: RedHat 7.5 and CentOS 7.3
Bug description
When I configure an instance specifying the options secrets (to manage the file elasticsearch.keystore) and init_defaults, I get the following error:
Error: Found 1 dependency cycle:
(Augeas[defaults_sws-test] => Elasticsearch_keystore[sws-test] => Elasticsearch::Service[sws-test] => Elasticsearch::Service::Systemd[sws-test] => Augeas[defaults_sws-test])
Try the '--graph' option and opening the resulting '.dot' file in OmniGraffle or GraphViz
Error: Failed to apply catalog: One or more resource dependency cycles detected in graph
The file cycles.dot is attached.
The workaround is to use init_defaults_file instead, so that execution enters the if branch in the following code snippet from manifests/service/systemd.pp, starting at row 104 (module version 6.3.3):
# Defaults file, either from file source or from hash to augeas commands
if ($init_defaults_file != undef) {
...
} else {
augeas { "defaults_${name}":
...
It would be nice to be able to use init_defaults since in my use case it's easier to provide a hash rather than uploading a file to the master.
Here's a manifest that leads to the problem
node elasticsearch1.mydomain {
  $config_hash = {
    'ES_HEAP_SIZE' => '1g',
  }

  $instance_hash = {
    'purge_secrets' => true,
    'secrets'       => {
      'cloud.aws.access_key' => 'abc',
    },
  }

  create_resources('elasticsearch::instance', {'test-me' => $instance_hash})

  class { '::elasticsearch':
    version               => '6.3.2',
    security_plugin       => 'x-pack',
    status                => 'unmanaged',
    init_defaults         => $config_hash,
    restart_config_change => true,
  }
}
If the parameter restart_config_change is set to false, then it works as expected. To sum up, it seems to me that restart_config_change = true combined with a defined secrets hash creates the loop.
Another workaround is to define the keystore values yourself, and not rely on the module.
elasticsearch_keystore { $name:
  configdir => "/etc/elasticsearch/${name}/",
  purge     => false,
  settings  => $es_secrets_hash,
  notify    => Elasticsearch::Service[$name],
}
The module does a lot of checking, which is where the dependency loop comes in. Setting restart_config_change to false is not viable in our environment, so this gets us the secrets functionality back.
I wasn't aware there was an issue open for this. Whilst testing Elastic 7.x support, I fell foul of this issue.
As @ml14tlc correctly identified, in manifests/service/systemd.pp, this augeas resource is called:
augeas { "defaults_${name}":
  incl    => "${elasticsearch::defaults_location}/elasticsearch-${name}",
  lens    => 'Shellvars.lns',
  changes => template("${module_name}/etc/sysconfig/defaults.erb"),
  before  => Service["elasticsearch-instance-${name}"],
  notify  => $notify_service,
}
And as @sysadmin1139 noticed, the elasticsearch_keystore resource causes the loop. This is due to the augeas resource name autorequired by this type:
lib/puppet/type/elasticsearch_keystore.rb
autorequire(:augeas) do
  "defaults_#{self[:name]}"
end
If the instance is called es-01, the keystore resource autorequires Augeas['defaults_es-01'], which is the very augeas resource shown above, and that closes the loop.
My simple fix was to rename the resource referenced in elasticsearch_keystore to ks_defaults_#{self[:name]}. I have added this to my Elastic 7.x PR.
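For reference, the renamed autorequire would look roughly like this. This is only a sketch of the fix as described; note that the augeas resource in manifests/service/systemd.pp would need the matching rename as well, since an autorequire that matches no resource simply has no effect.

```ruby
# lib/puppet/type/elasticsearch_keystore.rb
# Sketch of the described fix: autorequire a differently named augeas
# resource so it no longer collides with "defaults_#{name}".
autorequire(:augeas) do
  "ks_defaults_#{self[:name]}"
end
```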