puppetlabs-docker
docker_stack always redeploying stack
Describe the Bug
While deploying a server using the puppetlabs-docker module, the entire stack is redeployed as a corrective change every time the puppet agent runs. Possibly a duplicate of https://github.com/puppetlabs/puppetlabs-docker/issues/629
Expected Behavior
Docker stack should deploy once and only once unless an actual change occurs
Steps to Reproduce
docker_stack code is as follows:
docker_stack { 'black-duck':
  ensure        => present,
  name          => 'hub',
  compose_files => ["${bd_dir}/hub-${bd_version}/docker-swarm/${compose_file}",
                    "${bd_dir}/hub-${bd_version}/docker-swarm/docker-compose.local-overrides.yml"],
  require       => [Class['docker'], Archive['bd_release_package']],
  subscribe     => [File["${cert_dir}/smartrg_com_cert.combined"],
                    File["${cert_dir}/smartrg.com.key"],
                    File["${bd_dir}/hub-${bd_version}/docker-swarm/docker-compose.local-overrides.yml"]],
}
Agent debug output, with the interesting duplicated line:
Info: Checking for stack hub
Debug: Executing: '/bin/docker ps --format {{.Label "com.docker.swarm.service.name"}}-{{.Image}} --filter label=com.docker.stack.namespace=hub'
Debug: Executing: '/bin/docker ps --format {{.Label "com.docker.swarm.service.name"}}-{{.Image}} --filter label=com.docker.stack.namespace=hub'
Info: Running stack hub
Debug: Executing: '/bin/docker stack deploy -c /opt/black_duck/hub-2022.4.0/docker-swarm/docker-compose.yml -c /opt/black_duck/hub-2022.4.0/docker-swarm/docker-compose.local-overrides.yml hub'
Notice: /Stage[main]/Cloud_build::Black_duck/Docker_stack[black-duck]/ensure: created (corrective)
Debug: /Stage[main]/Cloud_build::Black_duck/Docker_stack[black-duck]: The container Class[Cloud_build::Black_duck] will propagate my refresh event
Full log attached: agentdebug_redacted.log
Environment
- Problem existed in v4.1.2 of the puppetlabs-docker module and remained after updating to v4.4.0
- Docker 20.10.12 on CentOS 7.9
I did some research and testing on this, poked around the puppet code, and I think I found the problem.
First off, the duplicated line appears because the check for whether the stack exists is run once per compose file in the list. I'm not sure that's the right way to do the check, but it at least explains the duplication. I also ran the command from the Debug line by hand and got an error: the shell chokes on the space in the format string {{.Label "com.docker.swarm.service.name"}}-{{.Image}}, and since the stack name never appears in the error output, the check for the hub stack always fails. Running the command again with quotes around the format string (i.e., '{{.Label "com.docker.swarm.service.name"}}-{{.Image}}') produced plenty of output containing the stack name.
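For illustration, here is roughly what that difference looks like when the two variants are run by hand against the hub stack from the log above:

# Unquoted: the shell splits the format string at the space, so docker
# receives a truncated template plus a stray argument and errors out
# without ever printing the stack's services.
/bin/docker ps --format {{.Label "com.docker.swarm.service.name"}}-{{.Image}} --filter label=com.docker.stack.namespace=hub

# Quoted: docker receives the template intact and prints a
# service-image line for every container in the stack.
/bin/docker ps --format '{{.Label "com.docker.swarm.service.name"}}-{{.Image}}' --filter label=com.docker.stack.namespace=hub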
I'm not a great Ruby dev, but I think the fix would be to surround the format string in quotes in the puppet code. That should (should) get the stack check working and let it succeed or fail as intended. I hope this helps!
The same (or similar) issue is also present in the docker_compose type, although in our case it only affects 2 of the 4 compose stacks on the same host.
I tried the proposed solution from https://github.com/puppetlabs/puppetlabs-docker/issues/848#issuecomment-1197347774, but it didn't solve the issue for me (it even made things worse: all 4 compose stacks now refresh on each puppet run).
In the case of docker_compose, puppet only reports that the stack was updated; no containers are actually restarted.
Environment:
- Debian 10, Puppet 6.28.0 from Puppet repo
- Docker version 20.10.17
- Docker Compose version v2.10.2
- puppetlabs/docker 5.0.0
The problem is what @dharwood mentioned in https://github.com/puppetlabs/puppetlabs-docker/issues/848#issuecomment-1197347774, but that fix causes a further issue when it comes time to do the service-to-container comparison (at least in the docker_compose provider, which is what I looked at): the quoted format string reaches docker verbatim, so every output line carries the literal quotes, and the expected service-image string must be quoted to match. So you need to update that as well. This works for me:
--- a/lib/puppet/provider/docker_compose/ruby.rb
+++ b/lib/puppet/provider/docker_compose/ruby.rb
@@ -34,7 +34,7 @@ Puppet::Type.type(:docker_compose).provide(:ruby) do
containers = docker([
'ps',
'--format',
- "{{.Label \"com.docker.compose.service\"}}-{{.Image}}",
+ "'{{.Label \"com.docker.compose.service\"}}-{{.Image}}'",
'--filter',
"label=com.docker.compose.project=#{name}",
]).split("\n")
@@ -49,7 +49,7 @@ Puppet::Type.type(:docker_compose).provide(:ruby) do
counts = Hash[*compose_services.each.map { |key, array|
image = (array['image']) ? array['image'] : get_image(key, compose_services)
Puppet.info("Checking for compose service #{key} #{image}")
- [key, compose_containers.count("#{key}-#{image}")]
+ [key, compose_containers.count("'#{key}-#{image}'")]
}.flatten]
# No containers found for the project
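For context on the second hunk: the quoted format string evidently reaches docker verbatim (otherwise that hunk would be unnecessary), so docker treats the single quotes as literal template text and echoes them back around every line of output. The same behavior can be reproduced from a shell by double-quoting so the single quotes survive; the service name web and image nginx:latest below are made up for illustration:

# docker receives the format string with its single quotes intact,
# so each output line is wrapped in literal quotes.
docker ps --format "'{{.Label \"com.docker.compose.service\"}}-{{.Image}}'" --filter label=com.docker.compose.project=hub
# => 'web-nginx:latest'   (note the literal quotes around the line)

That is why the string being counted in the comparison needs the surrounding quotes as well.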
I can submit a PR when I have a minute, but if anyone has a better solution, that would also be fine. I still haven't figured out why this only affects some of my compose stacks and not others.
The above suggested fix has been implemented: https://github.com/puppetlabs/puppetlabs-docker/pull/878
The PR containing the suggested fix has been merged, and a release is underway.
Sorry for the delay, but the fix for this has just been released, so I'm going to close this issue. Feel free to reopen if the issue persists.