Ability to run locally with Docker
As a new HDM user, I would like a clearly prescribed way to put my Hiera data in a specific location, start a container, and begin using HDM.
I am interested in helping developers run this locally against their checked-out control repos so they can make contributions, and a prescribed Docker workflow would allow this. It would also make it much easier to start using HDM against your own data to evaluate the software.
We already discussed this internally and postponed this scenario to a later release, as it raised a lot of discussion.
At the moment HDM requires a fully deployed Puppet code base using r10k or Code Manager, plus access to PuppetDB.
Available environments are read from PuppetDB and the filesystem. Node facts and each node's last used environment are read from PuppetDB. This allows us to parse the hiera.yaml file and identify the hierarchy files in use.
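For illustration, a typical Hiera 5 `hiera.yaml` interpolates node facts into its hierarchy paths, which is why HDM needs facts before it can resolve the concrete data files (the hierarchy levels and paths below are made up for this example, not taken from any particular control repo):

```yaml
---
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data"                      # needs trusted.certname per node
    path: "nodes/%{trusted.certname}.yaml"
  - name: "Per-OS data"                        # needs the os fact per node
    path: "os/%{facts.os.family}.yaml"
  - name: "Common defaults"
    path: "common.yaml"
```

Without facts for a node, the `%{…}` interpolations cannot be resolved and the per-node hierarchy levels cannot be rendered.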
If we run on a local control repo: how should HDM get nodes and their facts so that it is able to render the hiera.yaml hierarchies?
Comment from @oneiros: for testing we use the fake PuppetDB service with local data. We can check whether this approach also works for local development.
To elaborate on this:
If I understand #154 and this issue correctly, you want to run hdm locally on static files (your control repo for Hiera data, and JSON/YAML files with the nodes' facts).
This can be achieved today without Docker roughly in this way:
- Clone the hdm repository and follow the instructions for a manual, non-Docker installation.
- Copy your environments' files from the control repository to `test/fixtures/files/puppet/environments/<environment_name>`.
- For every node, place the output of `facter -p -y` into a file `test/fixtures/files/puppet/nodes/<fqdn>_facts.yaml`.
- Run `bin/fake_puppet_db` and edit your `config/hdm.yml` to use it instead of PuppetDB:

  ```yaml
  puppet_db:
    server: http://localhost:8083
  ```
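Put together, the steps above might look roughly like this on the command line. This is only a sketch: the environment name, node name, and control-repo location are examples, and the commands that depend on your machine (copying the repo, running `facter`, starting the service) are left commented out.

```shell
#!/bin/sh
# Sketch of the manual, non-Docker setup described above.
# Assumes hdm is already cloned and you are in its top-level directory.
# "production" and "web01.example.com" are example names.
ENV_DIR=test/fixtures/files/puppet/environments/production
NODE=web01.example.com

# Create the fixture directories HDM reads from:
mkdir -p "$ENV_DIR" test/fixtures/files/puppet/nodes

# Copy your control repository contents into the environment directory:
# cp -r ~/control-repo/. "$ENV_DIR"/

# On (or for) each node, capture facts as YAML into the fixtures:
# facter -p -y > "test/fixtures/files/puppet/nodes/${NODE}_facts.yaml"

# Finally, start the fake PuppetDB service:
# bin/fake_puppet_db
```

After pointing `config/hdm.yml` at `http://localhost:8083`, HDM reads environments and facts from these directories instead of a real PuppetDB.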
This would "repurpose" the fake PuppetDB service we created for development and testing. If this is a viable use case for hdm, I think we could improve two things:
First, we could improve the fake PuppetDB service to take a configuration option that sets the directory where it looks for files. That way, you would not have to mix your own data with our test data.
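If the fake service grew such an option, the configuration might look something like the fragment below. To be clear, this key does not exist in hdm today; both the key name and the path are hypothetical:

```yaml
# Hypothetical option, not currently implemented in hdm:
fake_puppet_db:
  data_dir: /path/to/your/local/puppet/data
```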
The second thing would be to improve the situation for Docker users. I am no expert here, and thus not sure what would be convenient and reasonably easy to implement. Maybe a second `Dockerfile` for the fake PuppetDB service? And maybe a Docker Compose file?
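A Compose file for that setup might look roughly like the sketch below. Everything in it is an assumption rather than part of the current repository: the second Dockerfile, the service names, the port numbers, the mount path, and the environment variable are all invented to illustrate the idea.

```yaml
# Hypothetical docker-compose.yml sketch; none of these names exist in hdm yet.
services:
  hdm:
    build: .
    ports:
      - "3000:3000"                 # assumed HDM web port
    volumes:
      # Mount your control repo as an environment (example path/name):
      - ./my-control-repo:/hdm/test/fixtures/files/puppet/environments/production
    environment:
      # Invented variable pointing HDM at the fake service:
      HDM_PUPPET_DB_URL: http://fake_puppetdb:8083
  fake_puppetdb:
    build:
      context: .
      dockerfile: Dockerfile.fake_puppetdb   # the proposed second Dockerfile
    ports:
      - "8083:8083"
```

One `docker compose up` would then start both containers, which seems close to the prescribed workflow the original request asks for.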
@ghoneycutt any feedback from your side?