dev-sec.github.io
trigger rebuilds of the site on updates to `master` of tracked baselines
- webhook triggers from apache-baseline.
- `inspec json #{apache.links.name.match?(/^Inspec\s*/).url} --reporter json:<trigger_id>.json`
- `cp -f <trigger_id>.json ./data/`
- push a PR from the bot
- the Travis job does the rest, as in https://github.com/dev-sec/dev-sec.github.io/pull/23
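The per-webhook steps above could be sketched roughly as follows. This assumes `inspec` is on the PATH and that the baseline URL and a trigger id are passed in by the webhook handler; `regen_profile_json` is a hypothetical helper name, and the PR step is left out:

```shell
#!/bin/sh
# Sketch of the per-webhook steps above. Assumptions: `inspec` is on
# PATH, and the baseline URL plus a trigger id are passed as arguments;
# `regen_profile_json` is a hypothetical helper name.
set -eu

regen_profile_json() {
  baseline_url="$1"   # e.g. https://github.com/dev-sec/apache-baseline
  trigger_id="$2"

  mkdir -p ./data

  # export the profile as JSON, as in the command above
  inspec json "$baseline_url" --reporter "json:${trigger_id}.json"

  # copy the result into the site's data directory
  cp -f "${trigger_id}.json" ./data/
}

if [ "$#" -eq 2 ]; then
  regen_profile_json "$1" "$2"
fi
```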
@aaronlippold I don't get the exact use case here, can you maybe explain it?
Hi, apologies, I should have explained it more clearly. My thinking is that the site has a set of baselines that it is tracking. When one of those baselines changes, we would like to automatically trigger a PR to regenerate the source JSON for that profile and redeploy the site. I am wondering if we can use webhooks to watch for commits to `master` on each of the baseline GitHub repos; when a commit occurs on the `master` branch of one of the baselines, a PR is automatically generated for the website and Travis rebuilds and deploys the site. That way we don't have to do it manually when we update, say, the Apache baseline or the Linux baseline.
@artem-sidorenko does that make more sense?
@aaronlippold I configured Travis to run the job weekly.

I usually do that in order to notice broken pipelines directly, and not some months later. But here it might have a positive side effect: what about integrating this JSON update logic into the CI job of the site? We would then always get up-to-date information on a weekly basis, and the entire implementation would be much simpler.
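Gating the update on scheduled builds could look roughly like this; a minimal sketch, assuming a shell step in the build. Travis sets `TRAVIS_EVENT_TYPE` to `cron` for scheduled runs, and `update_profile_json` is a hypothetical stand-in for the actual regeneration step:

```shell
#!/bin/sh
# Run the JSON regeneration only for Travis cron builds.
# TRAVIS_EVENT_TYPE is set by Travis CI ("push", "pull_request",
# "api", or "cron"); update_profile_json is a hypothetical stand-in
# for the actual regeneration step.
update_profile_json() {
  echo "regenerating profile JSON"
}

if [ "${TRAVIS_EVENT_TYPE:-}" = "cron" ]; then
  update_profile_json
fi
```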
Yes, I think that would work overall - unless one of the teams is pushing updates to the baselines more than once a week, in which case things may be a bit out of sync.
In thinking about it, I actually realise that if we were to trigger a job off the baselines, it should be off a 'new tagged release', or whatever the 'workflow' for releasing baselines is in dev-sec. You may actually push changes to master but not bundle them into a release.
So, for now, I would say the weekly regen would be fine - what I would hope we could get to is a trigger on the documented release process for the community as a second step.
We can also change it to daily or consider the triggers, but maybe let's start with MVP first. What would be the next step?
hi - I would say that we do a small script to process the baselines.yml file, grab all the InSpec GitHub links, loop through them to create the InSpec profile JSON files, copy those into the data dir, and create a PR. That could be either a scheduled task at first or added to the deploy stage.

Also, if a 'new' profile is discovered, then we would have to add a file in the content directory etc., and even an addition to overview.yml, as we move forward.

I would say the central source of truth is the baselines.yml file - which we may want to actually build a YAML schema around and use to drive all the scripts/automation.

What do you think?
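That loop could be sketched roughly as below. This assumes baselines.yml contains GitHub URLs on lines like `url: https://github.com/dev-sec/apache-baseline` (the real schema may differ), that `inspec` is on the PATH, and it mirrors the `--reporter json:` form mentioned earlier; the PR step is omitted:

```shell
#!/bin/sh
# Sketch of the baselines.yml loop described above. Assumptions:
# baselines.yml lists one GitHub URL per baseline (the real schema
# may differ), and `inspec` is on PATH.
set -eu

process_baselines() {
  yml_file="$1"
  mkdir -p data

  # grab all GitHub links from the YAML file
  grep -o 'https://github.com/[^ "]*' "$yml_file" |
  while read -r url; do
    name=$(basename "$url")   # e.g. apache-baseline
    # export the profile JSON straight into the data dir
    inspec json "$url" --reporter "json:data/${name}.json"
  done
}
```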
> hi - I would say that we do a small script to process the baselines.yml file and grab all the inspec github links and loop through that to create the inspec profile json files and copy them into the data dir and create a PR
I like everything, but not the "create a PR" :-D
Why not just generate the JSONs on the fly, without a PR? We would have baselines.yml as the central source of truth; everything else is generated based on this file. If we avoid the PR, we would have almost the entire thing automated and self-publishing/updating.
Hi, I just thought the Travis job operated off of PRs - that was the only reason I suggested it. And also for traceability, so that we know what changed ("updated profile x per automated scanning", or whatever we want to call it). But if that's overkill, no worries - the automation pattern is what we really want to hit, so sounds good.