rsyslog-pkg-ubuntu
Develop some type of CI/QA for the packages
Ideally, we should have some system to automatically test the generated packages routinely (once a day would be nice for the daily builds). Exactly how this can be done still needs to be worked out.
A rough idea (but nothing more) is along these lines - for a minimal system:
- use Travis CI scheduled builds (gives us Ubuntu 14.04 testing only, but at least that)
- add ppa
- try to install components from it, check that apt succeeds
- we could then possibly run an adapted rsyslog testbench (more work to do)
Even without the testbench it would be better than what we have today. Let's reach for the low-hanging fruit, then build on that. It would be great if we could find someone willing to help set such a thing up.
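The minimal system sketched above could start as a small shell script along these lines. The package names are illustrative, and the script defaults to a dry run (it echoes the commands) so it can be inspected safely; set APT and ADDREPO to the real commands to execute it.

```shell
#!/bin/sh
# Minimal PPA smoke test sketch. Package list is illustrative.
# Dry run by default; set APT="sudo apt-get" and
# ADDREPO="sudo add-apt-repository" to execute for real.
APT="${APT:-echo sudo apt-get}"
ADDREPO="${ADDREPO:-echo sudo add-apt-repository}"

# Register the PPA and refresh the package index.
$ADDREPO -y ppa:adiscon/v8-stable
$APT update

# Install each package separately so one failure does not mask the rest.
failed=""
for pkg in rsyslog rsyslog-relp rsyslog-mmjsonparse; do
    $APT install -y "$pkg" || failed="$failed $pkg"
done

if [ -n "$failed" ]; then
    echo "FAILED:$failed"
    exit 1
else
    echo "OK: all packages installed"
fi
```

The per-package loop matters: a single `apt-get install pkg1 pkg2 ...` aborts on the first missing package, hiding the status of the others.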
@rgerhards What do you think of using Docker containers for this? The steps you've already noted would probably be enough to catch issues such as #73.
Probably a set of containers would do:
- Ubuntu 14.04
- Ubuntu 16.04
- Ubuntu 17.10
and then in a few months a container for Ubuntu 18.04 could be added.
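That container loop could be sketched as follows. It defaults to a dry run (echoing the docker commands); the install steps inside the container are an assumption based on the PPA steps above.

```shell
#!/bin/sh
# Run the PPA install check once per Ubuntu release container.
# Dry run by default; set DOCKER="docker" to execute for real.
DOCKER="${DOCKER:-echo docker}"

for tag in 14.04 16.04 17.10; do
    $DOCKER run --rm "ubuntu:$tag" sh -c "
        apt-get update &&
        apt-get install -y software-properties-common &&
        add-apt-repository -y ppa:adiscon/v8-stable &&
        apt-get update &&
        apt-get install -y rsyslog
    " || echo "FAILED on ubuntu:$tag"
done
```

Adding a new release later is then just one more tag in the loop.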
Ideally, we should have some system to automatically test the generated packages routinely (once a day would be nice for the daily builds). Exactly how this can be done still needs to be worked out.
@rgerhards GitHub Actions supports scheduled events:
https://help.github.com/en/actions/reference/events-that-trigger-workflows#scheduled-events-schedule
Any interest in using this? I've got some experience setting up Action Workflows for some of my projects and could take a stab at this if you'd be willing to use it. I recall reading somewhere that you prefer to be hosting provider agnostic when possible, but with the reference to Travis I assume that you don't have a strong preference here.
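As a sketch of what such a scheduled Workflow could look like, here is a minimal YAML fragment. The schedule trigger is the point; the job body is an assumption based on the PPA steps discussed earlier, and the file name mirrors the one later used in the fork.

```yaml
# .github/workflows/install-rsyslog-packages-from-ppa.yml (sketch)
name: Install rsyslog packages from PPA
on:
  schedule:
    - cron: "0 4 * * *"   # once a day at 04:00 UTC
jobs:
  install:
    runs-on: ubuntu-18.04
    steps:
      - name: Add PPA and install
        run: |
          sudo add-apt-repository -y ppa:adiscon/v8-stable
          sudo apt-get update
          sudo apt-get install -y rsyslog
```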
Any interest in using this?
Definitely interested in it! Any samples or cooperation would be appreciated.
Great, I'll try taking a stab at this soon and will report back. Once the PPA GHAW is working properly, a similar job could probably be set up for OBS.
@rgerhards Still working on this, but so far, so good:
https://github.com/atc0005/rsyslog-pkg-ubuntu/actions/runs/132537854
Going to replace use of `sudo apt install` with `sudo apt-get install` (let a few slip by) and also capture the output from `systemctl status rsyslog` as an additional item after installing and restarting rsyslog.
I'll squash the commits and submit a PR probably later today or tomorrow.
The file can be found here (for now):
atc0005/rsyslog-pkg-ubuntu/.github/workflows/install-rsyslog-packages-from-ppa.yml@98d7b0f41cca501011d4e04e9a066bca2bb94aee
@rgerhards I made those changes and some additional ones. I switched the timing to hourly and included Ubuntu 20.04 in the mix. By the time you read this the job should have run several more times, and you'll be able to get a sense of how the output collects.
I initially had daily or even every 4 hours as a schedule target, but the whole set completes very quickly; the last run completed in 2 minutes, 7 seconds. Even so, I left the other options commented out so that I could easily switch the timing to whatever you prefer. It may be worth leaving an alternate "test" value staged in comments so that you can switch out the schedule if/when you need to troubleshoot the process.
Already it looks like including Ubuntu 20.04 surfaced at least one unexpected item:
E: Unable to locate package rsyslog-mmjsonparse
##[error]Process completed with exit code 100.
Could be a false-positive though related to an earlier failure.
This was with installing all of the packages in one "block" vs separate installation commands (one per package). I'm going to go ahead and modify the workflow to use the one-command-per-package approach for both jobs.
refs: https://github.com/atc0005/rsyslog-pkg-ubuntu/actions/runs/132678652
@friedl can you pls have a look at the issue
@atc0005 thx - this looks very good. I admit I had only a short glimpse and do not yet fully understand what's going on, but it looks very useful. I guess we need to discuss quite a few things :-)
If I understand correctly, we could also easily use OBS as a test target. I also assume we can test CentOS and Fedora? Note that I myself am currently very focused on OBS and have only very occasionally worked on the old PPA-based system (which I am far from fully understanding).
Reading https://github.com/actions/virtual-environments/issues/45 I wonder if we could work something out along the lines of rsyslog's buildbot CI environment. But granted, testing with Docker is not fully comprehensive (no real system startup or systemd involved), and adding full on-demand VM creation would possibly cause quite some work on the buildbot front. Any good compromise? I mean, testing on Ubuntu is better than nothing, but it's really a small subset, especially from the enterprise PoV...
Already it looks like including Ubuntu 20.04 surfaced at least one unexpected item:
E: Unable to locate package rsyslog-mmjsonparse
##[error]Process completed with exit code 100.
I eventually worked out the syntax, and the Ubuntu 20.04 jobs now run entirely separately from the Ubuntu 16.04 and 18.04 jobs, allowing those to run to completion. Thankfully the chosen settings don't "mask" the results from the Ubuntu 20.04 job runs, allowing us to see that they failed and, to some degree, why they failed.
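One way to express this in a Workflow is a build matrix with fail-fast disabled, so a failing 20.04 entry is reported red without cancelling the other entries. The job name and steps here are illustrative, not the actual workflow contents.

```yaml
jobs:
  install-daily-stable:
    strategy:
      fail-fast: false          # let other matrix entries finish on failure
      matrix:
        os: [ubuntu-16.04, ubuntu-18.04, ubuntu-20.04]
    runs-on: ${{ matrix.os }}
    steps:
      - name: Install packages from PPA
        run: |
          sudo add-apt-repository -y ppa:adiscon/v8-devel
          sudo apt-get update
          sudo apt-get install -y rsyslog
```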
Latest example as of this writing (changed the timing in the last commit):
https://github.com/atc0005/rsyslog-pkg-ubuntu/runs/765022335?check_suite_focus=true
The complaint this time was regarding another package not being found.
I booted up a local LXD Ubuntu 20.04 container, added the ppa:adiscon/v8-devel repo and then proceeded to try to install the packages that the GitHub Actions Workflow attempts to handle, but didn't make it far. Though it appears that the PPA is "registered", apt-cache policy rsyslog seems to illustrate that the PPA is not being consulted. When I repeat the process, but this time adding ppa:adiscon/v8-stable, packages from that PPA are installed.
Seems that something is up with the adiscon/v8-devel PPA specific to Ubuntu 20.04.
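For reference, the local reproduction boils down to something like this. It defaults to a dry run (echoing the commands); set LXC="lxc" to execute on a machine with LXD installed.

```shell
#!/bin/sh
# Reproduce the PPA check in a local LXD container.
# Dry run by default; set LXC="lxc" to execute for real.
LXC="${LXC:-echo lxc}"

$LXC launch ubuntu:20.04 ppa-test
$LXC exec ppa-test -- add-apt-repository -y ppa:adiscon/v8-devel
$LXC exec ppa-test -- apt-get update
# If the PPA is consulted, its URL shows up in the candidate version table:
$LXC exec ppa-test -- apt-cache policy rsyslog
$LXC delete --force ppa-test
```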
Reading actions/virtual-environments#45 I wonder if we could work something out along the lines of rsyslog's buildbot CI environment. But granted, testing with Docker is not fully comprehensive (no real system startup or systemd involved), and adding full on-demand VM creation would possibly cause quite some work on the buildbot front. Any good compromise? I mean, testing on Ubuntu is better than nothing, but it's really a small subset, especially from the enterprise PoV...
You are right, GitHub Actions limits the virtual Linux environments to Ubuntu. I imagine it works for a lot of use cases (including the scope of this one repo), but not for testing packages intended for other distros.
I've been using LXD containers for quick, local testing and really like how lightweight they are and how they attempt to emulate a full VM environment. It doesn't sound like they would be an option within the environments provided by GitHub Actions, but maybe for buildbot use they would be.
Maybe start with GitHub Actions for this repo to test package installation from PPA and OBS, sort out the kinks there and then work on buildbot for other distros.
Once the buildbot setup is stable, potentially either retire the GitHub Actions setup here or leave it running in parallel.
I booted up a local LXD Ubuntu 20.04 container, added the ppa:adiscon/v8-devel repo and then proceeded to try to install the packages that the GitHub Actions Workflow attempts to handle, but didn't make it far. Though it appears that the PPA is "registered", apt-cache policy rsyslog seems to illustrate that the PPA is not being consulted. When I repeat the process, but this time adding ppa:adiscon/v8-stable, packages from that PPA are installed. Seems that something is up with the adiscon/v8-devel PPA specific to Ubuntu 20.04.
I forgot to add: I'll leave the Workflow running with its current schedule of every 15 minutes in case your team wishes to test resolution of the PPA "not registering" (for lack of a more appropriate description) properly with Ubuntu 20.04. Once that is sorted it should be picked up in the next scheduled Workflow run and provide the results under the Actions tab of my fork. I can also go ahead and clean up the Workflow file and submit as a PR here if you'd like to get it merged in at the current state, or @friedl can copy the existing file and test in another fork. Whatever works for you guys.
Also, I went ahead and added a Workflow for installing from OBS. It hasn't been tested yet, but should run shortly. I'll check-in on it later to see if it had issues and if minor, will work to correct them today.
@atc0005 I guess the key point is that we get to some script which cleverly uses docker containers to do as many checks as possible. That way we could integrate into Travis and/or buildbot. We already do some parts of that with e.g. the clang static analyzer. I guess we can borrow ideas from the github actions to generate these scripts.
The bottom line is that we cannot do a full startup and functional test of rsyslog/the package via docker (we can get systemd running, so we may get to "half the real thing", but we would really need on-demand VMs to do this). Better go for the lower-hanging fruit, at least for now.
Quick update: The OBS Workflow is working now. Ubuntu 16.04 didn't like the official setup directions, so I modified the process somewhat to pipe the signing key into apt-key add instead of dropping the file into the trusted-keys location (Ubuntu 18.04+ was fine with this, but not 16.04). Now all three supported LTS editions trust the OBS repo and are passing.
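The key handling change amounts to something like the following. The key URL is illustrative of the OBS repository layout, not copied from the actual workflow; the script defaults to a dry run (printing the command) and only executes when RUN=1.

```shell
#!/bin/sh
# Pipe the OBS signing key into apt-key instead of dropping the file into
# /etc/apt/trusted.gpg.d (the latter did not work on Ubuntu 16.04).
# Dry run by default; set RUN=1 to execute for real.
KEY_URL="https://download.opensuse.org/repositories/home:rgerhards/xUbuntu_16.04/Release.key"
CMD="wget -qO - $KEY_URL | sudo apt-key add -"

if [ "${RUN:-0}" = "1" ]; then
    sh -c "$CMD"
else
    echo "$CMD"
fi
```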
@friedl can you pls have a look at the issue
I think the reason is that the daily stable for 20.04 is still not complete because of the missing librelp. In the log for the 20.04 build of the daily stable, you can see that it tries to install subpackages from rsyslog-8.2001.0, which never existed for 20.04.
The 16.04 install, on the other hand, works fine, because that one completed for the 8.2006.0~ build.
If you feel there is value, I can go ahead and submit the work done thus far in a cleaned-up PR to this repo (likely tomorrow). While it won't provide the necessary support for other related repos (e.g., the CentOS/RHEL package repo), hopefully it will be useful for monitoring the packages generated from this repo?
If you do see some value, at what frequency would you like to run the jobs? Once daily, 4 hours?
@atc0005 I agree it helps in any case, so it makes sense to go forward. If I understand correctly, this job runs on the repo, so it catches both daily and scheduled stable builds. Then I would think once a day is sufficient.
Question: how do we get error notifications? Does it require polling the GitHub project (TBH, that will not work very well), or is there any way to obtain push notifications?
@rgerhards: I agree it helps in any case, so it makes sense to go forward.
Great, I'll prepare a Pull Request then. I had hoped to get it done today, but that hasn't worked out. I'll try to get this in tomorrow.
If I understand correctly, this job runs on the repo
It can run anywhere you like, but it may make sense to run it here in this repo just so you can reference the results. GitHub offers "badges" that display the results of the last job (or in this case "jobs"), so right on the main README you could display whether the OBS and PPA package jobs are failing or passing. The badges could link directly to the latest results for each.
so it catches both daily and scheduled stable builds. Then I would think once a day is sufficient.
If we use the current GitHub Actions Workflows I drafted then both of those builds will be tested. I'll update them to use a daily schedule before submitting the PR.
Question: how do we get error notifications? Does it require polling the GitHub project (TBH, that will not work very well), or is there any way to obtain push notifications?
There are multiple ways to be notified:
- Web UI
- https://github.com/notifications
- Email: Pass & Fail or just Fail
- see https://github.com/settings/notifications
Screenshot of the settings (from my personal account; image not reproduced here). I don't recall if there is a GitHub Organization-wide setting.
I thought that there was Webhook support for GitHub Actions based on prior reading, but I may have overlooked it when I checked just now.
@friedl: I think the reason is that the daily stable for 20.04 is still not complete because of the missing librelp. In the log for the 20.04 build of the daily stable, you can see that it tries to install subpackages from rsyslog-8.2001.0, which never existed for 20.04.
Is this only for the short term or is this expected to persist for a while? I ask because I'm trying to determine how the PPA-based workflow should handle that scenario.
Right now the daily stable PPA workflow is configured to allow the Ubuntu 16.04 and 18.04 jobs to continue when the 20.04 job fails. Should that behavior continue (marking the 20.04 job as "experimental"), or should the 16.04 and 18.04 jobs be halted when the 20.04 fails due to a subpackage installation issue?
@friedl: I think the reason is that the daily stable for 20.04 is still not complete because of the missing librelp. In the log for the 20.04 build of the daily stable, you can see that it tries to install subpackages from rsyslog-8.2001.0, which never existed for 20.04.
Is this only for the short term or is this expected to persist for a while? I ask because I'm trying to determine how the PPA-based workflow should handle that scenario.
Right now the daily stable PPA workflow is configured to allow the Ubuntu 16.04 and 18.04 jobs to continue when the 20.04 job fails. Should that behavior continue (marking the 20.04 job as "experimental"), or should the 16.04 and 18.04 jobs be halted when the 20.04 fails due to a subpackage installation issue?
Hi all,
Just looping back to see if you saw my last response and if you have any further feedback. Worst case I can just squash the commits for what I have and submit as-is with further tweaks via follow-up PRs.
@rgerhards This is the path I ended up taking. I cleaned up the work done on the branch, tested it in my fork, and have submitted #104 for review/consideration.
As previously discussed this doesn't cover anything aside from this repo, but perhaps having the workflows execute here, with updated status badges on the README, will make the changes worth including.
@atc0005 sorry for the delay; as you possibly have noticed, I was busy on a related effort, getting the repo to a PR-based workflow. Now working on integrating your work. I guess we can also look into how to merge this all together into a PR-based "check-before-merge" style of pipeline.
@rgerhards: sorry for the delay
I understand. Only so much time in the day!
I guess we can also look into how to merge this all together into a PR based "check-before-merge" style of pipeline.
GitHub Actions supports that as well. From what I remember, you can set multiple `on` triggers (probably not the official term) for where you want a Workflow to run. PR #104 triggers on a schedule, but you can also trigger on pull requests.
For some of my personal projects I have build/linting tasks run when PRs are opened and when the linked branch is updated (force-pushed or fast-forward).
Relevant YAML block:
```yaml
# Run builds for Pull Requests (new, updated)
# `synchronized` seems to equate to pushing new commits to a linked branch
# (whether force-pushed or not)
on:
  pull_request:
    types: [opened, synchronize]
```
I believe that you can combine the two if desired to have the same Workflow run on a schedule and on specific events.
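Combined, the trigger block would look roughly like this (the cron value is just an example):

```yaml
on:
  schedule:
    - cron: "0 4 * * *"          # daily run
  pull_request:
    types: [opened, synchronize]  # plus PR open/update events
```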
Once you are happy with the Workflows, you can set them as required within the GitHub repo configuration and merges will be blocked unless the Workflow runs pass (unless you allow administrators to override the status checks and merge anyway).
@atc0005 just want to point you to the buildbot-based CI I am setting up, in case you haven't yet seen it. This is a tester (the title is no longer correct; the bug is under investigation, it's an OBS problem):
https://build.rsyslog.com/#/builders/239/builds/36
The relevant part of the buildbot config is here:
https://github.com/rsyslog/buildbot-config/blob/master/master/master_include_rsyslogpkg.py#L101
@atc0005
I believe that you can combine the two if desired to have the same Workflow run on a schedule and on specific events.
I think the workflow obviously needs to be adapted so that it takes newly built packages from the PR. This currently looks like a bit of a problem to me (maybe we can set up a pure testing-only repo for this case, but at least it sounds somewhat complicated...). Not keen on the idea of pulling them in via local files (for reproduction).
Currently this looks like a major step / stumbling block to me. I guess it's irrelevant whether it is buildbot or GitHub Actions in this regard (with buildbot we may be able to do some more magic on the builder machine, though).
@rgerhards: maybe we can setup a pure testing only repo for this case
I see what you mean. You'd have to build the code, the packages, have a test repo in place to host the packages and then test installing from that repo. Sounds pretty brittle.
You can, however, install packages locally after building them via apt-get install ./path/to/filename, so while it might not check all of the boxes, just building the package and installing it would be a step forward?
It might almost be worth thinking of the goal as a series of milestones and shoot for the easiest first, then iterate off of the results (which is likely how you're already looking at this).
Package building is outside of my current skillset, however, so I am likely overlooking a lot of blockers.
Tangent: Is there a primary/consolidated list of available packages somewhere? I'm thinking in terms of how to keep the new GitHub Action Workflow files updated with newer packages as they become available (or are phased out).
I'm envisioning a scenario like the official project's changelog: merging a PR there requires a fairly quick follow-up update to the changelog so that the PR changes are reflected.
You can, however, install packages locally after building them via apt-get install ./path/to/filename, so while it might not check all of the boxes, just building the package and installing it would be a step forward?
I think we can create a local repository and install from there. This should be very close to the "real thing", especially when it comes to dependency resolution. Looking at this.
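A local repository along those lines could be set up roughly like this. Paths are illustrative; the script defaults to a dry run and, when executed for real (RUN=1), needs dpkg-dev installed for dpkg-scanpackages.

```shell
#!/bin/sh
# Serve freshly built .debs from a file:// repository so apt resolves
# dependencies the same way it would against a real repo.
# Dry run by default; set RUN=1 to execute for real.
REPO_DIR="${REPO_DIR:-/tmp/local-apt-repo}"

if [ "${RUN:-0}" = "1" ]; then
    mkdir -p "$REPO_DIR"
    cp ./*.deb "$REPO_DIR"/
    ( cd "$REPO_DIR" && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz )
    echo "deb [trusted=yes] file://$REPO_DIR ./" |
        sudo tee /etc/apt/sources.list.d/local-test.list
    sudo apt-get update
    sudo apt-get install -y rsyslog
else
    echo "dry run: would publish $REPO_DIR via file:// and install from it"
fi
```

Because apt consults the Packages index, missing dependencies surface the same way they would against the PPA or OBS.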
It might almost be worth thinking of the goal as a series of milestones and shoot for the easiest first, then iterate off of the results (which is likely how you're already looking at this).
Yup, that's what I am aiming at.
Packing building is outside of my current skillset however, so I am likely overlooking a lot of blockers.
Please also have a look at https://github.com/rsyslog/rsyslog-pkg-ubuntu/pull/117 - comments and suggestions are appreciated. Fighting with caching ATM. This resembles what I already do on buildbot, but Actions has greater potential.
Tangent: Is there a primary/consolidated list of available packages somewhere? I'm thinking in terms of how to keep the new GitHub Action Workflow files updated with newer packages as they become available (or are phased out).
Sounds useful, but is not yet there. I have begun to work on the packaging projects and there is a lot of movement ATM. I guess it makes sense to compile a list once it is stable. Help is always appreciated :-)
Sounds useful, but is not yet there. I have begun to work on the packaging projects and there is a lot of movement ATM. I guess it makes sense to compile a list once it is stable. Help is always appreciated :-)
The only thought I had regarding automating the package list is to set up a GitHub Action that clones the repo and, treating one of the package list files (I forget what they're called) as authoritative, checks that list against the current GitHub Actions Workflows. This would help ensure that the daily installation of available packages stays in sync with the actual available packages.
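A minimal sketch of that sync check, with inline demo data standing in for the real package-list and workflow files (all filenames and package names here are hypothetical):

```shell
#!/bin/sh
# Compare an authoritative package list against the packages a workflow
# installs; report packages the workflow misses.
workdir=$(mktemp -d)
printf 'rsyslog\nrsyslog-relp\nrsyslog-mmjsonparse\n' > "$workdir/authoritative.list"
printf 'rsyslog\nrsyslog-relp\n' > "$workdir/workflow.list"

sort "$workdir/authoritative.list" > "$workdir/a.sorted"
sort "$workdir/workflow.list" > "$workdir/w.sorted"
# comm -23: lines only in the authoritative list
missing=$(comm -23 "$workdir/a.sorted" "$workdir/w.sorted")

if [ -n "$missing" ]; then
    echo "workflow out of sync, missing: $missing"
fi
rm -rf "$workdir"
```

In a real Action this would fail the job (non-zero exit) when the lists diverge, surfacing the drift on the next scheduled run.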
Please also have a look at #117 - comments and suggestions are appreciated. Fighting with caching ATM.
Take a look at these links:
- https://docs.github.com/en/actions/configuring-and-managing-workflows/configuring-a-workflow#using-a-github-hosted-runner
- https://docs.github.com/en/actions/reference/virtual-environments-for-github-hosted-runners
You can specify the runner type for each job in a workflow. Each job in a workflow executes in a fresh instance of the virtual machine. All steps in the job execute in the same instance of the virtual machine, allowing the actions in that job to share information using the filesystem.
In short, I think you'll need to set up different jobs, not one job with many steps. That (based on those docs) should provide a fresh environment for each build.
In short, I think you'll need to set up different jobs, not one job with many steps. That (based on those docs) should provide a fresh environment for each build.
As there is a lot of common work to do, I think it is useful to have it in one job. But multiple jobs may get us better concurrency :) Maybe it's best to combine both and share the common artifact - will give it a try. Thx!
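Combining both could look roughly like this: one build job does the common work and uploads the packages as an artifact, and a matrix of install jobs downloads it, each in a fresh VM. The build script name is hypothetical; the artifact actions are the official ones.

```yaml
jobs:
  build:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - run: ./build-packages.sh        # hypothetical common build step
      - uses: actions/upload-artifact@v2
        with:
          name: debs
          path: "*.deb"
  install:
    needs: build
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-16.04, ubuntu-18.04, ubuntu-20.04]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/download-artifact@v2
        with:
          name: debs
      - run: sudo apt-get install -y ./*.deb
```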