Qubes fwupd upstream
@tlaurion
How is Heads currently versioned? In the Qubes OS script https://github.com/3mdeb/qubes-fwupd/blob/master/src/qubes_fwupd_heads.py#L91 we check the current version and want to download an update from LVFS only if that version is more recent. We can see that the latest tag is from 2017, though. We have used the LOCALVERSION field to insert the version information for testing purposes.
The Python script drops the update file onto the boot partition. The flash-gui.sh script was then extended to look there for updates first.
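A minimal sketch of the lookup side, assuming a hypothetical `/boot/updates` directory and `.rom` naming (the real paths and names used by qubes-fwupd and flash-gui.sh may differ):

```python
from pathlib import Path

# Hypothetical location -- the actual path used by qubes_fwupd_heads.py
# and flash-gui.sh may differ.
UPDATE_DIR = Path("/boot/updates")

def newest_update(candidates):
    """Pick the newest dropped ROM, or None.

    `candidates` is an iterable of (filename, mtime) pairs, e.g. built
    from UPDATE_DIR.glob("*") with p.stat().st_mtime.
    """
    roms = [c for c in candidates if c[0].endswith(".rom")]
    return max(roms, key=lambda c: c[1])[0] if roms else None
```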
The update process is described here: https://github.com/3mdeb/qubes-fwupd/blob/master/doc/heads_udpate.md
@macpijan Heads is versioned per commit ID (as of now) which is inserted at build time under /etc/config https://github.com/osresearch/heads/blob/master/Makefile#L105 https://github.com/osresearch/heads/blob/master/Makefile#L596
Let me know if we should change something
@macpijan I see that the script searches for a ROM inside a known path. Maybe we should implement a hook inside the /boot integrity validation?
- Normal boot codepath entry point for /boot integrity validation
- `verify_global_hashes()` local function
- `check_config` imported global function
So that if a new firmware is dropped in that path, verify_global_hashes could see that the previously user-signed ROM has changed, and then check the ROM signature against the Heads-imported distinguished distro key (oem_distro_keys instead)?
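The proposed decision flow could be sketched as pure logic (all names here are hypothetical illustrations, not the actual Heads shell code):

```python
# Pure-logic sketch of the proposed hook; nothing here is real Heads code.
def decide_rom_action(rom_present: bool, user_hash_matches: bool,
                      distro_sig_valid: bool) -> str:
    """What verify_global_hashes could decide when it sees a ROM in /boot."""
    if not rom_present:
        return "continue-boot"
    if user_hash_matches:
        return "continue-boot"        # nothing changed since the user signed
    if distro_sig_valid:
        return "prompt-flash-update"  # signed by the imported distro key
    return "alert-tamper"             # unknown change: treat as tampering
```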
Heads is versioned per commit ID (as of now) which is inserted at build time under /etc/config
@tlaurion Currently we are parsing the Heads version from the fwupdmgr hwids:
https://github.com/fwupd/fwupd/blob/7fc7da3999a3bb0d7e75b4e72993783d82836329/src/fu-tool.c#L1424
Currently, the qubes-fwupd is looking for the line:
`BiosVersion: CBET4000 <version> heads`
The version number is set in the LOCALVERSION variable. If there is no version tag, it assumes that the BIOS is older than the one that exists on the LVFS.
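For illustration, parsing and comparing that line could look like this (a sketch only; the regex and the naive dotted-number comparison are assumptions, not the wrapper's actual code):

```python
import re

# Sample line as reported by `fwupdmgr hwids`; the wrapper looks for
# the pattern "CBET4000 <version> heads" in BiosVersion.
SAMPLE = "BiosVersion: CBET4000 0.2.0 heads"

def parse_heads_version(line: str):
    """Extract the version string, or None if no version tag is present."""
    m = re.search(r"BiosVersion:\s*CBET4000\s+(\S+)\s+heads", line)
    return m.group(1) if m else None

def is_older(current: str, candidate: str) -> bool:
    """Naive comparison, assuming purely numeric dotted versions."""
    def to_tuple(v):
        return tuple(int(x) for x in v.split("."))
    return to_tuple(current) < to_tuple(candidate)
```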
Let me know if we should change something.
In our case, the easiest to handle would be a release cycle with version numbers. Every release should be uploaded to the LVFS.
@Asiderr can you point us to experts or documentation so that we can move this forward?
My understanding here is that we have to change Heads from a rolling release to a time-based release, where CircleCI would need to push artifacts back into GitHub, and where builds with matching measurements (reproducible hashes across different host/OS builds) would be pushed.
I have no idea right now how to push that change forward, and I do not have the rights over the Heads repo to make such changes; @osresearch hasn't replied to messages sent over Slack on the matter since that PR was opened.
Intuition says that manually changing the version under the coreboot config for each board is not the way to go. I'm clueless about what steps need to be taken here so that:
- [x] this first step is done (versioning)
- [ ] those versioned artifacts are then pushed back to GitHub releases for that version, automatically.
Opened an issue a long time ago on that matter. Will edit when found again.
@Asiderr I also understand that (actions needed, in short):
- [ ] some fwupd code needs to be upstreamed to fwupd (@hughsie)
- [ ] some other QubesOS-related code needs to be upstreamed to QubesOS (@marmarek)
- [ ] Heads' versioning scheme needs to change, and CI configs need to be changed so that versioned builds are pushed back to GitHub to be picked up and uploaded to the LVFS (@Asiderr + @osresearch + @macpijan discussion?)
@tlaurion
can you point us to experts or documentation so that we can move this forward?
The latest documentation about the heads update can be found here: https://github.com/3mdeb/qubes-fwupd/blob/master/doc/heads_udpate.md
Here is the class that handles the heads update process: https://github.com/3mdeb/qubes-fwupd/blob/master/src/qubes_fwupd_heads.py
Intuition says that changing manually the version under coreboot config for each board is not the way to go.
You're right, it should be automated somehow.
some fwupd code needs to be upstreamed to fwupd; some other QubesOS-related code needs to be upstreamed to QubesOS
The source code of the wrapper should be placed in the fwupd/fwupd repo. Packaging scripts should be upstreamed to QubesOS.
Heads' versioning scheme needs to change, and CI configs need to be changed so that versioned builds are pushed back to GitHub to be picked up and uploaded to the LVFS
That's right.
@Asiderr
The source code of the wrapper should be placed in the fwupd/fwupd repo. Packaging scripts should be upstreamed to QubesOS.
You do the PRs?
@tlaurion We're discussing the way of the upstream now: https://groups.google.com/g/fwupd/c/u5JEIQO_rp8 Once the PRs are created, I'll link them.
linked to #839 #859
@Asiderr important question on FWUPD https://github.com/osresearch/heads/pull/859#issuecomment-711289062
@Asiderr
So following merge of https://github.com/osresearch/heads/pull/859/files#diff-18936189b28399cf48703d0c1ec1df33e57c559de2a12f4438be00e6813bdb68R40-R41
We now have this injected automatically into the coreboot config for each built board:
And the filenames for the final produced ROM images can be seen and downloaded, for testing FWUPD's use of CONFIG_MAINBOARD_SMBIOS_PRODUCT_NAME, in ~2 hours from now (since the Heads modules changed, triggering a new CircleCI build with only the musl-cross-make CircleCI cache).
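For reference, the injected config presumably looks something like the following (a sketch only; the exact option names come from the PR linked above, and the values shown here are invented examples that are generated per board at build time):

```
# Sketch -- actual values are generated per board at build time
CONFIG_LOCALVERSION="heads-v0.2.0-123-gabcdef0"
CONFIG_MAINBOARD_SMBIOS_PRODUCT_NAME="x230"
```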
Any comment? Next steps will happen under https://github.com/osresearch/heads/issues/571 to:
- Determine next steps to create releases
- Build artifacts and be able to compare hashes.txt for same board from different hosts (docker images)
- Define a process to confirm reproducibility from there
- Define how LVFS images will get uploaded from there
- Any other consideration I'm missing here.
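The hash-comparison step mentioned above could be sketched as follows, assuming hashes.txt uses the usual `sha256sum` output format (`<digest>  <path>`, which is an assumption about the actual file layout):

```python
# Sketch: compare hashes.txt from two independent builds of the same
# board to confirm reproducibility. Format assumed: "<sha256>  <path>".
def parse_hashes(text: str) -> dict:
    """Map each listed path to its digest."""
    out = {}
    for line in text.splitlines():
        if line.strip():
            digest, _, path = line.partition("  ")
            out[path.strip()] = digest.strip()
    return out

def diverging_files(build_a: str, build_b: str) -> list:
    """Paths whose digests differ (or are missing) between the builds."""
    a, b = parse_hashes(build_a), parse_hashes(build_b)
    return sorted(p for p in a.keys() | b.keys() if a.get(p) != b.get(p))
```

An empty result from `diverging_files` would mean the two builds are bit-for-bit reproducible for every listed artifact.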
@tlaurion Naming and versioning look fine and I'll include the changes in the wrapper.
Define how LVFS images will get uploaded from there
For testing purposes, I'll manually upload the cabinet archives to the LVFS. But this is temporary, and we have to ask @hughsie whether there is any possibility to automate the upload process (I don't see any information on this topic in the documentation).
is any possibility to automate the upload process
We have robot users that are able to upload automatically; Dell uses this to automate firmware being uploaded to the LVFS as it's built in their pipeline. https://gitlab.com/fwupd/lvfs-website/-/blob/master/contrib/example.py has an example upload if that helps.
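For illustration, building such an upload request could be sketched like this (the field name is an assumption, and no endpoint is shown; see the linked example.py for the real robot-user upload call):

```python
import io
import uuid

def build_upload_request(filename: str, payload: bytes):
    """Build a multipart/form-data body for a firmware .cab upload.

    Sketch only: the "file" field name is an assumption -- see
    contrib/example.py in the lvfs-website repo for the actual call.
    """
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        (
            f'Content-Disposition: form-data; name="file"; '
            f'filename="{filename}"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n"
        ).encode()
    )
    body.write(payload)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    content_type = f"multipart/form-data; boundary={boundary}"
    return body.getvalue(), content_type
```

A CI pipeline would then POST that body, with the robot user's token, to the LVFS upload endpoint as done in example.py.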
@macpijan Any news?
@tlaurion Sorry for the delay. Our plan is to go back to it next week and finish by the end of the year.
@macpijan any updates and links would be greatly appreciated!
@Asiderr Please provide links
@tlaurion @macpijan The qubes-wrapper has recently been merged into the fwupd tree: https://github.com/fwupd/fwupd/pull/2710 I can see that https://github.com/osresearch/heads/issues/789 is solved, so we can switch to Qubes R4.1. Now we should focus on the release process, and we should bring the Heads binaries to the LVFS.
@Asiderr @macpijan There is https://github.com/QubesOS/qubes-issues/issues/6792 blocking, while a test ROM seems to resolve all Sandy/Ivy Bridge issues.
I now have a testing platform to test QubesOS 4.1 deployment on.
Contacted @osresearch about cutting a release of the current Heads repo, with binaries from CircleCI uploaded to that GitHub release following https://circleci.com/blog/publishing-to-github-releases-via-circleci/
Ping @osresearch
@tlaurion I updated flash-gui.sh and now it supports current Heads versioning (https://github.com/osresearch/heads/pull/859). Also, I added necessary changes to the fwupd (https://github.com/fwupd/fwupd/pull/4745).
I created a Qubes OS CI in fwupd that generates binary packages, and we are currently discussing the possibility of upstreaming to YUM and APT (fwupd/fwupd#4744).
The last thing we need to think about is the Heads release process. The Heads binaries must be uploaded to the LVFS.
The Heads binaries must be uploaded to the LVFS
Can you use the API? https://lvfs.readthedocs.io/en/latest/upload.html#automatic-uploads