Introduce a new workflow that automatically publishes newly merged changes on main as a :latest Docker image.
Changes
```mermaid
graph TD
A[Push to main branch] --> B[Checkout Code]
B --> C[Set Git Commit Timestamp]
C --> D[Set up QEMU]
D --> E[Set up Docker Buildx]
E --> F[Login to DockerHub]
F --> G[Build and Push Multi-Arch Docker Image]
G -->|linux/amd64| H([Create Image: nspanelmanager/nspanelmanager:latest])
G -->|linux/386| H
G -->|linux/arm64| H
G -->|linux/arm/v7| H
G -->|qemu-arm| H
G -->|qemu-aarch64| H
```
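The steps above could be sketched as a workflow along these lines. This is a minimal sketch, not the final file: the action versions, the `DOCKERHUB_*` secret names, and the build context are assumptions to be checked against the repository.

```yaml
name: Publish latest

on:
  push:
    branches: [main]

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Reproducible-builds standard: pin the build time to the commit time.
      - name: Set Git Commit Timestamp
        run: echo "SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)" >> "$GITHUB_ENV"

      - uses: docker/setup-qemu-action@v3
      - uses: docker/setup-buildx-action@v3

      - name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and Push Multi-Arch Docker Image
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7
          push: true
          tags: nspanelmanager/nspanelmanager:latest
          build-args: |
            SOURCE_DATE_EPOCH=${{ env.SOURCE_DATE_EPOCH }}
```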
Note: I added SOURCE_DATE_EPOCH support (https://reproducible-builds.org/docs/source-date-epoch/). It's pretty interesting: we can expose the build time in an environment variable, and since it is a standard we could later reuse it in the "help" UI, e.g. to include it in bug reports. For the moment I added a TODO because the workflow tags everything :test, so that we can safely verify it works as expected. (We still have to test that the multi-architecture images are produced and that they work correctly.)
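For reference, SOURCE_DATE_EPOCH is just a Unix timestamp carried in an environment variable. A minimal shell sketch of deriving and displaying it (the fallback to the current time is my own addition, not part of the workflow):

```shell
#!/bin/sh
# Derive SOURCE_DATE_EPOCH from the last commit, falling back to "now"
# when not running inside a git checkout.
SOURCE_DATE_EPOCH="$(git log -1 --pretty=%ct 2>/dev/null || date +%s)"
export SOURCE_DATE_EPOCH

# Render it human-readable, e.g. for a future "built at" line in the help UI.
date -u -d "@${SOURCE_DATE_EPOCH}" '+Build time: %Y-%m-%d %H:%M:%S UTC'
```

Note that `date -d "@<epoch>"` is GNU coreutils syntax; BSD/macOS `date` uses `-r <epoch>` instead.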
Sorry for the delay. We are missing one important step in the workflow: replacing the version-number placeholders in the files, as in the already existing workflow:
```yaml
- run: |
    cp docker/web/nspanelmanager/web/templates/footer_template.html docker/web/nspanelmanager/web/templates/footer.html
    sed -i 's/%version%/${{ github.ref_name }}/g' docker/web/nspanelmanager/web/templates/footer.html
    cp docs/tex/manual.pdf docker/web/nspanelmanager/manual.pdf
```
Okay, thank you for the notice. I will integrate that.
Also, one thing that's missing here that I forgot to mention (though probably not something we should implement just now) is a final step that updates the docker/config.yaml and docker-beta/config.yaml files to point to the latest release once the publish has finished.
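When we do get to it, that final step might look something like the following. This is a sketch only: I'm assuming the release version lives under a `version:` key in those config files, and the sample path used here is purely illustrative.

```shell
#!/bin/sh
# Sketch: point a config file at the freshly published release.
# In the real workflow the files would be docker/config.yaml and
# docker-beta/config.yaml, and VERSION would come from the release tag.
VERSION="1.2.3"

# Create a sample config so the sketch is self-contained.
printf 'name: NSPanelManager\nversion: "0.0.0"\n' > /tmp/sample-config.yaml

# Assumed key name "version:"; adjust to the real config.yaml schema.
sed -i "s/^version:.*/version: \"${VERSION}\"/" /tmp/sample-config.yaml
cat /tmp/sample-config.yaml
```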
Hmm, we are now getting the error below and I'm not quite sure why. It's not something I've ever seen before.
```
#74 [linux/386 stage-1 14/15] RUN pip install -r requirements.txt # Install python packages
#74 20.80 Installing build dependencies: finished with status 'done'
#74 20.81 Getting requirements to build wheel: started
#74 23.26 Getting requirements to build wheel: finished with status 'error'
#74 23.26 error: subprocess-exited-with-error
#74 23.26
#74 23.26 × Getting requirements to build wheel did not run successfully.
#74 23.26 │ exit code: 1
#74 23.26 ╰─> [47 lines of output]
#74 23.26 Compiling src/gevent/resolver/cares.pyx because it changed.
#74 23.26 [1/1] Cythonizing src/gevent/resolver/cares.pyx
#74 23.26 performance hint: src/gevent/libev/corecext.pyx:1357:0: Exception check on '_syserr_cb' will always require the GIL to be acquired.
#74 23.26 Possible solutions:
#74 23.26 1. Declare '_syserr_cb' as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
#74 23.26 2. Use an 'int' return type on '_syserr_cb' to allow an error code to be returned.
#74 23.26
#74 23.26 Error compiling Cython file:
#74 23.26 ------------------------------------------------------------
#74 23.26 ...
#74 23.26 cdef tuple integer_types
#74 23.26
#74 23.26 if sys.version_info[0] >= 3:
#74 23.26 integer_types = int,
#74 23.26 else:
#74 23.26 integer_types = (int, long)
#74 23.26 ^
#74 23.26 ------------------------------------------------------------
#74 23.26
#74 23.26 src/gevent/libev/corecext.pyx:69:26: undeclared name not builtin: long
#74 23.26 Compiling src/gevent/libev/corecext.pyx because it changed.
#74 23.26 [1/1] Cythonizing src/gevent/libev/corecext.pyx
#74 23.26 Traceback (most recent call last):
#74 23.26 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
#74 23.26 main()
#74 23.26 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
#74 23.26 json_out['return_val'] = hook(**hook_input['kwargs'])
#74 23.26 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#74 23.26 File "/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
#74 23.26 return hook(config_settings)
#74 23.26 ^^^^^^^^^^^^^^^^^^^^^
#74 23.26 File "/tmp/pip-build-env-izotxjpp/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 331, in get_requires_for_build_wheel
#74 23.26 return self._get_build_requires(config_settings, requirements=[])
#74 23.26 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#74 23.26 File "/tmp/pip-build-env-izotxjpp/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 301, in _get_build_requires
#74 23.26 self.run_setup()
#74 23.26 File "/tmp/pip-build-env-izotxjpp/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 317, in run_setup
#74 23.26 exec(code, locals())
#74 23.26 File "<string>", line 54, in <module>
#74 23.26 File "/tmp/pip-install-_mz2o1rz/gevent_b68f469440c14a27b8b8fe401569bd86/_setuputils.py", line 249, in cythonize1
#74 23.26 new_ext = cythonize(
#74 23.26 ^^^^^^^^^^
#74 23.26 File "/tmp/pip-build-env-izotxjpp/overlay/lib/python3.11/site-packages/Cython/Build/Dependencies.py", line 1154, in cythonize
#74 23.26 cythonize_one(*args)
#74 23.26 File "/tmp/pip-build-env-izotxjpp/overlay/lib/python3.11/site-packages/Cython/Build/Dependencies.py", line 1298, in cythonize_one
#74 23.26 raise CompileError(None, pyx_file)
#74 23.26 Cython.Compiler.Errors.CompileError: src/gevent/libev/corecext.pyx
#74 23.26 [end of output]
#74 23.26
#74 23.26 note: This error originates from a subprocess, and is likely not a problem with pip.
#74 23.27 error: subprocess-exited-with-error
```
I think I figured it out, though I'm not quite sure how to test it. The error above (src/gevent/libev/corecext.pyx:69:26: undeclared name not builtin: long) comes from the fact that, when building for i386/x86, the data type long is simply not declared in Python. The package trying to use long is gevent, which is only a dependency of conan, which itself is only used when building the MQTTManager binary during the first stage of the Dockerfile.
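For context on the error message itself: the Python-2 built-in long was merged into int in Python 3 (PEP 237), so any code path that evaluates a bare long raises NameError, which is exactly the rule Cython is applying to the gevent .pyx file. A quick self-contained demonstration:

```python
import builtins

# Python 3 merged the Python-2 `long` type into `int`, so `long`
# is no longer a builtin name on any Python 3 interpreter.
print(hasattr(builtins, "long"))  # False on Python 3

try:
    long  # noqa: F821 -- deliberately references the removed builtin
except NameError as exc:
    print(f"NameError: {exc}")
```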
Previously we used the Home Assistant builder tool to build the images. That tool passes along the BUILDPLATFORM argument, which can be used together with the --platform argument of the FROM instruction in the Dockerfile to specify that, even if an image is being built for i386/x86 (or arm, or whatever), this particular stage should run on the native architecture of the host system (almost always x86_64), and on that architecture the data type long is declared. Doing this also gives a massive performance uplift, since emulating ARM on an x86_64 processor is painfully slow. Therefore, if we keep cross-compiling the MQTTManager on the native architecture as before, we will save a lot of time and solve this issue.
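The idea can be sketched in the Dockerfile roughly like this. It is a sketch, not our actual file: the base images, stage names, and binary path are placeholders, and BUILDPLATFORM/TARGETPLATFORM are the platform ARGs buildx provides automatically.

```dockerfile
# syntax=docker/dockerfile:1

# Stage 1: pinned to the *build* host's platform, so on an x86_64 runner
# this stage runs natively even when the target is i386 or ARM. The
# MQTTManager is then cross-compiled for the target architecture.
FROM --platform=$BUILDPLATFORM debian:bookworm AS mqttmanager-build
ARG BUILDPLATFORM
ARG TARGETPLATFORM
RUN echo "building on $BUILDPLATFORM for $TARGETPLATFORM"
# ... install cmake/conan here and cross-compile MQTTManager for $TARGETPLATFORM ...

# Stage 2: runtime image, built per target platform (under QEMU) as before.
FROM python:3.11-slim
COPY --from=mqttmanager-build /build/MQTTManager /usr/local/bin/MQTTManager
```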