helpdesk
Add a real-world job to weekly.ci.jenkins.io
Service(s)
weekly.ci.jenkins.io
Summary
As mentioned in the UX chat (https://matrix.to/#/!HKutvjxPnajVyCNLhF:matrix.org/$fszL0imTFY3sM1zbQEMhDrkNErAN7V5xCV9NDIqibI4?via=gitter.im&via=matrix.org&via=mozilla.org), it would be helpful for users and for our development team if our weekly.ci.jenkins.io instance had more than simple dummy job results. It would be much more impressive to have a job that contains real data from a bigger project's history.
I set up such a history showcase for the warnings plugin: https://github.com/uhafner/warnings-ng-plugin-devenv. It creates a job with several builds that show the charts of the JUnit, coverage, and warnings plugins:
There are two possibilities to make these results visible:
- Import and run the job under https://github.com/uhafner/warnings-ng-plugin-devenv/tree/main/docker/images/jenkins-controller/preconfigured-jobs/history-coverage-model
- Extract a zipped build and job folder with the precomputed XML files after the Jenkins instance has been set up.
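The second option above could be sketched as a small seeding step that runs after the instance is up. This is only an illustration of the idea, not the actual showcase tooling: the function name, archive layout, and `JENKINS_HOME` paths are assumptions.

```python
import zipfile
from pathlib import Path


def seed_job_history(archive: Path, jenkins_home: Path) -> list[str]:
    """Extract a zipped job folder (with precomputed build XML files)
    into JENKINS_HOME/jobs, so the controller picks up the historic
    builds on its next configuration reload.

    Hypothetical helper: the real showcase may lay out files differently.
    """
    jobs_dir = jenkins_home / "jobs"
    jobs_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        # Entries are expected to look like <job-name>/builds/<n>/build.xml
        zf.extractall(jobs_dir)
        return sorted(zf.namelist())
```

After extraction, the controller would still need a "Reload Configuration from Disk" (or a restart) before the imported builds become visible.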
Reproduction steps
No response
This sort of thing would be great as it means we can see and debug any changes in a weekly that's just gone out.
Hi! Sorry to be late to the party; the Jenkins infra team is a bit busy.
We have this really old issue (https://github.com/jenkins-infra/kubernetes-management/pull/3993), which is the same kind of request as this one: we all want a publicly visible demonstrator.
TL;DR: we have to move weekly.ci.jenkins.io to its own isolated VM instead of running it in Kubernetes, because it is public facing and could be abused (same as ci.jenkins.io). We want it to hold no credentials and to allow no lateral movement if a malicious attacker manages to run shells on the controller (or its eventual agents).
By doing so, we could start a safer process:
- Add 1 or 2 agents (using Docker containers on the same machine, but with different users)
- Add data such as proposed above
- Reset the controller once a day or once a week (with a deployment process that would generate/copy the data, clean up the data disk, and start from zero)
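The reset step in the list above could look roughly like this. It is only a sketch of the idea: the function name, the `seed_dir` of pregenerated data, and the directory layout are all assumptions, and a real deployment would also stop/start the controller around it.

```python
import shutil
from pathlib import Path


def reset_controller_data(jenkins_home: Path, seed_dir: Path) -> None:
    """Hypothetical periodic reset for a demo controller: wipe the jobs
    data and restore the pregenerated seed data, so the instance starts
    from a known-clean state after every cycle.
    """
    jobs_dir = jenkins_home / "jobs"
    # Clean up whatever accumulated since the last reset (abuse, test runs...)
    if jobs_dir.exists():
        shutil.rmtree(jobs_dir)
    # Restore the precomputed showcase data from the seed directory
    shutil.copytree(seed_dir, jobs_dir)
```

Running this on a daily or weekly schedule (cron, systemd timer, or the deployment pipeline itself) would keep the public demonstrator safe to expose while preserving the rich job history.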
Delaying to May
This issue is back in triage, as the infra team does not have the bandwidth to work on it in May. We will most probably restart working on it in June.