[question] CI Triggering and Branch development
Hey,
Trying to implement an automated pipeline using Conan, Jenkins, and Artifactory. I've already read a bunch of your blog posts trying to cobble together something that works for us, but I seem to be missing some triggering mechanics based on how our repos are organized.
What I've read:
- https://blog.conan.io/2017/03/14/Devops-and-Continouous-Integration-Challenges-in-C-C++-Projects.html
- https://jenkins.io/blog/2017/07/07/jenkins-conan/
- https://docs.conan.io/en/latest/versioning/lockfiles.html
```
Repo1
  AppA (Dependencies: LibA, LibB)
  AppB (Dependencies: LibA, LibC)
  LibA (Dependencies: LibB)
    - Jenkinsfile (Multi-branch pipeline, no scm trigger)
  LibB
    - Jenkinsfile (Multi-branch pipeline, no scm trigger)
  Jenkinsfile (Multi-branch pipeline, scm trigger)
Repo2
  LibC
    - Jenkinsfile (Multi-branch pipeline, no scm trigger)
```
Assumptions/Desires:
- On push to the Repo1 remote, run a pipeline that builds based on what changed, resulting in new top-level binaries for AppA and AppB.
- We specifically want one top-level job that triggers all other needed builds. It's important for the commit to be associated with a single build for our review-gate process.
- It's common for devs to change multiple libraries on a branch and within a single commit.
- We will likely be using package_revision_mode. All of our libraries are internal, so version numbers stay at 1.0.0 and we don't have to constantly update conanfiles.
- Since our work is organized by branches, we typically want the latest from the branch we are on, and we'd like a "fall back to master" behaviour when a library doesn't need to be built on the current branch.
- We want each library to have its own history for metric collection, so a single parameterized pipeline that everything calls won't work. This is solved by creating a multi-branch pipeline for each library but disabling the scm trigger (so we can meet assumption 2).
- Our CM team needs to be able to recreate builds, and I assume the lockfile will have a role here.
- We are building for a couple of different architectures, which we encapsulate via profiles.
- We are using the branch name as the channel, e.g. LibA/1.0.0@dummy/master.
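For illustration, the "build what changed" detection from the first assumption can be sketched outside Jenkins as plain shell. The file list here is made up, standing in for the output of `git diff --name-only origin/master...origin/$BRANCH_NAME`; the filter chain mirrors the one used in the Jenkinsfile below:

```shell
# Stand-in for the git diff output; in CI this would come from
# `git diff --name-only origin/master...origin/${BRANCH_NAME}`.
changed_files='LibA/src/solver.cpp
LibA/conanfile.py
LibB/include/api.h
tools/gen.py
README.md'

# Keep only paths inside a project dir, drop the tools dir, and reduce
# each remaining path to its top-level project name.
changed_projects=$(printf '%s\n' "$changed_files" \
  | grep '/' | grep -v tools | cut -d'/' -f1 | sort -u)

echo "$changed_projects"
```

Top-level files (no slash) and anything under `tools` are ignored; what remains is the set of project directories whose libraries need rebuilding (here, LibA and LibB).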
Questions
- We started off using `conan info --build-order` from the 7-7-17 blog post, but I see that is now deprecated in favor of `conan graph build-order`. Given assumption 3, and that your lockfile doc shows you create the lockfile, build a package, and then call build-order, there doesn't seem to be a way to know which changed library to build first. If I changed LibA and LibB in a single commit and have to build one first to get the build order, how do I pick the first one to build so as to prevent duplicate builds? If I change LibA and LibB, I am going to `conan create` both, and then the build order for LibB will probably tell me to build LibA again. Can I get a build order without building anything?
- I create a new issue branch TASK-1 for development and commit. How does any dependency chain get created at all if the conanfiles/requires are all pointing at the master channel?
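As a sketch of the "fall back to master" idea from the assumptions above, here is the channel-selection logic in plain shell. `package_exists` is a hypothetical stand-in for a real remote query (e.g. something along the lines of `conan search <ref> -r <remote>`):

```shell
# Hypothetical stand-in for a remote lookup: pretend only LibA was
# rebuilt on branch TASK-1, while everything exists on master.
package_exists() {
  case "$1" in
    "LibA/1.0.0@dummy/TASK-1") return 0 ;;
    */master) return 0 ;;
    *) return 1 ;;
  esac
}

# Prefer the current branch channel; fall back to master otherwise.
channel_for() {  # $1 = lib name, $2 = branch
  if package_exists "$1/1.0.0@dummy/$2"; then
    echo "$2"
  else
    echo "master"
  fi
}

liba_channel=$(channel_for LibA TASK-1)
libb_channel=$(channel_for LibB TASK-1)
echo "LibA -> $liba_channel"   # branch channel: LibA changed on TASK-1
echo "LibB -> $libb_channel"   # fallback: LibB untouched on the branch
```

In this made-up setup only LibA was rebuilt on TASK-1, so LibA resolves to the branch channel while LibB falls back to master.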
Below is my scm-triggered Jenkinsfile attempt for Repo1, but I don't think the algorithm is right. I know it doesn't work, and I haven't made any attempt to pass the lockfile around to child jobs so they can update it.
```groovy
def data
def conf_repo_dir
def client
def changed_projects

pipeline {
    agent any  // required by declarative pipeline; was missing
    options {
        skipDefaultCheckout()
    }
    parameters {
        string(name: 'conf_repo_branch', defaultValue: 'master', description: 'Conan config repository branch')
    }
    stages {
        stage("Determine Changed Projects") {
            steps {
                checkout scm
                script {
                    // Diff against master on a branch; inspect the commit itself on master
                    def cmd = "git diff --name-only origin/master...origin/${env.BRANCH_NAME}"
                    if ("${env.BRANCH_NAME}" == "master") {
                        cmd = "git diff-tree --no-commit-id --name-only -r ${env.GIT_COMMIT}"
                    }
                    changed_projects = sh(label: "Get commit files", script: cmd + " | grep '/' | grep -v tools | cut -d'/' -f1 | sort --unique", returnStdout: true).split()
                    echo "Changed projects: ${changed_projects}"
                }
            }
        }
        stage("Configure/Get repositories") {
            steps {
                script {
                    def conan_home = "${env.WORKSPACE}/conan_home"
                    client = Artifactory.newConanClient userHome: conan_home
                    def server = Artifactory.newServer url: "<server>/artifactory", credentialsId: 'ci_account'
                    client.remote.add server: server, repo: "conan", force: true
                    client.run command: "config install <conan settings repo>.git --args \"--branch " + params.conf_repo_branch + "\""
                    data = readYaml file: "${conan_home}/.conan/conan_ci_conf.yml"
                }
            }
        }
        stage("Build") {
            steps {
                script {
                    def user = "dummy"
                    def apps = entries(data.apps)
                    withEnv(["CONAN_USERNAME=${user}", "CONAN_CHANNEL=${env.BRANCH_NAME}"]) {
                        for (int i = 0; i < apps.size(); i++) {
                            def app = apps.get(i)
                            def app_name = app[0]
                            def profiles = app[1]["profiles"]
                            def app_ref = app_name + "/1.0.0@" + user + "/${env.BRANCH_NAME}"
                            def repo = data.repos.get(app_name)
                            def profiles_build_order = [:]
                            create_lockfiles(repo.dir, profiles, client)
                            for (int j = 0; j < changed_projects.size(); j++) {
                                def project_name_version = changed_projects[j] + "/1.0.0"
                                def package_ref = project_name_version + "@" + user + "/${env.BRANCH_NAME}"
                                def package_repo_url = data.repos.get(project_name_version).url
                                def package_repo_recipe_dir = data.repos.get(project_name_version).dir
                                profiles_build_order = get_build_order_for_project(profiles_build_order,
                                                                                   package_ref,
                                                                                   changed_projects[j],
                                                                                   profiles,
                                                                                   "${env.BRANCH_NAME}",
                                                                                   package_repo_url,
                                                                                   package_repo_recipe_dir,
                                                                                   repo.dir,
                                                                                   client)
                            }
                            def tasks_groups = get_tasks_groups(profiles_build_order)
                            echo "Running in parallel: ${tasks_groups}"
                            // Same config repo the pipeline installed above
                            launch_task_group(tasks_groups, "<conan settings repo>.git", params.conf_repo_branch)
                        }
                    }
                }
            }
        }
    }
}

def create_lockfiles(recipe_dir, profiles, client) {
    dir(recipe_dir) {
        for (profile in profiles) {
            // One lockfile per profile/architecture
            client.run(command: "graph lock ./conan_package --profile " + profile + " --lockfile " + profile + ".lock")
        }
    }
}

def get_build_order_for_project(profiles_build_order, package_ref, project_name, profiles, repo_branch, repo_url, recipe_dir, lock_file_dir, client) {
    dir("_build_" + project_name + "_repo") {
        echo "Getting project recipe '${project_name}' ${repo_branch} ${repo_url}"
        git branch: repo_branch, credentialsId: 'ci_account', url: repo_url
        dir(recipe_dir) {
            for (profile in profiles) {
                def lock_file = "../../" + lock_file_dir + "/" + profile + ".lock"
                client.run(command: "create ./conan_package " + package_ref + " --profile " + profile + " --lockfile " + lock_file)
                client.run(command: "graph build-order " + lock_file + " --json=build_order.json --build=missing")
                def bo_json = readJSON file: "./build_order.json"
                if (profiles_build_order[profile] == null) {
                    profiles_build_order[profile] = []
                }
                profiles_build_order[profile].addAll(bo_json["groups"])
                echo "Build order for recipe '${package_ref}' is " + profiles_build_order
            }
        }
    }
    return profiles_build_order
}

def get_tasks_groups(profiles_bo) {
    // Merge all the profiles' groups to run in parallel: first group from profile1
    // with first group from profile2, second with second, and so on
    def tasks_groups = []
    def group_index = 0
    def bo_entries = entries(profiles_bo)  // convert to [key, value] pairs once, outside the loop
    while (true) {
        def tasks = []
        for (int i = 0; i < bo_entries.size(); i++) {
            def profile = bo_entries[i]
            if (profile[1].size() <= group_index) {
                continue
            }
            for (ref in profile[1][group_index]) {
                def name_p = ref.split("@")[0]
                def channel = ref.split("@")[1].split("/")[1]
                def prof_name = profile[0].split("/").last()
                tasks.add([build_label: ref + " (${prof_name})",
                           ref: ref,
                           name_version: name_p,
                           channel: channel,
                           profile: profile[0]])
            }
        }
        if (tasks.size() == 0) {
            break
        }
        tasks_groups.add(tasks)
        group_index += 1
    }
    return tasks_groups
}

def launch_task_group(tasks_groups, conf_repo_url, conf_repo_branch) {
    // Run the groups sequentially; the tasks inside each group run in parallel
    for (int i = 0; i < tasks_groups.size(); i++) {
        def tasks = [:]
        for (int j = 0; j < tasks_groups[i].size(); j++) {
            def a_build = tasks_groups[i][j]
            def label = a_build["build_label"]
            echo "BUILD: ${a_build}"
            def project = a_build["name_version"].split("/")[0]
            tasks[label] = { -> build(job: "${project}/${env.BRANCH_NAME}",
                                      parameters: [
                                          string(name: "build_label", value: label),
                                          string(name: "branch", value: a_build["channel"]),
                                          string(name: "name_version", value: a_build["name_version"]),
                                          string(name: "profile", value: a_build["profile"]),
                                          string(name: "conf_repo_url", value: conf_repo_url),
                                          string(name: "conf_repo_branch", value: conf_repo_branch)
                                      ])
            }
        }
        parallel(tasks)
    }
}

@NonCPS
def entries(m) {
    m.collect { k, v -> [k, v] }
}
```
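The merging idea in `get_tasks_groups` (the i-th group of every profile's build order can run in the same parallel wave) can be sketched standalone. The per-profile build orders below are invented, encoded as `group_index reference` lines:

```shell
# Invented per-profile build orders: one "group_index reference" pair per line.
linux_bo='0 LibB/1.0.0@dummy/master
1 LibA/1.0.0@dummy/master'
windows_bo='0 LibB/1.0.0@dummy/master
1 LibA/1.0.0@dummy/master'

# Everything with the same group index, across all profiles, builds in parallel.
wave_for() {  # $1 = group index
  printf '%s\n' "$linux_bo"   | awk -v g="$1" '$1 == g { print $2 " [linux]" }'
  printf '%s\n' "$windows_bo" | awk -v g="$1" '$1 == g { print $2 " [windows]" }'
}

wave0=$(wave_for 0)
wave1=$(wave_for 1)
printf 'wave 0:\n%s\nwave 1:\n%s\n' "$wave0" "$wave1"
```

Each wave corresponds to one `parallel(...)` call in the Jenkinsfile; wave 1 only starts once every task in wave 0 has finished.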
Hi @Rogue-14,
Thanks a lot for sharing your developments with Conan and CI.
You can get the build order without building anything if you do a `conan export` instead of a `conan create`, then generate the graph lock and ask for the build order. It could be something like this:
```
> cd LibA && conan export . LibA/1.0@dummy/master
> cd ..
> cd LibB && conan export . LibB/1.0@dummy/master
> cd ..
> cd AppA && conan export . AppA/1.0@dummy/master
> conan graph lock AppA/1.0@dummy/master --lockfile=release
> conan graph build-order ./release --json=bo.json --build=missing
```
Regarding associating git branches with channels, maybe you could try re-exporting the affected recipes with the branch name as the channel to test it. We are currently investigating this kind of problem (it has high priority on our roadmap) in order to make some general recommendations on how to address CI workflows. You can check the https://github.com/conan-io/conan/issues/5553 issue, where you will find some more useful information. Hope this helps :)
A new "CI tutorial" section is being added to the docs in https://github.com/conan-io/conan/issues/6589; it will close this ticket. Feedback welcome.
This ticket has been closed by https://github.com/conan-io/docs/pull/3799 (at the moment in the develop2 branch, but it will be published live soon). Please take a look, and don't hesitate to open new tickets for any new questions or issues. Many thanks for your feedback.