control pinto from jenkins
Hi
This is more of a feature request.
I've been spending lots of time splitting our code into modules with a view to managing deployment through Pinto. I'm now at the point where I can start loading the modules into Pinto.
I've been using Jenkins to manage all this and would like a Pinto plugin for Jenkins: things such as seeing which versions are in Pinto as artifacts, creating repositories, and adding packages. I know I can do this with shell scripts, but Jenkins offers much more clarity. Or even a way of automating installation and running all the tests on a pristine install.
I'm quite tempted to have a go myself
Jeremy
Hi Jeremy-
For the time being, the `pinto` utility is the only public interface. So any Jenkins plugin would probably just shell out. But that's no reason not to make a plugin.
Eventually, I want to add hooks so that you could fire off a new build whenever the repository changes. In theory, you could do that now by using the `stacks` or `log` command to poll the repository and look for a change.
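For instance, a polling script along these lines could watch a stack for changes. This is just an untested sketch: it assumes `PINTO_REPOSITORY_ROOT` is exported, and the stack name and state-file path are placeholders.

```bash
#!/bin/bash
# Poll a Pinto stack and exit 0 when its log output has changed since the
# last run. Assumes PINTO_REPOSITORY_ROOT is exported; 'mystack' and the
# state-file path are placeholders.

STATE_FILE="/var/tmp/pinto-mystack.head"

head_sum=$(pinto log mystack | head -n 5 | md5sum | cut -d' ' -f1)
prev_sum=$(cat "$STATE_FILE" 2>/dev/null)

if [ "$head_sum" != "$prev_sum" ]; then
    echo "$head_sum" > "$STATE_FILE"
    exit 0    # change detected -- let the caller kick off a build
fi
exit 1        # nothing new
```

An exit-code convention like this would slot straight into a polling trigger such as the ScriptTrigger plugin mentioned below.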
Integrating Pinto with CI systems definitely makes a lot of sense. I'm a bit preoccupied with Stratopan.com right now, but if you take the lead on this, I'd be happy to merge your work into Pinto.
Jeremy: please ping me on the #pinto channel to discuss what you have in mind and how I could help.
I too want to build a Jenkins-Pinto integration.
hesco
Hi. I thought about an svn-type server listening at the client end to serve information and act on changes, but it's another piece to manage. So the simplest way is to shell out for the calls.
The two workflows I need straight away (both sketched below) are:
- a way to load freshly built packages into a stack. This will probably be a new stack based on the Jenkins job number, e.g. integration_server-10, as I don't want to break prior runs by overwriting entries, and I want to make sure all my packages build before attempting it
- a way to check out and install all entries in a stack into a clean environment, away from any development modules, then run the tests. We have a lot of interdependencies here, but I'm hoping that pulling one package will force the dependencies to pull too, as I went to some trouble to trace our dependencies
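Roughly what I have in mind, as an untested sketch — `master`, the dist glob, and `My::Top::Module` are placeholders, `BUILD_NUMBER` and `WORKSPACE` come from Jenkins, and it assumes `PINTO_REPOSITORY_ROOT` is exported:

```bash
#!/bin/bash
set -e

# One stack per Jenkins run, so prior runs are never overwritten.
STACK="integration_server-${BUILD_NUMBER}"

# Workflow 1: clone the baseline stack and load the freshly built dists.
pinto copy master "$STACK"
for dist in dists/*.tar.gz; do
    pinto add --stack "$STACK" "$dist"
done

# Workflow 2: install everything from that stack into a clean, contained
# local::lib, away from any development modules, then run the tests there.
pinto install --stack "$STACK" --local-lib-contained "$WORKSPACE/clean" My::Top::Module
```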
I'm about to build a shell-based version without a plugin.
I was also thinking of writing a generic shell-output parser (like Expect) and trying to decouple the Pinto requirements into a separate layer.
I have managed to get the example Jenkins plugin working, but that's about as far as I have got so far. It's all Maven and Java, which aren't exactly my top skills.
Jeremy
I looked about a bit and there is a ScriptTrigger plugin which will use a shell script as its mechanism for triggering rebuilds:
https://wiki.jenkins-ci.org/display/JENKINS/ScriptTrigger+Plugin
That is what I was thinking of doing.
Do you just need a way of knowing when the repository (or a specific stack) has changed?
I have always hoped to add support for some kind of hook scripts. For example, you could install a post-commit hook that fires off a build, or hits a URL, or does whatever you want. Would that be helpful?
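As a sketch of how simple a hook body could be — hypothetical, since the hook support doesn't exist yet; the Jenkins URL, job name, and token are placeholders, and the job would need "Trigger builds remotely" enabled:

```bash
#!/bin/bash
# Hypothetical post-commit hook: hit Jenkins' remote build-trigger URL so a
# repository change kicks off a build.
curl -fsS -X POST "http://jenkins.example.com/job/pinto-rebuild/build?token=SECRET"
```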
In my mind, with Pinto serving as a code repository, I would want to `pinto add` the make-dist results of a commit which successfully builds and passes the tests I set up on the Jenkins server, and then `cpanm` install from the Pinto repo the specific artifact for subsequent steps of my build pipeline.
This will require setting up appropriate credentials on the Jenkins server so it can do the `pinto add` part as an appropriate user.
The other element required here is for Pinto to be able to serve the requested artifact, not just the latest version.
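Roughly, the two pieces might look like this — an untested sketch; the repo URL, author ID, and dist names are placeholders, and it assumes pintod is serving the repository over HTTP with the standard CPAN authors/id layout:

```bash
#!/bin/bash
set -e

# After a commit builds and passes its unit tests, publish the dist:
make dist
pinto --root http://pinto.example.com add --author JENKINS My-App-1.42.tar.gz

# Later in the pipeline, fetch that exact artifact by its author path,
# not just whatever version is newest:
cpanm --mirror http://pinto.example.com --mirror-only \
      http://pinto.example.com/authors/id/J/JE/JENKINS/My-App-1.42.tar.gz
```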
I'm pretty sure that Pinto already knows how to do both of these things.
The idea behind a deployment pipeline is that code commits which pass the tests from earlier elements of the pipe are built into re-usable, versioned artifacts which are used in subsequent components of the build pipeline. Artifacts derived from earlier steps, triggered by passing tests, need not be rebuilt or re-unit-tested, but instead can form the basis of a clean build to support exploratory testing or user acceptance testing.
I'm not sure there is any new code needed on the Pinto side to have it serve as an artifact repository to support a CI build server. I would be willing to write a how-to on getting this set up in this manner. In fact, I am already giving a presentation at YAPC::NA next week on 'Continuously Deploying the Camel', in which I intend to highlight the use I have begun making of Pinto to support my deployment processes.
http://www.yapcna.org/yn2013/talk/4673
http://campaignfoundations.com/blog/hesco/Continuously_Integrating_the_Camel
-- Hugh Esco
This is the challenge that I've had using CI...
Without a DarkPAN, the CI server usually just builds & tests the code in your repo using whatever modules it has on hand. Most of the time, those are just installed manually, possibly in a local::lib that the CI server uses for the build. It's nice and fast because it only builds & tests your code. But it is not clean because the modules available to the CI server are not necessarily the ones you will deploy with.
With a DarkPAN (like Pinto), you can use the CI server to make a build with exactly the right modules. This is much cleaner, but it is also much slower. So you probably don't want to do it all the time (Pinto takes 15 minutes to build from scratch). But you still want feedback from the CI system as soon as possible, so you need to do something on every commit.
Ideally, you'd like to have a quick build that you run on every commit, and then a longer, more comprehensive build that you run periodically (like hourly or daily). But there is a chicken-and-egg problem. The quick and dirty build doesn't always use the "right" modules. So if you commit changes to your dependencies, then you may get spurious failures in the quick build. And if the long build is contingent on the quick build passing, then it won't happen.
So, one approach is to have the long build update the modules that the quick build uses (when it succeeds). That way, the quick build environment stays relatively close to the long build, which is (hopefully) identical to your target deployment environment. However, there is still a lag because the quick build happens more often than the long one. But since dependency changes are relatively infrequent, it might be tolerable.
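A sketch of that hand-off — untested; the paths and stack URL are placeholders, and it assumes pintod serves each stack as a CPAN mirror:

```bash
#!/bin/bash
set -e

# Long build: resolve all dependencies from the Pinto stack into a fresh,
# contained local::lib.
cpanm --mirror http://pinto.example.com/stacks/release --mirror-only \
      --local-lib-contained /var/ci/deps.new --installdeps .

# ... run the full test suite against /var/ci/deps.new here ...

# On success, swap the fresh modules in for the quick build to reuse.
rm -rf /var/ci/deps.old
[ -d /var/ci/deps ] && mv /var/ci/deps /var/ci/deps.old
mv /var/ci/deps.new /var/ci/deps
```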
Everyone has a different approach to CI though. That's just the experience that I've had.
I'm visiting a potential client next week and I would love to show them how Pinto could work with Jenkins.
Have either of you made any progress on this? Any new ideas?
I've done no new work on that of late, but I can outline my next steps:
(1) use the tool where a successful build triggers the next build;
(2) create a new build called 'Add Artifact to Repository';
(3) make it an ad-hoc build;
(4) use the text box provided, provide a #!/bin/bash shebang line, then write in bash a `make dist` and a `pinto add` command, using a token to send the correct version to the repo server (a sketch follows below);
(5) have the script return true so it triggers the next step in the deployment pipeline.
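The text box in step (4) could contain something like this — untested; `$PINTO_REPO` and the dist glob are placeholders, and `BUILD_NUMBER` is supplied by Jenkins:

```bash
#!/bin/bash
set -e

perl Makefile.PL
make dist

# BUILD_NUMBER acts as the token carrying the right version through the pipe.
pinto --root "$PINTO_REPO" add --message "Jenkins build ${BUILD_NUMBER}" My-App-*.tar.gz
```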
On another front, I plugged Pinto in my talk Monday here at YAPC, as well as a few other times since then.
Do you have a Jenkins server to test this with? I'm starting a vacation as I leave YAPC::NA Saturday and will not have time to work on this until my return to work on the 20th of June. But if you need, perhaps I could set you up with an account on our Jenkins installation in the meantime.
Let me know.
-- Hugh Esco
Hi
What I had in the end was a job per module for our own code, which did a standard Perl test cycle;
a job to load Pinto with all our modules into a new/cloned repository;
and a job to deploy all the modules.
My plan was to create a 'release candidate' which would then be locked to test against. So each run would create a new repository. I persuaded Pinto to accept duplicate modules by using the svn branch name and Jenkins run number as the author.
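In script form the trick is roughly this — untested; `$REPO`, `SVN_BRANCH`, and the dist name are placeholders, and it relies on Pinto treating the same archive under a different author ID as a distinct distribution:

```bash
#!/bin/bash
set -e

# Derive a PAUSE-style author ID from the svn branch and Jenkins run number
# (e.g. TRUNK_10), so the same dist version can be added more than once.
AUTHOR=$(echo "${SVN_BRANCH}_${BUILD_NUMBER}" | tr -cd 'A-Za-z0-9_' | tr '[:lower:]' '[:upper:]')

pinto --root "$REPO" add --author "$AUTHOR" My-App-1.00.tar.gz
```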
I did all this with shell scripts, and I was planning to use the ScriptTrigger plugin (https://wiki.jenkins-ci.org/display/JENKINS/ScriptTrigger+Plugin) as a pseudo-SCM. There is also a multi-SCM Jenkins module for adding more than one SCM (although I only pulled two svn points with this).
The final piece I had was to scan svn to look for modules and auto-add Jenkins projects using the Job DSL plugin (https://wiki.jenkins-ci.org/display/JENKINS/Job+DSL+Plugin). This is my favourite add-on for Jenkins, as it's a job that creates (or deletes) other jobs. It uses Groovy script, but it's mostly a case of looking at the XML for existing jobs. It also comes with a test harness for use outside Jenkins: https://github.com/jenkinsci/job-dsl-plugin/wiki/User-Power-Moves
I have taken a sideways step for the moment: as we only need Debian, I'm building Debian Perl packages. This is pretty arcane and could be a future project for Pinto (or Pinto could even present itself to a Debian installation as a repository).
I can forward some of my code on Monday as I have no remote access
Please let me know if I can be of any help
Jeremy
> Do you have a Jenkins server to test this with?
No, but I can make one if I need it. I've done it before.
I just hoped you might have some open source plugins I could point to, or a presentation/picture of your workflow that I could use to dazzle my (potential) client.
Hope you had a good time at YAPC. Have a great vacation!
> I persuaded Pinto to accept duplicate modules by using the svn branch name and Jenkins run number as the author
That's interesting. I always imagined that separate stacks would be used for each svn/git branch. And personally, I prefer putting the build number in the distro version number. But that might not play well with Dist::Zilla.
@renormalist has a long-standing request for a "force add" feature that would override any prior distribution with the same name (see #16). But I've struggled to find a way to do it whilst keeping the history sane. Would that be helpful to you here?
I was really thinking that I needed to keep the stack as-is for anything used for a release. As I was rebuilding everything, I didn't want to keep incrementing the version if nothing inside it had changed.
I do like the create-from-scratch approach, and it fits nicely with the Jenkins philosophy where everything is repeatable.
I would think a force add would impact other stacks which also contain that module.