Continuous integration future?
A few days ago I was re-reading the book "Continuous Integration" by Paul Duvall. I find it a really interesting read, especially if you use agile practices.
The book dates from mid 2007, so it is quite recent, and there's a chapter at the end of it which really surprised me. It is titled "The future of continuous integration", and it focuses on two interesting questions:
- How can broken builds be prevented?
- How can builds get faster?
The first question is not a concern for us internally, but the second one is probably one of the toughest problems we've faced here at Codice. Can they be solved with version control?
The author starts examining the first question: can broken builds be prevented? And if so, how? Well, he states something that really shocked me:
Imagine if the only activity the developer needs to perform is to “commit” her code to the version control system. Before the repository accepts the code, it runs an integration build on a separate machine.
Only if the integration build is successful will it commit the code to the repository. This can significantly reduce broken integration builds and may reduce the need to perform manual integration builds.
Then he draws a nice graphic representing an "automated queued integration chain". He introduces something like a "two-phase" commit, so the code doesn't reach the mainline until the tests pass...
I don't know if I'm missing something, because the answer I find is all too obvious, something all plastic users know by heart by now... commit to a separate branch, rebase it from the mainline, run the tests, and only merge up (which would be a "copy up") if the tests pass... Branching is the answer, isn't it?
I mean, I couldn't understand why such a "futuristic" two-phase commit set-up would be needed, when this is precisely what you already get with systems that have good branch support.
I understand that when he states "the only activity the developer needs to perform is to 'commit'", his problem is not actually checking the changes in, but having a place where the code can reside in some sort of intermediate status so that, while the tests run, the developer can continue working.
Again, I must be missing something here, because otherwise I only see one reason to call it a "future improvement": the author is always thinking of "mainline development" (you know, working only with the main branch, or just a few more at most, and checking changes directly into this mainline). Because if you're used to patterns like "branch per task", then you don't have this problem anymore. You're used to delivering your changes to the version control system and continuing to work on something else without ever breaking the mainline.
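Just to make the flow I'm talking about concrete, here is a minimal sketch of the branch-per-task gate. It is plain Python pseudologic, not Plastic's (or anyone's) actual API; the helper functions are made-up stand-ins for whatever your SCM and build server provide.

```python
# Minimal sketch of the "branch per task" gate described above.
# The helpers are deliberately naive stand-ins for real SCM / CI commands;
# only the control flow matters: the mainline is touched only on green.

def rebase_from_main(task_branch: str, mainline: str) -> None:
    print(f"rebasing {task_branch} from the latest {mainline} baseline")

def run_build_and_tests(task_branch: str) -> bool:
    print(f"building and testing {task_branch} on a separate machine")
    return True  # pretend everything is green

def merge_up(task_branch: str, mainline: str) -> None:
    print(f"merging ('copy up') {task_branch} into {mainline}")

def integrate_task(task_branch: str, mainline: str = "main") -> str:
    """Promote a finished task branch without ever breaking the mainline."""
    rebase_from_main(task_branch, mainline)
    if run_build_and_tests(task_branch):
        merge_up(task_branch, mainline)
        return "integrated"
    return "rejected"  # the mainline stays untouched; keep working on the branch

if __name__ == "__main__":
    print(integrate_task("task-1234"))
```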
He continues with:
An alternative approach to preventing broken builds is to provide the capability for a developer to run an integration build using the integration build machine and his local changes (that haven’t been committed to the version control repository) along with any other changes committed to the version control repository.
Of course it is! That's why branch per task is a better alternative than mainline development for almost every development scenario I've been involved in!
The problem behind all these statements has a name: the best-known version control tools out there (including glorified Subversion, which is the tool the book focuses on) have (did I say have? I meant to say have) big problems dealing with branches. They don't always fail at creating a big number of branches (which is what every SVN or CVS user tells me whenever I mention plastic can handle thousands of branches... "mine too", they say); the problem is handling them after a few months (on the "test day" everything works great, doesn't it?), merging them, checking what has been modified on a branch, tracking branch evolution, and so on. And, believe it or not (and that's why we wrote plastic in the first place!), all of these well-known-widely-available-sometimes-for-free tools lack proper visualization methods, proper merge tools (ok, there are third-party ones sometimes) and sometimes even basic features to deal with branches, like true renaming and merge tracking.
I guess that's the reason why, after 200 pages of decent reading, I've found such an obvious chapter, describing as a "future innovation" some well-known and widely used SCM best practices. I'd rather recommend going to the now classic Software Configuration Management Patterns, which I still find the best SCM book ever written.
The question about how to speed up test execution remained unsolved...
Right on. I haven't used Plastic, but this is exactly the reason why I switched to Perforce from SVN a couple of years back and have never looked back. Per-task branches are so much more enjoyable. The only reason not to do it is an SCM that can't deal with many branches and with merging and re-merging branches. But breaking people of the "branching is scary and hard" mindset, even when they have an SCM that supports it, is amazingly difficult.
As the author suggests, I think he is not seeing the reasoning for the two-phase commit.
I have worked over the years with many lazy programmers that never check whether their code compiles on the main branch, or any branch for that matter. They might well be working on their own branch, but when the time comes to merge into the main one they do the merging and the resulting code might not even compile.
Even the more expert programmers can't assure 100% that their code will compile after merging, as there could be a missing reference, circular dependencies, etc.
So even with Plastic I actually think it would be a good idea if, before accepting a commit to the main branch, the version control system checked whether the resulting code would compile or not. This could actually be made optional for commits to any of the branches.
Hi,
Well, I see your point, but I think it is all about testing. I mean, of course there are lazy programmers out there, and I see things get more and more complicated as the team gets bigger and bigger.
With more than, let's say, 50 people checking code into the same repository, you can't be sure they'll all know how to merge correctly, that they're not in a hurry to leave, and so on. That's why, in the first place, I think controlled integration is an interesting topic to watch.
And you're also right that even an experienced programmer can't assure his code won't break after a merge.
That's why we normally recommend the following (there's a small sketch of it after the list):
* One of the experienced programmers plays the integrator role
* He merges each task back to main (he will ask the developer to rebase the task to the latest good baseline if needed)
* For each integrated task, he compiles (step one), runs a set of the unit tests (or all of them when possible) (step two), and runs all the regression tests, if any (step three)
* If any of the steps breaks the build, he rejects the task and asks the developer to check it.
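A tiny sketch of that checklist follows; the step functions are hypothetical placeholders (they are not Plastic or CruiseControl commands), and the point is only the ordering and the rejection path.

```python
# Hypothetical sketch of the integrator's checklist above: run the steps in
# order and reject the task at the first failure, naming the step that broke.

from typing import Callable, List, Tuple

def compile_task(task: str) -> bool:          # step one
    return True

def run_unit_tests(task: str) -> bool:        # step two
    return True

def run_regression_tests(task: str) -> bool:  # step three
    return True

STEPS: List[Tuple[str, Callable[[str], bool]]] = [
    ("compile", compile_task),
    ("unit tests", run_unit_tests),
    ("regression tests", run_regression_tests),
]

def verify_task(task: str) -> str:
    for name, step in STEPS:
        if not step(task):
            # Reject: the task goes back to the developer, who fixes it
            # (rebasing to the latest good baseline if needed) and retries.
            return f"{task} rejected at '{name}'"
    return f"{task} accepted: merge it back to main"

if __name__ == "__main__":
    print(verify_task("task-1234"))
```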
I mean, none of us can guarantee builds won't break, but I think we already have pretty good ways to manage it.
And I still don't see the two-phase thing...
Of course, everyone should compile on their own machine before checking in anywhere, but this can fail to catch errors for a number of reasons.
Development machines are not very well controlled, and a build environment can get broken quite easily. When that happens, you have to choose whether to make do with it, or tear the system down.
Not to mention, developers can introduce a new tool onto their machine and neglect to tell the build team. Or maybe they won't run the entire build (because they don't know how, it takes too long or because they don't have the right tools).
You can get out of that situation by running the build on a dedicated build machine, but to do that you need to properly check in somewhere and run a build.
You can solve that with a two-phase checkin. You check in somewhere, then point the build to that repository and changeset.
Now, you can achieve that through branches (either branch per task, or stable/unstable branches), as you suggest, or through some other method. This is still doing pretty much the same thing as a two-phase checkin, though you're not actually *doing* a checkin...
In the past, I've used labels (in VSS), and seen similar systems used elsewhere.
This is still doing pretty much the same thing as a two-phase checkin, though you're not actually *doing* a checkin...
Ok, but then, what's the point of not doing the check-in?
I mean, we're using plastic and CruiseControl. We use task branches. Each task branch gets built on a separate build box, where it passes all the tests. If something breaks, you (the developer) or someone on the release team can fix it, and the good thing is that, because it is on a branch, you still retain the whole history of what's going on...
Heh, your company obviously doesn't view broken builds in the same way as ours.
Of course, our incremental builds take over an hour, and the clean builds take about four hours, so any careless break means a longer wait to get a good build, which means that testing gets on our case, and other devs can't see if their changes went ok.
Yup, we should probably split our build into smaller segments but the team (and code) organization doesn't favour that.
Isn't the real point of continuous integration defined by its name? To quote M. Fowler: "Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day". So if you stick to the "branch per task", does that mean you divide your tasks to be a couple of hours long?
Also, I think one of the main features/benefits of CI is allowing everyone to do little steps of integration. The added benefit is that the knowledge of the code is shared by all people, since they have to learn about other people's code in order to know how to integrate. As for merging errors, that's why we have tests.
So having some sort of "Integration master" defeats the whole CI purpose, IMHO.
Interesting post, Igor.
So having some sort of "Integration master" defeats the whole CI purpose, IMHO
Yes, I agree; the problem is that not all teams can afford to have all the developers merging. Why? Because not all of them have the required knowledge or experience with the project.
Now another interesting thought you mentioned:
So if you stick to the "branch per task", does that mean you divide your tasks to be a couple of hours long?
My preference here is to try to stick to something like Scrum, so never make tasks longer than 16 hours.
Ok, my question is: if you do mainline development, you are not allowed to commit incorrect code (you'd break the build), so you can't actually use the version control system to save your own checkpoints (as you can do with branch per task, for instance).
Besides, not all tasks can reach a stable status in as few as two hours IMHO. But you probably want to commit before that (so your code is safe :-P)... you simply can't with mainline development.
There are more issues, some of which I'd like to cover in a future post: running all the regression tests takes hours for some projects (it happens to us here at Codice when testing each plastic release)... so after merging to the main branch you need to wait before knowing whether it's stable or not. With branch per task you can continue working; with mainline you'd be stopped, unless, of course, you decide to go on anyway, but then you hit a bigger problem: shooting a moving target, which is a very common problem for development shops in a pre-SCM evolution status.
Hi,
My bet here is: most of the version control tools pushing for the mainline style of development (CVS, SVN and even Perforce) do it only because they can't handle branch per task.
Let's see what happens when SVN 1.5 goes mainstream... they're already starting to talk about branch per task now that they finally (it *only* took 5 years) have merge-tracking...
And GIT also pushes the same model...
The good thing for plastic users is that once they (SVN, CVS...) all reach branch per task (if ever), we'll again be years ahead of them, like we are now with branch handling :-P
Then we'll read books about how good branching is... :-O
Go plastic go!!!
This is an interesting topic for me as I work on a continuous integration server (Pulse). The idea of preventing broken code getting to the mainline has been around for a very long time. There have been a couple of problems with it, however. Firstly, as you mention, a lot of popular SCMs have weak branching/merging capabilities. This takes out one appealing way of preventing breakage by isolating code into branches when using those SCMs.
The second problem is that even those systems where branching and merging are second nature require some extra effort to branch per task. This goes against the nature of CI, which should be as automatic as possible. After all, computers can do this stuff for us, so why create extra hassle?
This is why in the end we implemented personal builds in Pulse. A personal build (like the latter idea from the CI book) is one that runs on your CI server but contains your uncommitted changes. The developer doesn't have to change how they interact with their SCM at all - they just have the option to test whatever changes they want pre-commit. It took more effort to implement on our end (for more powerful SCMs it would be easier, but the reality is we need to support what is popular), but the user need not worry about that!
Pablo, I understand your concerns; these are issues which need to be addressed specifically by each team.
I agree there's a problem when you make changes to the code that could break the build and you then cannot commit them for a long time. I guess that kind of situation merits a separate branch. But I think this can (usually) be avoided by making changes in really small steps.
On our project we use SVN with CruiseControl.NET, and it is true that CC.Net is more suited to single-branch development. We would have to do a lot of customization to be able to use the "branch per task" principle.
Regarding long-running tests: we try to stick to the "ten-minute build" principle. We separated the whole build into several stages in order to achieve this. The first stage runs the unit tests, which typically run quickly, since they don't use the DB. The second stage (integration tests) and the third stage (web tests) run on a separate machine, so we can shorten the build time. The main idea is that the first stage is the one which is important to decide whether to merge or not.
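As an illustration only (the stage names come from the comment above, but the durations and machine names below are invented), the staged build can be modelled so that only the fast first stage gates the merge decision, while the slower stages just report afterwards:

```python
# Illustrative model of the staged "ten-minute build". Durations and machine
# names are made up; the point is that only the fast first stage gates the
# "merge or not" decision, while the slower stages run later on other machines.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Stage:
    name: str
    machine: str
    typical_minutes: int
    gates_merge: bool

PIPELINE: List[Stage] = [
    Stage("unit tests (no DB)", "build box",      8,  gates_merge=True),
    Stage("integration tests",  "test machine 1", 40, gates_merge=False),
    Stage("web tests",          "test machine 2", 60, gates_merge=False),
]

def merge_decision(results: Dict[str, bool]) -> bool:
    """Merge as soon as every gating stage is green; later stages only report."""
    return all(results.get(s.name, False) for s in PIPELINE if s.gates_merge)

# Example: the fast stage passed; the slow ones haven't finished yet.
print(merge_decision({"unit tests (no DB)": True}))  # True -> safe to merge now
```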
The fact is still this: continuous integration means continuous :). The exact time cycle can vary, but the longer we wait to integrate, the more (and harder) work will have to be done to integrate. And then you are "forced" to use "masters", because of the sheer quantity of merging that needs to be done.
As far as small merging steps are concerned, I cannot accept the argument that you cannot trust a team member to do the integration on the stuff he/she has changed. After all, nobody said that he has to do it on his own. Typically he would visit or contact the person "responsible" for a certain part of the code to help him integrate. This way you let inexperienced people learn. On the other hand, if you have problems with undisciplined team members, that's more of an HR problem. :)
Jason, about your comment
A personal build (like the latter idea from the CI book) is one that runs on your CI server but contains your uncommitted changes.
Still, there is the point that you can't commit until the server completes the build/testing, which can take a long time for non-trivial projects, so it means you can't start another task in parallel.
I agree with anonymous in that branch-per-task offers more benefits and will be the future choice for [free] systems once they get good support for it.
Hi again Igor,
On our project we use SVN with CruiseControl.NET, and it is true that CC.Net is more suited to single-branch development. We would have to do a lot of customization to be able to use the "branch per task" principle
Yes, indeed we're considering releasing a customized CC.Net which supports branch per task natively. I think it could be very helpful.
The main idea is that the first stage is the one which is important to decide whether to merge or not
I guess it will really depend on the project. In our case each developer runs:
- unit tests
- smoke tests (implemented in our PNUnit framework, now merged into NUnit)
- graphical tests
They all take about 20 minutes to finish.
Release tests take longer than 8 hours (involving several test machines).
So deciding if a task has to be merged takes at least 20 minutes or so. That's why we use staggered integration practices.
Other teams using plastic prefer to commit each task, so the developer uses his own task-branch, and when he's done, he triggers a build, and if it's correct it checks in to the main branch. This is continuous, but with a failsafe in between.
As far as small merging steps are concerned, I cannot accept the argument that you cannot trust a team member to do the integration on the stuff he/she has changed
I agree with you, but working for Codice I have the chance to visit a lot of different companies, and you find a lot of different situations. There are teams that want to introduce some agile practices, and others that simply can't. There are also situations where a team leader has a big number of developers, and yes, he can have HR problems, but the fact is that he simply can't let all his developers merge to the mainline, unless he wants to spend a lot of time solving problems. I don't say it is the way to go, I just say it happens. :-(
One of the anonymous writers said something interesting we're also aware of: all tools are moving towards branch per task, from SVN to GIT... wonder why? :-P
We have to rush to get something even better :-P
Dave,
I take your point that there is a challenge in practice with the build time. However, the benefit of getting your build times down is huge, so that is the first thing I would attack. A staged build, as suggested by Igor, with the slower tests separated out works well. The risk of breakage is greatly reduced by just running this, and the developer can also run it frequently on their local machine. A judgement call can be made whether this is enough before committing, based on the nature of the change.
Naturally, this means that you are no longer guaranteed not to break the mainline. However, if you have a long build you are forced into this compromise anyway - even if you branch per task. If there are many commits a day (a Good Thing), but your build takes hours, then your builds can't keep up with the commits. As changes can interact in unknown ways, your build is invalidated by any change that is committed in the meantime.
Oh, and if you want to do another task in parallel all you need is a second working copy, which is no bigger deal than working on two task branches.
Hi Jason,
I'd be very interested in integrating Pulse with Plastic. How can I reach you guys?
Oh, and if you want to do another task in parallel all you need is a second working copy, which is no bigger deal than working on two task branches.
Well, I see you did your homework with SVN, but having the code waiting to be "checked in" to the mainline while it sits on the developer's workstation is the reason why many people switch away from SVN in the first place...
I mean, with a proper branching system in place you can commit to a branch, then Pulse can download the code using the standard SCM mechanisms, build it, run the tests, and if something fails any developer could continue working with this branch. You won't be tied to the developer's workstation anymore, or passing zip files back and forth, or reinventing the wheel... just let the version control system do it for you... :-P
Okay, I guess if you're happy with your development process, then that's the most important thing.
All I can say is that I recollect being an "integration master" a few years ago on a middle-sized project (we did "branch-per-task" on ClearCase back then, although "tasks" were pretty long, lasting a few weeks). It wasn't a pleasant experience ;-P
Well, I would rather prefer it if our tests could finish in 1 minute and not 10 hours... :-P
We're always trying to find better ways to improve... also for our customers using plastic...
Haven't you heard, Pablo? SVN is 'the best' SCM tool. Well, according to their marketing team in a recent announcement. So what is Plastic? The bester?
:-D
From the Subversion website:
Merge tracking facilitates the adoption of more sophisticated branching policies. With Subversion 1.5 on the horizon, many companies will want to re-evaluate their branching policies and adopt new ones that more closely fit the need of their development teams.
This webinar explains how to develop and implement branching policies that best fit your organization. The presenters will also show how to use Subversion 1.5’s merge tracking functionality to support parallel development on different branches.
Simply put, for years they acted as "trunk-development" evangelists saying it *IS* the way to go... only to hide the fact that their marketing tool (a.k.a. Subversion) wasn't able to handle branching correctly...
Then they implement decent branching (let's see whether they ever release it) and the practice (branching) is not doomed anymore for them. Shame!
More than 1M users worldwide use SVN on a daily basis just because they don't have to pay for it, but the marketing department at SVN did a very good job: they convinced everyone their technical limitations were, in fact, features... the best SCM, they still say!
jon
JetBrains TeamCity 3.1 offers a tempting alternative to the separate-branch approach for two-phase checkin with its pre-tested commit feature.
When committing changes to the SCM (from within IntelliJ, Eclipse or Visual Studio), the changes are first tested on the continuous integration server, bypassing the SCM. Only when the build succeeds are the changes automatically committed.
Ruben
Maybe you find this one interesting too.
It talks about different integration strategies.
Very interesting topic! One of the more controversial in my opinion.
I think Igor's argument is solid. Of course both the mainline development and branch-per-task approaches have their own advantages/disadvantages, and I personally believe that it has nothing to do with the weakness of the tools.
The problem with mainline coding is that it does not easily allow you to remove features from the current development if they are not to be included in the final product. But mainline coding is supposed to work with agile practices where 1) nothing changes during a sprint and 2) changes should be done in such small amounts that you should never even have to assess whether a requirement will be incorporated or not (it WILL, because it is so cheap...).
The problem with branch-per-task, which sometimes worries me, is lazy developers and the risk of delaying the integration to the end. That's something that has happened to me a lot, and it is not pleasant. It is a well-known best practice to integrate as soon as possible and with as few changes as possible, to reduce risks and fix faster.
The "unsafety" of not commiting has nothing to do with mainline development and "pre-commit" testing approaches. SCM Repositories shall not be used as backups. They are aimed to *share* new product developments and bug fixes with the rest of the team. So, why would you ever want to share something that isn´t working?
An advantage of branch-per-task is that you can have "versions" of your work in progress and go back before you actually do the "commit" (I don't even like to call it "integration") to the main line.
Of course, again this has nothing to do with tools, but if the tool is good enough and you have the features available (like in Plastic), even better! You can use them when you decide to! :)
You want speed? Concerned about branching? Like distributed versioning? Then, forget about Plastic. You want (sorry, I meant WANT) GIT.
What's all this FUD with "well-known-widely-available-sometimes-for-free"???
Did Codice invent branching???
Really, guys, stop marketing and start using valuable tools.
Did I mention Plastic is .NET??? Great idea to use the best-ever file system... is that NTFS???
I see Codice doesn't want to argue with anyone that doesn't think Plastic is the best tool around... shame... and, why not? good luck!
My argument is still the same: use a proper file system and maybe, just maybe, you will draw attention from outside your blog and also, maybe, just maybe you will sell a couple hundred licenses... again, good luck!
Did Codice invent branching???
:-D
No, actually GIT did it... :-O
Did I mention Plastic is .NET??? Great idea to use the best-ever file system... is that NTFS???
Well, Plastic doesn't use NTFS for storage, at least not directly. We do use standard database backends for both data and metadata. Currently we support MySQL, SQL Server and Firebird (which is the one installed by default).
Yes, AFAIK ext3 and ext2 are faster than NTFS, or at least this is true for our tests.
We were running a Linux client and server (on different machines) yesterday, checking our update time (you know, downloading files from a server) against a local GIT test with the same repository (well, actually just one working copy in GIT against a 10Gb plastic repository; anyway the working copy was about 400Mb, but I don't think either git or plastic is much affected by the repository size). The update was taking about 25s for plastic and 30s for GIT. Plastic was sending data through the network (reading from a MySQL database) and GIT was copying locally... But of course update is our fastest operation... We're working on improving all the basic ones...
But I don't think it makes a lot of sense to compare plastic with git, anyway. They're very different and designed with different goals in mind also...
I see Codice doesn't want to argue...
Ooops! I guess you had some delay getting your comment published... my fault!
We're always open to talk about version control... in fact... it's our job!! :-)
So, suggestions, feedback and strong arguments are always welcome...
Again, I think you're missing something here... we don't use a wrong filesystem... we don't use a filesystem at all!!
Using a database backend speeds up development (you're not reinventing the wheel, although I don't rule out implementing a filesystem backend in the near future), gives you a lot of robustness, and scales pretty well. In fact, I don't see why anyone buying a version control system would care about using "a proper filesystem" at all. At least I haven't found it so far (and fortunately we're quite beyond the number you mentioned).
Also, plastic is strongly based on C# and Mono... in fact a number of our servers are running on Linux systems thanks to the Mono implementation, which gives great flexibility, and since 2.0 the GUI is also fully supported...
I think that the philosophy behind continuous integration (and behind avoiding feature branches) is not a workaround for the problem of textual merging. I agree, SVN is certainly not a great tool for merging. Semantic merging (where there is a change in business rules, for example) is a much harder problem, and one that a tool will not be able to solve.
Another big issue (particularly with long-lived feature branches) is that they constrain developers from refactoring the code - refactoring makes merges harder. So, developers will see a problem and will leave it alone to avoid the pain of merging.
To me, CI is a philosophy that aligns well with other traits of healthy agile development (commit frequently, work in small units of work, refactor eagerly, don't break the build, etc.) - it has less to do with the tool, and more to do with managing risk and ensuring that all developers are on the same page.
@Anonymous (it would be great if you'd add your name... :P) I don't agree with you. I think the greatest thing about "feature branches" is that version control becomes much more than just a "delivery system": it becomes a real power tool for developers. You can check in as often as you need... a game changer.
Actually, I apologise - I think I have been talking at cross purposes with you. Whilst I stand by my earlier statement about not liking long-lived feature branches, I am a big fan of short-lived task-based activities. After reading through your 'Introduction to Task Driven Development', I now realise that you are referring to short activities as opposed to months-long mega-activities (which themselves comprise hundreds, if not thousands, of tasks) - I have seen the latter fail spectacularly!
So, I think we are actually on the same page - as long as the tasks are discrete and short enough. To me, this is a refined version of mainline development - developers still treat the trunk as sacred, but they now have the sandbox of their own task to work within. Nobody diverges too far from the common base because baselines and merges are frequent.
I have previously worked under a similar branch-per-task model (aligning with Jira tasks for traceability) with Mercurial, and was a big fan.