In this post I will summarize some of the main principles of Jez Humble and David Farley’s great book on Continuous Delivery and share my experiences of applying them on software projects I’ve been involved in. First, I’d like to share a story to give this discussion some context.
Meet Joe
Joe is a software developer. It’s a regular Friday afternoon, 3 PM. Before heading home, Joe has to start this thing called a “weekly build”. Joe makes sure everyone on the team has checked in their changes to the source control system.
Joe starts the fully-automated build at the click of a button. The current build process takes some time, so he grabs a coffee. Five minutes later Joe returns to his computer, only to be shocked by what his monitor is displaying:
BUILD FAILED.
Joe’s palms suddenly get sweaty, because he’s been in this situation before. It has been a week since the previous build was performed. He inspects the version control history: his team has committed tons of changes this week, most of them with “self-explanatory” descriptions like “Fix for Bug 1421, should work now”.
The operations team will be breathing down his neck waiting for the binaries, since the quality assurance department expects a version of the product containing the latest features to start testing Monday morning. Having no idea how long it will take him this time to fix the build, Joe grabs another coffee and calls his girlfriend:
“Don’t wait up, honey, I’ll be home late.”
Have you ever been in a similar situation? I feel your pain. Read on for some ideas that will make your life easier and your boss happier. If this story doesn’t sound familiar, your team has probably already picked up the practices of continuous integration (or, taking it further, continuous delivery). Good for you!
Continuous delivery in a nutshell
One of the main goals of continuous delivery is to reduce “cycle time”: the time between deciding to do something (fix a bug, implement a new feature) and pushing it out to your client. From a narrower, more technical perspective, it also provides the benefit of a very short feedback loop. Wouldn’t it be awesome if you could verify the consequences of changing a single line of code in a production-like environment within minutes? Both the business and the technical value of reducing cycle time are obvious, so the principles behind continuous delivery are no-brainers for me.
How do you go about reducing cycle time? The book introduces the concept of a deployment pipeline. Each commit passes through the pipeline’s stages (a generic pipeline usually contains pre-commit, commit, acceptance test and performance test stages, but this layout can be as heavy- or lightweight as you want, depending on the project). After these stages, developers, operations or testers can automatically deploy the fully verified version to any machine at the push of a button.
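To make the gating idea concrete, here is an illustrative sketch (the stage names and checks are stand-ins, not a real CI server API): a revision only becomes deployable once every stage before it has passed, and a failing stage stops the revision from moving any further.

```java
import java.util.List;
import java.util.function.Predicate;

/** Illustrative sketch of deployment-pipeline gating (Java 16+ for records). */
public class Pipeline {

    // A stage is a named check that every revision must pass.
    record Stage(String name, Predicate<String> check) {}

    private final List<Stage> stages;

    Pipeline(List<Stage> stages) {
        this.stages = stages;
    }

    /** A revision may be deployed only if every stage passes, in order. */
    boolean isDeployable(String revision) {
        for (Stage stage : stages) {
            if (!stage.check().test(revision)) {
                System.out.println(stage.name() + " FAILED for " + revision);
                return false; // later stages never run for this revision
            }
            System.out.println(stage.name() + " passed for " + revision);
        }
        return true;
    }

    public static void main(String[] args) {
        Pipeline pipeline = new Pipeline(List.of(
                new Stage("commit", rev -> true),         // compile + unit tests
                new Stage("acceptance", rev -> true),     // automated acceptance tests
                new Stage("performance", rev -> false))); // pretend this stage fails
        System.out.println("deployable: " + pipeline.isDeployable("r1042"));
    }
}
```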
A quick overview of the stages that are most important for my current projects:
Pre-commit phase
In this stage, you have just finished a piece of work. You pull in the latest changes from other developers and run all the unit tests. If the bar is red, you fix the integration problem. If the bar is green, you can commit your work and relax for a minute or two. This step raises your confidence that your changes integrate nicely with the rest of the code base and with changes recently made by other developers.
There are some prerequisites for this approach to work:
1. Everyone works on the same main branch
No branch-by-feature. No branching off the main branch for several weeks to try out a new architecture. Branching increases integration cost, thereby effectively lengthening the feedback loop. You don’t want to wait a month to see if your changes integrate nicely into the main branch. I have had the pleasure of working with long-separated branches: merging changes from one branch to another can be a real pain, even with good test coverage and impressive tool support for merging conflicting files.
2. Developers regularly check in completed work
The “regularly” part of this principle follows the same rationale as the one above: the shorter the feedback loop and the time between integrations, the better. The “completed” part is equally important: your code base should be in a releasable state at all times. The rest of the pipeline depends on this constraint, since every commit that passes through the entire pipeline is a potential deployment to production.
3. You have a comprehensive and fast unit test suite
It’s always a good thing to have fast unit tests. I use the shortcut to run unit tests almost as much as I use Ctrl+S. In a continuous delivery context this constraint makes even more sense: since developers commit very frequently and run the tests every time, you’d better not have a suite that needs 30 minutes to run. When you apply unit testing best practices (mocking, sticking to the right level of granularity, not hitting the database, and so on), this should be no problem. For more information on unit testing and TDD, I strongly recommend Growing Object-Oriented Software Guided by Tests.
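To make that concrete, here is a minimal sketch of such a test using JUnit 5 and Mockito (the service, repository and method names are invented for this example). The collaborator that would normally hit the database is mocked away, so the test runs in milliseconds.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class InvoiceServiceTest {

    // Hypothetical collaborator that would normally hit the database.
    interface CustomerRepository {
        int discountPercentageFor(String customerId);
    }

    // Hypothetical class under test.
    static class InvoiceService {
        private final CustomerRepository repository;

        InvoiceService(CustomerRepository repository) {
            this.repository = repository;
        }

        double total(String customerId, double amount) {
            return amount * (100 - repository.discountPercentageFor(customerId)) / 100.0;
        }
    }

    @Test
    void appliesCustomerDiscount() {
        // The repository is mocked, so no database is touched.
        CustomerRepository repository = mock(CustomerRepository.class);
        when(repository.discountPercentageFor("42")).thenReturn(10);

        assertEquals(90.0, new InvoiceService(repository).total("42", 100.0), 0.001);
    }
}
```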
4. Everything is under version control
This means no magic (and, more importantly, unversioned) Excel sheet that populates your application’s static data. No deployment script or server configuration information that resides only on the machine of some operations team member. This is necessary for the deployment pipeline to build your application from source control, but it also grants all the other benefits of version control: easy distribution among team members, parallel editing of any file, the ability to revert to a previous version and a full history of changes. For free!
5. Your build is fully automated
Not only does this significantly speed up build times (which is necessary, as your application will be built very often, i.e. at each commit), it also removes the chance of human error. Your builds become fully reproducible: you can fall back to any version when a customer reports a bug in version 1.3.34 of your software. Don’t customize your build scripts if you don’t have to. For example, .NET project files can easily be compiled from the command line without any modification. If you really have to change them (perhaps due to some legacy steps in the process), stick to a well-known build framework like MSBuild (where you can basically just edit the solution file to incorporate more build steps, like calling a custom compiler) or Ant (mainly used on Java projects, but the sky is the limit). Most importantly: treat your build scripts with the same diligence as you treat your source code. Put them under source control and apply best practices like removing duplication and choosing meaningful names.
Commit phase
In this phase, your continuous integration server picks up your commit, builds the entire application and runs all the unit tests against the entire code base. Why re-run the same tests you just ran on your machine? I’ve only been working in a team for a year, and I have already seen more “works on my machine” bugs than I can remember. By compiling and running the tests on a system that is not the original developer’s machine, you ensure that no absolute paths to “C:/Users/ProgrammingColleagueA/My Documents/TheMostAwesomeProjectEver/App.config” remain and that every file has been checked in correctly. Two fewer reasons for a broken build!
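One habit that helps avoid the absolute-path variant of these bugs is loading configuration from the classpath instead of from a hard-coded location. A small illustrative sketch (the resource name is made up for this example):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class AppConfig {

    /**
     * Loads app.properties from the classpath instead of a developer-specific
     * absolute path, so the CI server and every teammate resolve the same file.
     */
    public static Properties load() throws IOException {
        try (InputStream in = AppConfig.class.getResourceAsStream("/app.properties")) {
            if (in == null) {
                throw new IOException("app.properties not on classpath: was it checked in?");
            }
            Properties properties = new Properties();
            properties.load(in);
            return properties;
        }
    }
}
```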
Acceptance test phase
Once the commit phase has successfully finished, this next phase in the deployment pipeline automatically kicks off.
This phase runs acceptance tests against the binaries in a production-like environment. Unit tests run in the commit phase give some confidence in the quality on a technical level, but this does not directly translate into real business value. Acceptance tests, by contrast, are scenario tests of your application, run in a production-like environment.
Since these tests tend to be a lot slower (partly because of the deployment to a production-like environment), this step is not part of the commit phase. Ideally, these tests are written as a cooperative effort of the whole team: business, testers and developers.
Acceptance tests provide proof that the application meets its acceptance criteria and protect against regressions at a higher level than unit tests. You can simply use a unit testing framework from the xUnit family to create and run these tests. If business people (sometimes less proficient in programming languages) cooperate on these tests (which they should be doing!), some tools worth investigating are FitNesse and GreenPepper. These tools allow you to create acceptance tests in a more “natural”, decision-table style.
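As an illustration, FitNesse maps each decision table to a fixture class that the team writes once; rows of the table then become test cases. A minimal sketch of what such a fixture could look like (the domain, the names and the column-to-field matching shown here are assumptions for this example):

```java
import fit.ColumnFixture;

/**
 * Sketch of a Fit/FitNesse column fixture. Each row of the wiki decision
 * table fills in the public fields (inputs) and checks the method (output):
 *
 * | OrderDiscountFixture                    |
 * | order total | gold member? | discount() |
 * | 100.00      | true         | 10.00      |
 * | 100.00      | false        | 0.00       |
 */
public class OrderDiscountFixture extends ColumnFixture {
    public double orderTotal;   // input column
    public boolean goldMember;  // input column

    public double discount() {  // output column, compared against the table
        // A real fixture would delegate to the actual application code;
        // the rule is inlined here to keep the sketch self-contained.
        return goldMember ? orderTotal * 0.10 : 0.0;
    }
}
```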
Acceptance tests should run against the same binaries that were built in the previous phase. Actually, every following stage in the pipeline (including deployment) should use the same binaries created in the commit phase. No matter how “reproducible” your builds really are, this is the only way to know for sure that a specific release has passed all stages of the pipeline successfully. These binaries are stored in an artifact repository, ranging from a strictly managed shared network drive to a full-fledged application like Artifactory or Nexus.
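One simple way to make the “same binaries” rule verifiable is to record a cryptographic hash of each artifact when the commit stage stores it, and check that hash before every later stage or deployment. A sketch of that idea (Java 17+ for HexFormat; the command-line contract is invented for this example):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

public class ArtifactChecksum {

    /** SHA-256 of a binary, recorded at build time and re-checked before each stage. */
    public static String sha256(Path artifact) throws IOException, NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        return HexFormat.of().formatHex(digest.digest(Files.readAllBytes(artifact)));
    }

    // Usage: java ArtifactChecksum path/to/app.jar <hash recorded by the commit stage>
    public static void main(String[] args) throws Exception {
        Path artifact = Path.of(args[0]);
        String expected = args[1];
        if (!sha256(artifact).equals(expected)) {
            throw new IllegalStateException("Artifact differs from the commit-stage build!");
        }
        System.out.println("Same binaries confirmed: " + artifact);
    }
}
```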
All this time, the developer who checked in his code should have been on stand-by.
From checking in his changes to seeing a green bar in the acceptance stage, the developer should not start any new work. This is required because if things go wrong (i.e. tests fail), the developer himself is responsible for fixing the problem he introduced as soon as possible (no one checks in on a broken build). But since the amount of work he performed was small (thanks to the frequent commits), it should be relatively straightforward to pinpoint the problem. This again highlights the importance of fast tests and builds: you don’t want developers going for a fifteen-minute coffee break every hour or so.
Manual fit-for-release testing
In this stage, your binaries have passed all the unit and acceptance tests. Testers can now deploy this version to a test machine and manually verify the correct behaviour, performance, look-and-feel and so on. When they are happy with the version, they can declare it “fit-for-release”. Note that manual testing is now limited to things a computer does not do very well, like judging look-and-feel. Having a comprehensive set of automated tests shifts test effort (which comprises a huge part of your IT budget) from boring, repetitive regression testing to the cool stuff like demoing your application and exploratory testing. A comprehensive test suite also takes effort to maintain, but when done right and embraced by the entire team, I expect those costs to be considerably lower than a battery of human regression testers.
Automatic deployment
After a version of the code base has passed all the previous stages in the pipeline, it can be deployed at will. Anyone who needs a specific version of the application can deploy it to any machine he has access to (be it a test server or his own laptop). This effectively transforms deployment from a “push” (weekly builds, remember?) into a “pull” system.
In order to “pull” this off, there are some things worth noting:
1. Fully automate your deployments
Time for another story. You just finished some pretty amazing feature. You wrote a comprehensive set of tests. You did some exploratory testing to make sure the thing works end-to-end. Satisfied with your work, you check in your code and forget all about it. A while later, you get bug reports about that very same feature. How could this have happened? After investigating your code, finding nothing and finally resorting to inspecting the tester’s machine, you notice that a configuration parameter required for your feature was not filled in correctly. It is impossible to prevent all typos, so the only way to avoid this category of so-called “bugs” is to fully automate your deployment. No manual steps should be necessary. What about configuration information that depends on the specific environment the application is deployed to? Use a mapping file per environment that maps variable names to environment-specific values, and use this file during deployment to fill in the variables.
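A minimal sketch of that mapping-file idea, assuming plain java.util.Properties files named after each environment (the file layout, keys and ${...} placeholder syntax are invented for this example):

```java
import java.io.IOException;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class ConfigureDeployment {

    /**
     * Fills the ${...} placeholders in a config template with values from the
     * per-environment mapping file, e.g. config/test.properties or
     * config/production.properties. Run as one step of the deploy script:
     *
     *   java ConfigureDeployment production app.config.template app.config
     */
    public static void main(String[] args) throws IOException {
        // Load the mapping file for the requested environment.
        Properties env = new Properties();
        try (Reader reader = Files.newBufferedReader(Path.of("config", args[0] + ".properties"))) {
            env.load(reader);
        }

        // Substitute every variable in the template and write the real config.
        String config = Files.readString(Path.of(args[1]));
        for (String key : env.stringPropertyNames()) {
            config = config.replace("${" + key + "}", env.getProperty(key));
        }
        Files.writeString(Path.of(args[2]), config);
    }
}
```

Because only the mapping file differs per environment, the same deployment procedure works unchanged everywhere, which leads directly to the next point.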
2. Use the same script for every deployment
There should be only one version of your deploy scripts, used for every deployment, be it to a test machine, an acceptance test environment, a demo environment or a final deployment into production. By enforcing this constraint, you ensure that:
- No inconsistencies creep in between different versions. I plead guilty to having maintained multiple deploy scripts before: on a project with very long build and deploy times, I opted for a separate deploy script for the shared development environment to “speed things up for the developers”. You wouldn’t believe how many so-called “works on my machine” bugs arise from not keeping the deploy scripts in sync.
- Your deploy scripts stay comprehensible and well-factored. The script is stripped of any environment-specific data, because it has to work for every environment. Since all developers and the entire operations team use it, it must be well written, maintainable and thus understandable.
- Last but definitely not least: deploying to production becomes a stress-free activity. When deploying to production, you can be fairly sure that things will go as planned, since the deploy scripts have been tested many times before.
Managing the pipeline
When using the deployment pipeline, strict management of your versions becomes even more essential. Which build has passed the acceptance tests and can be pulled in by testers at the click of a button? Has my commit broken anything, or has it already been deployed to production? You can write your own lightweight dashboard to manage your pipeline, or you can turn to off-the-shelf products like ThoughtWorks’ Go.
Is it worth the trouble?
Continuous delivery is not a destination. It is a journey in the true agile/theory-of-constraints sense of the word: given your current situation, where does it hurt the most? Fix that problem, guided by the principles of continuous delivery. Lather, rinse, repeat. If you keep the bigger picture in mind, every step you take towards this goal yields significant benefits.
I have worked on projects that started without so much as version control. Moving from that state to a version-controlled code base, then to automated builds and tests, and finally to automatic deployment really is a rewarding experience. It not only reduces repetitive workload and stress for the development, operations and test teams, it significantly increases the quality of your software every step of the way.