DevOps is a philosophy, not a title

On countless occasions, when searching for a job, I’ve seen titles like these:

  • DevOps Engineer
  • DevOps Systems Engineer
  • Website and Cloud DevOps Engineer
  • DevOps Superstar
  • DevOps Lead Developer
  • DevOps Automation Engineer
  • Head of DevOps

But often, the job description includes things like:

  • Monitoring
  • DNS
  • Production systems support
  • Firewall setup
  • Disk images
  • Pager duty

How is any of this related to DevOps? In fact, there is no specific set of skills that a “DevOps Engineer” should have because there is no such thing as a “DevOps Engineer”. Such a title makes as much sense as “Agile Engineer” or “Senior Waterfall Developer”.

DevOps is a methodology, a way of doing things, not a task. Why do you think that so many web companies, both big (Google, Facebook) and small (Etsy, Flickr, Intent Media), are successful at this DevOps thing? It’s because they do DevOps; they don’t just have a DevOps team.


Release Engineer vs Release Manager

I often see these two terms used interchangeably when in reality I don’t think they are. Release engineers are software engineers first: they know about the development cycle, branching strategy, versioning, and so on. Release managers are not necessarily engineers. In fact, release manager is more a role than a title. Someone in this role coordinates the actions that need to be executed for the live release to go out. In this sense, the release engineer can put on the release manager cap just as much as the project lead or senior architect can.

In an ideal world, a release manager would not be needed. The release engineer would be in charge of setting up infrastructure such that the steps required to make a live push are automated. To paraphrase Beyoncé: “If you like it, put a button on it”. Etsy’s Deployinator is a very good example of what an automated release manager should be, but your particular solution doesn’t have to be that fancy; it can boil down to a few deployment scripts, cleverly parameterized and executed from any CI tool.
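As a sketch of how simple that “button” can be, here is a hypothetical parameterized deploy function a CI tool could call. The environment names and commented-out commands are placeholders for illustration, not a real setup:

```shell
# Hypothetical "button" a CI tool can press: one deploy function,
# parameterized by target environment and version. All names here
# are placeholders.
deploy() {
  env="$1"      # e.g. staging or production
  version="$2"  # the release being pushed

  case "$env" in
    staging|production) ;;
    *) echo "unknown environment: $env" >&2; return 1 ;;
  esac

  echo "deploying version $version to $env"
  # Real steps would fetch the already-built artifact and push it, e.g.:
  # curl -fo app.war "https://repo.example.com/releases/app-$version.war"
}
```

A CI job then reduces to a single call like `deploy production 1.4.2`: if you like it, put a button on it.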

It is understandable that many organizations need a release manager, but this shouldn’t be seen as a permanent requirement. Mixing in an engineering approach (which can come from the development engineers), even if it’s just part time, will gradually make the deployment process smoother. Even if it sounds counterintuitive, good release engineers should aim to put themselves out of a job by automating every step of the process.

Why developers shouldn’t deploy artifacts

Well, the title is a bit misleading. Developers should be empowered to deploy artifacts, that’s the whole point of the DevOps philosophy; just not from their local environments.

Developer Alice is working on a feature on Module 1 with version 1.0.0-SNAPSHOT. Alice finishes the feature but she hasn’t pushed her changes.

At the same time, Bob is working on a different feature on the same Module 1. He finishes, commits his changes, checks for any incoming changes and finds none. At that point he figures (correctly) that his changes are the latest on the repository, so he pushes them successfully and then does mvn deploy (or whatever the deployment command is).

Now Alice decides it’s time to push her changes, but (thinking there aren’t any incoming changes) she runs mvn deploy before updating her workspace. Because the version is a SNAPSHOT, the most recently deployed artifact (containing only Alice’s feature) overwrites the previous one (containing only Bob’s feature), leaving the published artifact in an inconsistent state. Any downstream projects depending on it run the risk of becoming unstable.

When Alice then tries to push her changes, she will realize that she has to merge them with the incoming ones. Most likely Alice is a disciplined developer, and she will realize she has to deploy again so the artifact contains both Alice’s and Bob’s features and is consistent with the state of the repository. But this is still a manual, human (and hence error-prone) process.
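To make the overwrite concrete, here is a toy simulation that uses plain files in place of mvn deploy and the artifact repository. Everything here is a stand-in; the only real Maven behavior being modeled is that a SNAPSHOT slot is last-writer-wins:

```shell
# Toy model of a SNAPSHOT repository: one slot per snapshot version,
# so every deploy overwrites the last one. Files stand in for artifacts.
mkdir -p repo

# Bob deploys from a workspace containing only his feature:
echo "bob-feature" > repo/module1-1.0.0-SNAPSHOT.txt

# Alice deploys from a stale workspace containing only her feature:
echo "alice-feature" > repo/module1-1.0.0-SNAPSHOT.txt

# Bob's feature is now gone from the published artifact:
cat repo/module1-1.0.0-SNAPSHOT.txt
```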

If we leave the deployment step to a machine (i.e. the CI tool), we can guarantee that there’s only one workspace off of which the artifact is packaged, making it consistent across the board. It doesn’t have to be anything fancy (although the Gradle release plugin and Maven release plugin are very helpful), as long as there’s a unique way for artifacts to be created.

One objection to this approach I once heard was: “But why are you trying to make it harder for developers to deploy?” Well, yeah, it certainly becomes a barrier, but the reason is pretty much the same as why you lock your front door or put a password on your computer: it’s a necessary hurdle that avoids bigger risks.

Most SCMs today allow you to add post-commit hooks so that a deployment can be kicked off every time there’s a change, making it even simpler to publish artifacts since it becomes a hands-off process.
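As a sketch, a server-side git post-receive hook could ping the CI server on every push to the main branch. Git feeds the hook one "old new refname" line per updated ref on stdin; the CI job URL and branch name below are hypothetical placeholders:

```shell
# Sketch of a git post-receive hook body. The CI URL is a placeholder;
# the real hook would live in hooks/post-receive on the server.
post_receive() {
  while read oldrev newrev refname; do
    if [ "$refname" = "refs/heads/master" ]; then
      echo "triggering CI build for $newrev"
      # curl -fs "https://ci.example.com/job/module1/build?token=..."
    fi
  done
}
```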

Exposing your build system API

Nowadays there are multiple CI tools: Jenkins/Hudson, Bamboo, and TeamCity, among others. Some are free, some are licensed, and they all offer different sets of features. Any of these tools is a great way for software development organizations to quickly jump on the Continuous Integration/Continuous Delivery train.

This is what the typical first project is organized as far as building and deploying goes:

[Diagram: one project in source control → CI tool → single build plan → build scripts]

Your project code lives in source control and is checked out by your CI tool via a single build plan, which executes a set of build scripts that may be embedded in the build plan, checked into a separate source control repository, or saved in some other way.

This works fine until you have to create a second project; then, the chart looks like this:

[Diagram: two projects → CI tool → two duplicated build plans → shared build scripts]

Two projects, one CI tool, two build plans that are most likely copies of each other, and both using the same set of scripts. One important thing to notice here is that the build plans start out exactly alike, but they will fall out of sync as soon as the needs of one of them change. Multiple projects lead to build plan duplication, and as the number of projects grows, the mapping of projects to build plans gets uglier. Sometimes even the build scripts get duplicated:

[Diagram: multiple projects with duplicated build plans and duplicated build scripts]

This is far from ideal.

How can we tackle this with an engineering approach? One thing comes to mind: what if we create a domain-specific language layer on top of our build system? This would expose an API of “verbs” or actions that our build scripts can execute, such as package, build, test, deploy, etc. Then the CI tool would only interact with this API. It would look like this:

[Diagram: CI tool calling a build-system API layer that wraps the build scripts]
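A minimal sketch of such a verb layer follows. The verbs come from the text above; the echoed actions stand in for real mvn/gradle/ant invocations, which would be the only thing you’d swap per project:

```shell
# Sketch of a build-system API: the CI tool only ever calls run_verb,
# and the mapping from verb to real tool command lives here, versioned
# in source control like any other code.
run_verb() {
  case "$1" in
    build)   echo "compiling sources" ;;    # e.g. mvn compile
    test)    echo "running test suite" ;;   # e.g. mvn test
    package) echo "packaging artifact" ;;   # e.g. mvn package
    deploy)  echo "publishing artifact" ;;  # e.g. mvn deploy
    *)       echo "unknown verb: $1" >&2; return 1 ;;
  esac
}
```

Each build plan in the CI tool then shrinks to a single call like `run_verb deploy`, which is what makes the plans thin enough that duplication stops mattering.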

We can even take this idea one step further. What if we package this, call it Build System 1.0 (or whatever quirky name you can think of), and publish it to the artifact repository? Then we can incorporate it into our build plans such that all they do is “press the buttons” of the actions exposed.

This build system package can then be bootstrapped into the project. The details will vary depending on which framework is being used (Maven, Ant, Gradle) and in some cases might be tricky, but the concept is the same. This step is not absolutely necessary and can be skipped if it becomes too complicated to implement.

At this point our map looks similar but with some key differences:

[Diagram: multiple projects, thin build plans invoking the versioned build-system API]

We still have multiple projects, but the CI tool becomes a mere button-presser, a “dumb” broker for the actions being invoked. We also still have multiple build plans, but they are thinner, and we don’t care if they fall out of sync, because they’ll be small enough for the differences to be ignored. More importantly, the build plans interact with our brand-new API. And since all the logic is encapsulated in one place, with its history tracked by the SCM, we consolidate the knowledge of our build process.

In this sense, the build system becomes much like a software module:

  • It has an API.
  • It’s extensible.
  • It’s releasable.
  • It can be open sourced (either internally, or externally if it’s generic enough).

On top of that, there’s an economic benefit. By reducing your build tool to an on-demand button-presser, there’s no need to spend big bucks on tools that will execute the exact same actions as the free ones.

You’re welcome.

Continuous Integration vs Continuous Delivery vs Continuous Deployment

The word continuous is now used to describe several things in the industry: integration, delivery, deployment, experimentation, monitoring, improvement. There’s nothing wrong with that, but I want to define what I understand by the first three.

I see three degrees of continuity in the world of release engineering.

  • Continuous Integration: The easiest of the three. Set up Jenkins (or whatever you use) to check out the source, compile, and run the tests with every change or periodically. Add a dashboard or push the results via email.
  • Continuous Delivery: This doesn’t necessarily mean deploying with every change; it means being able to. Deploying constantly may not be feasible for business reasons, but the application should be releasable at any point. Sadly, in some cases, the only time the software is releasable is when it’s actually released.
  • Continuous Deployment: Deploying every time a change is made to the code base. Yes, this sounds almost impossible, and for a large number of organizations it will never happen, due to any number of internal and external constraints. However, in the words of Bruce Lee: “A goal is not always meant to be reached, it often serves simply as something to aim at”. There are a number of best practices and good habits that must be followed to reach this stage of maturity, but mastering even some of them is a step in the right direction.
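One way to picture the three degrees: they share the same pipeline and differ only in how far down it every change travels automatically. A toy sketch, with placeholder stage names:

```shell
# Toy pipeline: continuous integration stops after "test"; continuous
# delivery keeps "release" runnable at any time; continuous deployment
# runs every stage on every change. Stage names are placeholders.
pipeline() {
  last_stage="$1"
  for stage in build test release deploy; do
    echo "stage: $stage"
    if [ "$stage" = "$last_stage" ]; then
      break
    fi
  done
}

pipeline test    # what a CI-only setup runs on every change
pipeline deploy  # what continuous deployment runs on every change
```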

Day 1

You walk into your new job, day 1. You start poking around and realize that there’s a reason why they hired you (or at least why they hired a release engineer at all). Some solutions are over-engineered while others are just patches, people ask you what exactly you mean by “deploying”, and you look around for the CI dashboard only to find none. At this point you’re probably either very overwhelmed or very excited.

In any case, you will have to start somewhere, but the question is: where? The less-than-ideal branching strategy? The lack of understanding of what a SNAPSHOT is? The absence of Jenkins, Bamboo, TeamCity, or whatever your CI of choice is?

The dichotomy always seems to be: grab the low-hanging fruit or tackle the biggest pain? Get a quick win or plan for long-term dividends? Assuming you have a certain degree of freedom, the best choice is probably getting something done as fast as possible and establishing yourself as the authority in this area. Because that’s what you are (remember questioning why they hired you?).

This is the first post of (hopefully) many in which I will try to document my achievements, frustrations, and pet peeves in the world of release engineering. With luck, these posts will start valuable conversations and help connect people interested in the field.

Anyway, I wanted to actually document something, so here are a couple of lists of things you may relate to. I will expand on some of these points in future posts.

Pet peeves:

  • Calling it a build instead of a release, a deployment, or a push. Well, actually, the pet peeve is not calling it a build; it’s the fact that it actually is a build. That is, the source is compiled and/or packaged every time it is pushed to a different environment.
  • Along the same lines of the previous point: calling things other than what they are. Mercurial manages repositories, Maven deals with projects and modules, Jenkins executes build plans. It makes for much clearer conversations.
  • Packages versioned 1.0.0-SNAPSHOT forever. This either indicates a lack of understanding of what a SNAPSHOT is or the fact that you wanted to put a version on something that doesn’t really need one.


  • No tests. I think we’ve all been there. You search src/test only to realize that there are no tests. No JUnit, no Jasmine, no Selenium. Nothing. You think: “Oh, maybe they’re in a different source control module”; nope. “How are you assessing the quality of the application?” “Well, QA executes the tests manually.” Oh boy.
  • Writing scripts to automate things. Ok, this is not really a bad thing, unless the reason for automating is just to save time on something that’s painful to do every time. Well, guess what? What you’re really doing is automating pain. This strategy usually masks issues that should be addressed with a sustainable solution rather than a temporary patch.
  • Versioning a web application. Version numbers are useful for many things, but not for an application over which users have no control. When was the last time you decided to upgrade Facebook to the latest version? Throw that version number out the window; you’re never going to need it for backwards compatibility or for troubleshooting, because those versions will never exist again.

Quick wins:

  • Setting up a CI server is easy. You can get pretty much any tool to check out and compile all the source control modules developers use. Even if there are no tests to execute, at least you make sure developers get something that compiles and can be worked on.
  • Lock down the artifact repository. Don’t let developers deploy from their machines. Their local workspaces usually differ considerably from each other, which causes inconsistencies in the artifacts produced, especially when dealing with SNAPSHOTs. Have the CI server (perhaps the one set up in the previous point) be the only entity that can deploy artifacts. Not even you can.
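The real lockdown belongs on the repository server (credentials that only the CI server holds), but a cheap client-side guard in the deploy step helps too. This sketch assumes the convention, common but not universal, that CI tools export CI=true:

```shell
# Sketch: only publish artifacts when running on the CI server.
# The CI=true environment-variable convention is an assumption here;
# check what your particular CI tool exports.
guarded_deploy() {
  if [ "${CI:-}" != "true" ]; then
    echo "refusing to deploy: not running on the CI server" >&2
    return 1
  fi
  echo "publishing artifact from the CI workspace"  # e.g. mvn deploy
}
```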