I saw a talk by Dave Farley a while ago in which he said that everything should be under source control.
It makes sense; with source control you can go back and forward to any version of something you want.
Made a mistake? No problem, roll back to the last known good code.
Need an audit trail for changes made? Just check the commit/check-in log.
Need a single source of truth? Done, no more copying and sharing property files.
Let’s look at what makes up a production system and how this idea applies.
Application code: the first thing to be brought under version control.
Configuration: this followed soon after (not the passwords though!).
Infrastructure: a more recent addition, now that we’ve got tools like Terraform, Helm and CloudFormation that let you define your servers, networks and so on as code, and automate building them.
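To make that concrete, here’s a minimal Terraform sketch (the AMI ID, instance type and tags are hypothetical, purely for illustration) defining a single server. Because it’s just text, it can be committed, reviewed and rolled back like any other code:

```hcl
# Hypothetical sketch: one web server defined as code.
# The AMI ID, instance type and tag values are illustrative, not real.
resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
```

Running `terraform apply` against this file builds the server; changing the file and applying again updates it, with the full history sitting in source control.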
Infrastructure-as-code is an interesting area that blurs the traditional line between the software development team and operations. It used to be there’d be one team to write the software and then they’d throw it over the wall to a separate operations team to deploy it.
Ever heard of Conway’s Law? The systems and processes in your company will tend to mirror the structure of the organisation itself.
Having separate teams introduces a subtle communication barrier. Naturally this leads to mistakes, misunderstandings and frequent rollbacks. Eventually the dev team will view ops as conservative killjoys, and ops will see the dev team as cavalier cowboys.
The solution, as many companies are now finding, is to blend the teams. This is particularly important when you start defining your infrastructure in code. The tendency is to give that work to the operations team because… well, duh… it’s for defining their infrastructure, right?
The problem is that the operations team are now writing code, something they won’t have extensive experience with. There’s an art to writing code that is organised, understandable, maintainable and testable. With the best will in the world, someone without enough software development experience won’t be able to do that and you’ll end up with infrastructure code that’s difficult to use, understand and modify.
On the other hand, a software developer won’t have the intimate knowledge of building and configuring the servers, databases, networks and routers that make up a production system.
So you blend them and get devops. Nothing new there. You’re automating existing development and operations processes to make them faster and more consistent. Click a button and your new code/configuration/server setting is out to production.
But what if you extend that idea of automation a bit further? Automate the deployment of your pipelines themselves. We have pipelines as code with the Jenkins DSL, but there’s an initialisation problem with the Jenkins servers themselves that run the pipelines: a mess of plugins and config is needed to get them up and running, sadly not amenable to source control. Perhaps GoCD would work better?
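For reference, this is roughly what pipelines-as-code looks like: a declarative Jenkinsfile kept in the repository alongside the application (the stage names and shell commands below are illustrative, not from any real project). The pipeline definition is versioned; it’s the server executing it that isn’t:

```groovy
// Hypothetical sketch: a declarative Jenkinsfile committed to the repo.
// Stage names and commands are illustrative only.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew build'
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy.sh'
            }
        }
    }
}
```

The file can be branched, reviewed and rolled back, but the Jenkins server that interprets it still needs its plugins and global configuration set up separately, which is exactly the gap described above.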
I’d love to see something that could initialise part or all of the whole system: pipelines, infrastructure, application builds, deployment, from scratch based off a repository. I’ve seen it mentioned as Repository Driven Development in a couple of places online but it doesn’t seem to have filtered into mainstream use yet, at least in the Java world where I live.
Granted, your pipelines aren’t going to be changing as often as your application code, but when you do want to make changes, any mistakes impact everything that depends on them, slowing development and blocking releases. The ability to easily deploy pipelines at will from source control lets you create copies to test without affecting anyone else and roll back mistakes, just like any other good software.