Why Continuous Integration Is Important
Many reasons are cited for why continuous integration is necessary, but none is so important, so essential, as trust.
Let me explain. Veteran developer Robert “Uncle Bob” Martin says that one of the core failings of modern software development is the breakdown of trust between developers and the rest of the business, a breakdown caused by software developers promising deadlines they can’t deliver on.
Everything starts out fine, with management asking the developers for the amount of time it will take to implement a feature. The developers provide an answer, and management takes them at their word.
Inevitably, one of two situations results: the deadline goes by yet the feature isn’t finished, or the feature is delivered on time but is faulty, creates new bugs, or both.
Trust between developers and management, and everyone else in the business, gradually breaks down, resulting in a combative Us-versus-Them dynamic. Ultimately, management learns not to take developers’ answers at face value.
The Fundamental Reason for Continuous Integration
Whether you’re a software developer, a manager, a business analyst, or a marketing person, I’m confident you can relate to Uncle Bob’s reflection on the modern dynamic between software developers and everyone else in an organization.
How often has the development team at your company failed to deliver on time?
How often have promises been made, only to be broken? How common is it for software projects, in your experience, to run over either time or budget? Have you ever stopped to think that this shouldn’t be, needn’t be, normal?
Maybe not. It’s so common that you may not even see it as abnormal.
Back in my university days, stories of bugs in software were commonplace. We joked about them in programming classes, maligning those responsible, saying how inadequate a developer they must be.
All the while a fear lingered, a fear that, in time, I’d be that developer. I believed it was my fate, and the fate of all developers: despite our best efforts, that was just how software projects worked.
We could never really know, with any certainty, exactly how long a bug would take to fix, a feature would take to implement, or a project would take to deliver. We could never catch every bug, or stop the seemingly random ones from occurring.
Sure, we had lots of theories, principles, tools, and practices. But in the end, the same result would occur. We’d try our best, attempt to learn from the experience, and promise to do better next time.
I went through this rollercoaster year after year, never believing the ride would end. That is until a wise friend introduced me to a new philosophy: continuous integration.
What Is Continuous Integration?
Veteran software developer Martin Fowler defines it as:
Continuous Integration is a software development practice where members of a team integrate their work frequently; usually, each person integrates at least daily… Each integration is verified by an automated build… to detect integration errors as quickly as possible.
That’s a striking statement, jarringly at odds with the stereotypical tale I’ve laid out. Continuous integration suggests an entirely different approach to development.
Consider this. An average developer may view development as composed of the following steps:
- Determine what needs to be created or fixed
- Implement the feature or fix
- Manually check that their feature works or fixes the bug
- The job is complete. Move on.
Can you spot the fundamental flaw in this kind of thinking? When developers work in this way, and it’s more common than you might think, their software isn’t verifiable. Testing manually only considers the functionality of one specific aspect of the system and only at that point in time.
What developer hasn’t seen this happen more than once:
- Developer makes a change to a system
- Developer fires up the browser and reviews the part of the application pertaining to their changes
- Developer sees that it works for them and closes the ticket
- Developer moves on to their next task
This is not to devalue anyone’s skill. We take the requirements we’re given and we implement them. We then use them to create a test scenario that verifies whether they work or not. We see that the changes meet those requirements, so we move on. Makes sense.
But consider the following questions:
- What happens when someone else changes another component of the system, a component on which this new feature depends?
- Did the developer change something on which other code depends?
- Have they considered every use case?
- As the system continues to change, will their manual test still be applicable?
- Will they keep coming back to manually test all their previous changes as they, and others, make future changes?
- If they manually check everything they’ve ever modified or implemented:
  - How long would it take?
  - Would they repeat the test the same way every time?
Do you see where I’m heading? This approach can never work long term. It will never scale and can never produce the same outcome every time. Consequently, it will be abandoned or only implemented in a haphazard manner. It’s okay for a superficial review but nothing more.
For each new feature added or bug fixed, a new bug appears. The developers are confident they’ve checked everything and that it couldn’t have been them. But could it have been?
We Need a Reliable Process
We’re all human, and we’re all subject to a wide variety of stresses: financial, emotional, health, relationships, time pressure, and the list goes on. Each of these stresses keeps us from being at our most effective.
But let’s assume that we were on top of our game. Think for a moment about just how complex and sophisticated modern applications are — how many moving parts, how many inter-related systems they have.
Can any one person, no matter how proficient they are in software development, be expected or able to keep abreast of everything? Impossible.
By developing like this, you’re flying blind. Why? Because there are so many unknowns, so many questions, and so many factors you can’t control. The possibility of being tripped up by any one of them is extraordinarily high.
With continuous integration at the core of your approach, the unknown becomes known, little by little. It holds the promise of providing proper application validation.
But what is it exactly? Continuous integration is composed of 11 essential tasks. These are:
- Maintain a single source repository
- Make your build self-testing
- Automate the build
- Everyone commits to the mainline every day
- Every commit should build the mainline on an integration machine
- Fix broken builds immediately
- Keep the build fast
- Make it transparent
- Test in a clone of the production environment
- Make it easy for anyone to get the latest executable
- Automate deployment
Of those eleven, we’ll look in depth at the five most critical tasks.
Make your build self-testing
A self-testing build is the kernel of continuous integration. The build has tests that validate the software. No matter whether you use BDD (Behavior-Driven Development), TDD (Test-Driven Development), or any of the other xDDs, testing needs to be front and center in the build process.
What’s more, the tests need to be comprehensive. A full discussion of how to test is beyond the scope of this article. Suffice it to say, self-testing builds are repeatable, scalable, and maintainable. They’ll find issues in the code long before you — or your clients and customers — will.
And it won’t matter whether it’s the first run, the 10th, or the 10,000th: the tests will run the same way they have every other time and check the same things. More importantly, they’ll flag issues you’d likely never have thought of, impacts you’d never have considered in seemingly unrelated modules.
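To make that concrete, here’s a minimal sketch, in Python with pytest, of the kind of check a self-testing build runs on every integration. The `calculate_order_total` function is a hypothetical stand-in for your own application code; the point is that checks like these run automatically with every build rather than by hand in a browser.

```python
# test_orders.py -- run automatically as part of the build, e.g. with: pytest
# A hypothetical example; calculate_order_total stands in for real application code.
from decimal import Decimal


def calculate_order_total(prices, discount=Decimal("0")):
    """Sum the item prices and apply a fractional discount (hypothetical app code)."""
    subtotal = sum(prices, Decimal("0"))
    return subtotal * (Decimal("1") - discount)


def test_total_without_discount():
    assert calculate_order_total([Decimal("9.99"), Decimal("5.01")]) == Decimal("15.00")


def test_total_with_discount():
    assert calculate_order_total([Decimal("100")], discount=Decimal("0.10")) == Decimal("90")


def test_empty_order_is_zero():
    # An edge case a quick manual check in the browser would likely never cover.
    assert calculate_order_total([]) == Decimal("0")
```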
Automate the build
Automating the build builds on the fact that it’s self-testing. You have the tests in place; now make sure they run every time. This is a natural complement to validating the software.
You could have a very simple piece of software that requires nothing more than syncing the changed files with a remote server. On the other hand, you could have a rather intricate piece of software that needs a file sync, cache flush, database migration, and a remote service reset.
Regardless of intricacy, these steps may need to happen in a precise order and may require several forms of authentication along the way. In the first example, if the build isn’t automated, an essential file (say, one containing a database migration) may be overlooked. In the second, the cache may not be flushed, rendering the change moot.
Either way, no matter how simple or sophisticated, the process is the same every time with an automated build. Computers are designed for automation. Humans aren’t. So put computers to work, doing the things they do best. Leave people to do the higher order tasks.
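As a rough sketch rather than a prescription, here’s what the more intricate example above might look like once it’s scripted in Python. The hosts, paths, and the `app-cache-clear` and `app-db-migrate` commands are placeholders; what matters is that the order of the steps is encoded once and then executed identically every time.

```python
#!/usr/bin/env python3
"""A hedged sketch of an automated deployment: the same four steps, in the
same order, every time. All hosts, paths, and commands are hypothetical."""

import subprocess
import sys

# The steps from the example above: file sync, cache flush,
# database migration, remote service reset -- in a precise order.
STEPS = [
    ["rsync", "-az", "--delete", "./build/", "deploy@app.example.com:/var/www/app/"],
    ["ssh", "deploy@app.example.com", "app-cache-clear"],   # hypothetical command
    ["ssh", "deploy@app.example.com", "app-db-migrate"],    # hypothetical command
    ["ssh", "deploy@app.example.com", "sudo systemctl restart app"],
]


def deploy():
    for step in STEPS:
        print("Running:", " ".join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            # Stop at the first failure so a half-finished deploy never goes unnoticed.
            sys.exit(f"Step failed ({result.returncode}): {' '.join(step)}")
    print("Deployment complete.")


if __name__ == "__main__":
    deploy()
```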
Make it transparent
The software’s tested before it’s deployed. The deployment happens the same way every time. Now we make the process transparent. This has several benefits:
- Problems are visible almost immediately.
- People can be held accountable, whether for bugs that break the build or for services that change their API or suffer an outage.
- Guesswork is reduced.
- Accountability and uptime are increased.
As a result, the total cost of maintaining and extending the software is reduced. We discover problems sooner, it’s easier to trace them to their source, and it’s simpler to address them.
Less time needs to be spent fixing problems. More time can be devoted to building the software or improving the continuous integration process.
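One small, low-tech way to get that visibility, sketched below under the assumption that your team chat accepts a simple JSON webhook (the URL and payload format are placeholders): have the build post its result somewhere everyone can see it.

```python
#!/usr/bin/env python3
"""A minimal sketch of build transparency: post the build result to a team
chat webhook so nobody has to ask whether the build is green. The webhook
URL and payload format are assumptions; adapt them to your own tooling."""

import json
import os
import sys
import urllib.request

# Placeholder endpoint; a real team would point this at their chat tool.
WEBHOOK_URL = os.environ.get("BUILD_WEBHOOK_URL", "https://chat.example.com/hooks/build")


def notify(commit: str, passed: bool) -> None:
    status = "passed" if passed else "FAILED"
    payload = {"text": f"Build for commit {commit} {status}."}
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)


if __name__ == "__main__":
    # Usage: notify_build.py <commit-sha> <pass|fail>
    commit_sha, outcome = sys.argv[1], sys.argv[2]
    notify(commit_sha, outcome == "pass")
```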
Test in a clone of the production environment
This highlights a challenge that has plagued web-based applications for some time. Speaking from personal experience, whether developers work on Linux, OS X, or Windows, they usually host on Linux.
It’s not uncommon, therefore, for the development environment to be markedly different from the production environment. Even when we develop on the same platform, we may not consider library versions, the existence of extensions, or the extensions’ versions, which can cause problems. So many things can go wrong after the application’s deployed. The tests say that it works, the build completed just like last time, yet the software unexpectedly crashes.
So it’s essential that the development and testing environments are as similar to production as possible. If they aren’t, devs risk wasting time chasing down bugs, likely in a haphazard manner, or writing extra code purely to deal with the differences between environments. When the environments match, that defensive code isn’t necessary.
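Containers and configuration management do most of the heavy lifting here, but even a small automated parity check helps. The sketch below assumes a `requirements.txt` that pins the versions running in production; it compares the packages installed where the tests run against that manifest and fails the build on any drift.

```python
#!/usr/bin/env python3
"""A small sketch of an environment parity check: compare the packages
installed in the test environment against a pinned manifest
(requirements.txt is assumed to mirror production, e.g. "requests==2.31.0").
Run it early in the build so drift fails fast instead of surprising you
after deployment."""

import sys
from importlib import metadata

MANIFEST = "requirements.txt"  # assumed to describe the production environment


def check_parity() -> int:
    mismatches = []
    with open(MANIFEST) as handle:
        for line in handle:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue  # skip comments and unpinned entries
            name, expected = line.split("==", 1)
            try:
                installed = metadata.version(name)
            except metadata.PackageNotFoundError:
                mismatches.append(f"{name}: expected {expected}, not installed")
                continue
            if installed != expected:
                mismatches.append(f"{name}: expected {expected}, found {installed}")
    for problem in mismatches:
        print("Environment drift -", problem)
    return 1 if mismatches else 0


if __name__ == "__main__":
    sys.exit(check_parity())
```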
Make it easy for anyone to get the latest executable
No matter whether it’s a senior or junior developer, whether it’s a long-term employee or someone brand new to the company, getting a working build of the latest copy of the application or service should be child’s play. With continuous integration, it is.
The process for deploying to testing, staging, production, or to a new developer’s computer is clearly laid out. New developers can become productive more quickly, and new servers can be provisioned in less time.
I appreciate that architecture plays a key role, of course. But a proper continuous integration process should mitigate most of these issues, leading to a reduced provisioning time if nothing else.
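What “child’s play” means in practice varies, but the goal is a single, documented step. The sketch below assumes the CI server publishes the newest passing build at a stable URL (the URL and filename are placeholders); fetching the latest executable should be no harder than this.

```python
#!/usr/bin/env python3
"""A sketch of 'get the latest executable' as a single, obvious step.
It assumes the CI server publishes the newest passing build artifact at a
stable, well-known URL; the URL and filename below are placeholders."""

import shutil
import urllib.request

LATEST_BUILD_URL = "https://ci.example.com/projects/app/builds/latest/app.tar.gz"  # placeholder
DESTINATION = "app-latest.tar.gz"


def fetch_latest() -> None:
    # Stream the artifact straight to disk so large builds don't sit in memory.
    with urllib.request.urlopen(LATEST_BUILD_URL) as response, open(DESTINATION, "wb") as out:
        shutil.copyfileobj(response, out)
    print(f"Latest build saved to {DESTINATION}")


if __name__ == "__main__":
    fetch_latest()
```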
Conclusion
There’s so much more that I could write, and truth be told, I’d love to. But let’s not get overwhelmed. Continuous integration helps keep us honest. It validates our assumptions and our changes. And it’s a process we can use every moment of our work day to ensure that the software we’re building is improving and not decaying.
As a result of all that, continuous integration saves us time — our most expensive and precious resource.
Reference: Why Continuous Integration Is Important from our WCG partner Matthew Setter at the Codeship Blog.