The Toilet Paper

Continuous integration practices (Taylor’s version)

This study suggests that continuous integration practices are kind of like Scrum, in the sense that everyone does something different.

[Figure: a pissed-off Jenkins with laser eyes]
People don't always CI to eye when it comes to the implementation of best practices

Continuous integration (CI) is the practice of continuously merging and testing small changes into a main version of a codebase, so that each developer can work with an up-to-date version of the project. It comes with tools and practices that are designed to improve code quality and reduce the cost of integrating components that are developed by different teams or individuals.

Practices make perfect

The ten core CI practices were first described by Martin Fowler. I’ll briefly summarise them here before we move on to the part that discusses the actual study:

  1. Maintain a single source repository: each software project should be entirely contained within a single repository. This means that you should not need any other repositories in order to build, test, or run the project. The repository should also have a mainline branch (e.g. main or master), which everyone should work off.

  2. Automate the build: after cloning the repository, a single command should be enough to build the project (a minimal sketch of such a build script follows this list).

  3. Make your build self-testing: the project should include an automated test suite that can be executed as part of builds.

  4. Everyone commits to the mainline every day: changes should be integrated into the mainline every day so that merge conflicts become less likely. And when they do occur, they’re easier to resolve.

  5. Every commit should build the mainline on an integration machine: each commit to the mainline should trigger a build on a dedicated integration machine. A commit is not considered done until that build succeeds.

  6. Keep the build fast: builds should provide rapid feedback, in order to decrease context switching between tasks and to mitigate potential work blockages.

  7. Test in a clone of the production environment: running tests in a production-like environment allows developers to catch bugs before a new version of the system is released into the wild.

  8. Make it easy for anyone to get the latest executable: early and easy access to the built product allows users to give feedback before deployment, and enables developers to try new features and stay up-to-date with the project.

  9. Ensure that system state and changes are visible: dashboards and other types of interfaces should be used to show the system’s current state and the changes that have been made to it.

  10. Automate deployment: deployments can be made safer and faster by making it easy to deploy to different environments.
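
For practices 2 and 3, a small example makes the idea concrete. Below is a minimal sketch of a single-command, self-testing build script; the file name build.py, the pytest test suite, and the project layout are assumptions of mine, not something the practices prescribe.

```python
#!/usr/bin/env python3
"""build.py: a hypothetical single-command build (practices 2 and 3).

Assumes a Python project with a pytest test suite under ./tests;
adapt the steps to whatever your project actually needs.
"""
import subprocess
import sys

# Each step is one shell command; the build fails fast on the first error.
STEPS = [
    ["python", "-m", "pip", "install", "-e", "."],  # build/install the project
    ["python", "-m", "pytest", "tests"],            # make the build self-testing
]

def main() -> int:
    for step in STEPS:
        print("$", " ".join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            return result.returncode  # a failing step fails the whole build
    print("Build OK")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

After cloning the repository, python build.py is the one command a newcomer needs, and a CI server can run the same script on every commit to the mainline (practice 5).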

Practices in practice

The ten core CI practices have been widely adopted in the software industry. Better yet: research has shown that at least some of them really do improve software quality. But little is known about how CI practices are implemented and how they are perceived (both positively and negatively) by developers.

The authors of this paper therefore conducted a multiple-case study at three (presumably Canadian) software development organisations, where they gathered data using a combination of interviews with stakeholders and an analysis of development activity logs and task management data dumps.

They found that all three organisations make use of the pull request model, where branches are eventually merged into the mainline after approval from team members. Automated builds, tests, and deployments occur at various stages of this workflow. This is where the similarities end.

Differences between organisations

The three organisations adopted continuous integration for different reasons and all have implemented it in their own way. Four CI practices in particular differ greatly between the three organisations:

  • Maintain a single source repository: one of the organisations works on a single large project and thus stores most of its code in a single repository. Meanwhile, another organisation has opted for a microservice-like project separation, as it doesn’t see the point of using monorepos.

  • Make your build self-testing: all three organisations run unit tests in their pull request builds and do some form of manual UI testing. End-to-end tests are run in all but one organisation, which has dropped them due to their flakiness. Moreover, two organisations prioritise fast builds, while one prefers thorough testing over shorter builds.

  • Keep the build fast: build times range from one minute to a few hours, although most builds tend to take at most 10 or 20 minutes. Builds for projects that are data-centric or require thorough testing generally take longer.

  • Automate deployment: deployments are always triggered manually, but in different ways. In two organisations, deployments can only be triggered by trusted team members; in the third, it’s the responsibility of whoever authored the change, as this likely fosters a culture of ownership (both policies are sketched in code after this list).

    Deployment frequency also differs; the first two organisations deploy frequently, while the third schedules deployments once a day.
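
To make the difference in deployment permissions concrete, here’s a hedged sketch of the kind of gate a pipeline could run before a manual deployment. The may_deploy function, the policy names, and the TRUSTED_DEPLOYERS set are all hypothetical; the paper doesn’t prescribe any implementation.

```python
# Hypothetical deployment gate illustrating the two policies described above.
# All names and policy strings are made up for illustration.

TRUSTED_DEPLOYERS = {"alice", "bob"}  # e.g. release managers

def may_deploy(triggering_user: str, change_author: str, policy: str) -> bool:
    """Return True if triggering_user may deploy a change by change_author."""
    if policy == "trusted-members":
        # First two organisations: only trusted team members may deploy.
        return triggering_user in TRUSTED_DEPLOYERS
    if policy == "author-owns-it":
        # Third organisation: whoever authored the change deploys it.
        return triggering_user == change_author
    raise ValueError(f"unknown policy: {policy}")

assert may_deploy("alice", "carol", "trusted-members")
assert not may_deploy("carol", "carol", "trusted-members")
assert may_deploy("carol", "carol", "author-owns-it")
```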

Although these findings give us some idea of how different types of organisations implement continuous integration, they are not very interesting on their own.

What causes these differences?

The large amount of variation in how continuous integration is implemented can be explained using three themes.

The first is project context, which determines how tools and practices should be prioritised and what constraints they may bring:

  • The type of project influences how organisations prioritise and value different practices. A data-centric (or safety-critical) system will likely have longer build cycles than other types of systems.

  • The testing strategy determines the relative weight that teams place on unit tests, integration tests, regression tests, and manual tests. Some types of tests are cheap, others may require a lot of time or effort to maintain and execute. Teams will only use a type of test if it adds sufficient value.

  • Differences in CI infrastructure have a direct effect on how fast and cost-efficiently builds can run.

The second theme is practice perception. Each organisation implements practices in its own way (microservices! monorepos!) – not for the sake of it, but because it believes that its chosen way provides the most value. While that’s fine, many CI guidelines also recommend the use of concrete measures, like cycle time, to objectively assess the value of CI practices. Curiously, none of the three organisations had implemented such measures.
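
To illustrate what such a measure could look like: cycle time is commonly defined as the time from the moment work on a change starts (or a pull request is opened) until the change is merged or deployed. The sketch below uses made-up pull request records and assumed field names; it shows one plausible way to compute a median cycle time, not the paper’s method.

```python
from datetime import datetime
from statistics import median

# Hypothetical pull request records; in practice these timestamps would
# come from your Git host's API.
pull_requests = [
    {"opened": datetime(2023, 5, 1, 9, 0), "merged": datetime(2023, 5, 1, 16, 30)},
    {"opened": datetime(2023, 5, 2, 10, 0), "merged": datetime(2023, 5, 4, 11, 0)},
    {"opened": datetime(2023, 5, 3, 14, 0), "merged": datetime(2023, 5, 3, 15, 45)},
]

# Cycle time here: hours from "pull request opened" to "pull request merged".
cycle_times = [
    (pr["merged"] - pr["opened"]).total_seconds() / 3600 for pr in pull_requests
]
print(f"Median cycle time: {median(cycle_times):.1f} hours")  # 7.5 hours
```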

The third and final theme is process constraints. One of the major benefits of using continuous integration is that it makes feedback cycles shorter and more frequent. However, some CI practices can actually have the opposite effect, e.g. when:

  • new pull requests must wait until older pull requests have finished merging;
  • the use of feature flags (rather than long-lived branches) introduces more technical debt and maintenance overhead (sketched in code after this list); or
  • developers spend a lot of their time waiting for results from various types of tests.
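
The feature flag point is easy to see in code. In the hypothetical sketch below, a single flag already doubles the number of paths through a checkout function; both paths have to be kept working and tested until the flag is retired, which is exactly the maintenance overhead mentioned above.

```python
# Hypothetical feature-flag sketch: each flag adds a branch that must be
# tested in both states and eventually deleted again.
FLAGS = {"new_checkout": False}  # toggled via configuration, not a branch

def legacy_checkout_flow(cart: list) -> float:
    return sum(cart)                  # the old behaviour, kept alive for now

def new_checkout_flow(cart: list) -> float:
    return round(sum(cart) * 0.9, 2)  # the new behaviour being rolled out

def checkout(cart: list) -> float:
    # Both paths live in the mainline until the flag is retired.
    if FLAGS["new_checkout"]:
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)

print(checkout([10.0, 5.0]))  # 15.0 while the flag is off
```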

All in all, we can conclude that there’s no One True Way to implement continuous integration; it all depends on context!

Summary

  1. There is no single good way of implementing continuous integration practices (and that’s okay)

  2. Organisations should take project context, practice perception, and process constraints into account when deciding how to implement CI