CircleCI – a continuous integration and continuous delivery platform – have released the findings from their 2022 State of Software Delivery Report. The report reveals that the most successful software delivery teams are larger, use extensive testing, and prioritise being ready to deploy.
The report’s key findings in terms of teams and culture show that the most successful teams routinely meet the four key benchmarks of success identified by the report: duration, mean time to recovery, success rate, and throughput. These teams prioritise being able to deploy at all times – a key marker of Continuous Delivery – rather than focusing on the number of workflows that are run. Their workflow durations tend to average between 5 and 10 minutes, their success rates are above 90% on the default branch, and recovery from any failure takes less than an hour.
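These benchmarks are concrete enough to sketch as a simple check. The following is an illustrative sketch only, not CircleCI's methodology; the `TeamStats` structure and its field names are assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class TeamStats:
    """Hypothetical aggregate stats for a team's default-branch workflows."""
    avg_duration_minutes: float
    success_rate: float          # fraction of runs that pass, 0.0-1.0
    mean_recovery_hours: float   # average time from failure to next success

def meets_report_benchmarks(stats: TeamStats) -> bool:
    """Check a team against the report's benchmarks: 5-10 minute
    workflows, >90% success on the default branch, and recovery
    from failure in under an hour."""
    return (
        stats.avg_duration_minutes <= 10
        and stats.success_rate > 0.90
        and stats.mean_recovery_hours < 1
    )

print(meets_report_benchmarks(TeamStats(8.5, 0.94, 0.5)))   # True: hits all three
print(meets_report_benchmarks(TeamStats(25.0, 0.80, 3.0)))  # False: misses all three
```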
The report explains how contemporary software is increasingly assembled from publicly available, pre-existing libraries from across the Internet, and the sheer number of components involved in building software creates huge complexity. According to the report, most organisations mitigate this by using CI/CD. CircleCI then analyses an organisation’s success in implementing CI/CD based on four metrics:
Duration – the length of time it takes a workflow to run. Martin Fowler suggests that CI should run in ‘minutes’, and the report shows that the average duration was 12-13 minutes. Tips the report gives to improve this include parallelising tests, using Docker images designed specifically for CI, and using optimal machine size and caching strategies.
Mean Time to Recovery (MTTR) – the average time between a pipeline’s failure and its next success. Ideally, the MTTR would be zero, but CircleCI found that factors causing it to be larger include end-of-year holidays, non-comprehensive test coverage, and opaque error messages from failed runs. Tips to improve this include reducing the duration of the tests, using tooling to rapidly identify CI failures, using meaningful and traceable error reporting in tests, and being able to debug directly on the machine where the workflow failed.
Success Rate – the number of passing runs divided by the total number of runs over a period of time. This may vary depending on the branching strategy used, with some organisations making heavy use of feature branches, on which a lower success rate can reflect innovation or experimentation. The success rate on the default branch, however, should be closely monitored.
Throughput – the average number of workflow runs per day. A lower throughput does not necessarily mean less change, however; the report suggests it can instead indicate larger commit sizes – a trend which studies suggest is counterproductive. It highlights that measuring baseline throughput and monitoring for fluctuations is more effective than aiming for an arbitrary throughput number, or one based on that of others.
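The four metrics above can all be derived from a log of workflow runs. The sketch below shows one plausible way to compute them; the `WorkflowRun` record is an assumption for illustration and does not reflect CircleCI's actual API or data model:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class WorkflowRun:
    """Hypothetical record of a single workflow run (not CircleCI's schema)."""
    started: datetime
    duration: timedelta
    passed: bool

def success_rate(runs: list[WorkflowRun]) -> float:
    """Passing runs divided by total runs."""
    return sum(r.passed for r in runs) / len(runs)

def avg_duration(runs: list[WorkflowRun]) -> timedelta:
    """Mean workflow duration."""
    return sum((r.duration for r in runs), timedelta()) / len(runs)

def throughput(runs: list[WorkflowRun], window_days: int) -> float:
    """Average number of workflow runs per day over the observation window."""
    return len(runs) / window_days

def mean_time_to_recovery(runs: list[WorkflowRun]) -> timedelta:
    """Average time from the start of a failure to the next passing run."""
    ordered = sorted(runs, key=lambda r: r.started)
    recoveries: list[timedelta] = []
    failed_at = None
    for r in ordered:
        if not r.passed and failed_at is None:
            failed_at = r.started                     # pipeline goes red
        elif r.passed and failed_at is not None:
            recoveries.append(r.started - failed_at)  # pipeline is green again
            failed_at = None
    if not recoveries:
        return timedelta()  # never failed, or never recovered: report zero
    return sum(recoveries, timedelta()) / len(recoveries)
```

For example, three runs in one day – pass, fail, pass an hour later – give a success rate of 2/3, an MTTR of one hour, and a throughput of 3 runs/day.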
A further finding is that smaller teams can suffer from a lack of resources to fix broken pipelines, and should therefore prioritise test-driven development (TDD) to prevent bad code reaching production. This is the third year in which CircleCI have published the report, and the numbers show that more teams than ever before are hitting the benchmarks.
The 2022 State of Software Delivery Report is available from CircleCI’s website.