Diving into DevOps
DevOps Intelligence Gathering: What Are We Measuring Here?
- By John K. Waters
There's a business adage that goes, "We manage what we measure" (or some variation thereof) -- an important insight, but one that's easier said than done when it comes to DevOps. In fact, says Tim Buntel, DevOps Advocate at XebiaLabs, it's downright tricky.
"If you think about, say, a manufacturing process, where you have specific outputs -- products like cars or cell phones -- the metrics are fairly straightforward," Buntel said. "But measuring software development and delivery in general can be difficult. Now throw in the large cultural components that are part of the DevOps model. We hear from so many organizations that are making big investments -- adding tools, doing reorganizations, hiring people -- and suddenly they find themselves asking, how do we know if the efforts we're putting into DevOps are paying off? How can we measure and report on those efforts in a way that supports those initiatives?"
Measurements and metrics in DevOps are a subject that gets Buntel's blood pumping. "I love this stuff!" he told me during a recent interview. His current role at XebiaLabs has him working with companies to find efficient ways to measure and report on their software build and delivery processes, as well as to identify the flaws in previous standards for measuring performance.
XebiaLabs was in the DevOps space before it had a label, Buntel said. The company bills its flagship DevOps Platform as "the backbone for DevOps release automation." Its XL Impact tool, released last year, is a goal-based DevOps intelligence solution that combines best practices with historical analysis, machine learning, and data from across an organization's tool chain to show trends, predict outcomes, and recommend actions.
"I think the reason it's exciting and interesting to have these conversations now around DevOps intelligence gathering is that DevOps adoption in the enterprise is full-steam-ahead," Buntel said. "Lots of organizations understand that finding a way to build and deliver their software at scale is essential to their success, no matter what their business is. Now they're wondering how to measure that success."
Buntel advises companies to focus on a combination of global measurements (traditional metrics from across the organization) and outcomes (delivery of software with speed and stability). "If you combine these factors, that's when you start to build a meaningful capability around DevOps measurements," he said.
Buntel shared four key global measurements developed over the past few years by XebiaLabs from customer feedback and the efforts of San Francisco-based DevOps Research and Assessment (DORA):
Deployment Frequency
"A lot of companies ask us, is the outcome we want daily releases or hourly releases?" Buntel said. "And we hear a lot about continuous deployment in this sort of Netflix world. But you don't want to focus on achieving some sort of Nirvana of five-minute releases. Instead, you want to make sure you can deploy software often to production, but only pull the trigger when the business needs it, whether that's every day or once a month. You want to be able to say, we are always able to deploy code to production when it makes sense from a business perspective."
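As a concrete illustration of the idea, deployment frequency can be derived from nothing more than a list of production deploy timestamps. This is a minimal sketch, not anything XebiaLabs-specific; the sample dates and the `deploys_per_week` helper are hypothetical:

```python
from datetime import datetime

# Hypothetical production deploy timestamps pulled from a release log.
deploys = [
    datetime(2018, 6, 1), datetime(2018, 6, 4),
    datetime(2018, 6, 8), datetime(2018, 6, 11),
    datetime(2018, 6, 15), datetime(2018, 6, 29),
]

def deploys_per_week(timestamps):
    """Average number of production deploys per week over the observed span."""
    span_days = (max(timestamps) - min(timestamps)).days
    weeks = max(span_days / 7, 1)  # treat anything under a week as one week
    return len(timestamps) / weeks

print(deploys_per_week(deploys))  # 6 deploys over 4 weeks -> 1.5
```

As Buntel notes, the target number matters less than the trend: the point is confirming you *could* deploy whenever the business needs it.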
Lead Time for Changes
"This is literally a measure of the time from when the developer commits the code to when it gets deployed and is accessible," Buntel explained. "Everyone tends to agree with this one as a measure, because it's typically the one that is the most automated and predictable. In a good DevOps, CI/CD pipeline environment, you are automating as much as possible. If you have a manual process in place that requires a human being to get involved between the code commit and deployment, that's a place where you can automate and add efficiency. This is a nice measure to look at longitudinally."
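The computation itself is simple once commit and deploy timestamps are paired up. A minimal sketch, using hypothetical timestamps and a median (which resists skew from one slow outlier):

```python
from datetime import datetime
from statistics import median

# Hypothetical (commit_time, deploy_time) pairs for changes that reached production.
changes = [
    (datetime(2018, 6, 1, 9, 0),  datetime(2018, 6, 1, 17, 0)),   # 8 hours
    (datetime(2018, 6, 4, 10, 0), datetime(2018, 6, 5, 10, 0)),   # 24 hours
    (datetime(2018, 6, 8, 8, 0),  datetime(2018, 6, 8, 12, 0)),   # 4 hours
]

def median_lead_time_hours(pairs):
    """Median hours from code commit to production deploy."""
    return median((deploy - commit).total_seconds() / 3600
                  for commit, deploy in pairs)

print(median_lead_time_hours(changes))  # median of 8, 24, 4 -> 8.0
```

Tracked over months, a shrinking median lead time is direct evidence that manual hand-offs are being automated away.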
Mean Time to Recovery
"Lots of organizations tell us they look at mean time to failure as a key measurement, but that's a terrible one. It stifles innovation, and it doesn't speak to the magnitude of a failure. It's much more useful in a DevOps culture, in which you're responding quickly to feedback, experimenting, and finding ways to add value, to accept occasional failure as a good thing and focus on how quickly you can bounce back when something breaks or doesn't work. Mean time to recovery is a measure that can be very liberating. If I know that we can recover from a problem quickly, I won't be afraid to try lots of things. That kind of thinking will keep people open and innovative."
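In practice this means averaging the gap between when a failure is detected and when service is restored. The incident records below are hypothetical, and real tooling would pull them from monitoring and incident-management systems:

```python
from datetime import datetime

# Hypothetical incidents: (failure_detected, service_restored).
incidents = [
    (datetime(2018, 6, 2, 14, 0),  datetime(2018, 6, 2, 14, 30)),  # 30 min
    (datetime(2018, 6, 9, 3, 0),   datetime(2018, 6, 9, 5, 0)),    # 120 min
    (datetime(2018, 6, 20, 11, 0), datetime(2018, 6, 20, 11, 45)), # 45 min
]

def mean_time_to_recovery_minutes(events):
    """Mean minutes from failure detection to recovery."""
    minutes = [(restored - detected).total_seconds() / 60
               for detected, restored in events]
    return sum(minutes) / len(minutes)

print(mean_time_to_recovery_minutes(incidents))  # (30 + 120 + 45) / 3 -> 65.0
```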
Change Failure Rate
"We want to be able to bring changes into production," Buntel said, "and add value for customers, but we also want to make sure that the changes have been thought through and the software has been tested well enough that it doesn't cause a service incident."
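Change failure rate reduces to a simple ratio: of all changes pushed to production, what fraction triggered a service incident? A minimal sketch with hypothetical deploy records:

```python
# Hypothetical deploy records: True means the change caused a service incident.
deploy_outcomes = [False, False, True, False, False,
                   False, True, False, False, False]

def change_failure_rate(outcomes):
    """Fraction of production changes that resulted in a service incident."""
    return sum(outcomes) / len(outcomes)

print(change_failure_rate(deploy_outcomes))  # 2 failures out of 10 -> 0.2
```

Read alongside deployment frequency, this ratio shows whether a team is shipping faster at the expense of stability or improving both at once.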
These four measurements provide a solid picture of an organization's process, Buntel said, as long as they're global measures. "I want to emphasize that you shouldn't look at these measures locally or individually, but across the organization, and that you're not pitting teams against each other," he said. "Pressure to hit metrics, rivalries; these are signs that you're focusing on bad measures."
What other bad measures should organizations avoid? Topping Buntel's bad measures list:
Lines of Code
"Lots of people measure this, but it's another terrible one," he said. "Tim is shipping thousands of lines of code, so he must be really great -- or maybe he's just creating bloated software that'll be hard to maintain in the future. Okay, so less is better, right? Sure, unless it leads to cryptic code that only Tim can read. Lines of code is a measure that doesn't speak to anything meaningful in terms of outcomes. Instead, focus on the business problem you're trying to solve, and say to developers, do it with the most efficient code."
Velocity
"This is another measurement that leads to developers not working well together," he said. "Velocity is really much more about capacity planning. It comes from the Agile technique of estimating what we think the team should be able to deliver in a sprint. In DevOps it doesn't work well, because it's relative. You have some teams working on the back end and some working on the front end. Some are working on microservices, and some on APIs. These things differ significantly in terms of how they define velocity. Also, this is where we see developers trying to game the system. If they know that their DevOps success is going to be measured based on velocity, you can be sure that those developers are going to inflate their estimates. It can also lead to competitive situations in which you see teams comparing velocities, instead of focusing on the global needs and how to work together."
Buntel (who blogs here) teamed with Dr. Nicole Forsgren, CEO and Chief Scientist at DORA, earlier this year to present a webinar focused on this topic.
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at firstname.lastname@example.org.