The Agile Architect

Agile Deadline Ahead: Calculating Velocity

This month our Agile Architect tackles how you can answer perhaps the most important question of any development project: Are we going to make our deadline?

See if this sounds familiar: You're on a tight schedule to hit a hard deadline. Slipping the schedule is not an option. The team is working feverishly and functionality is being churned out at a rapid pace. You look at your Gantt charts, project estimates and work breakdown structure created at the project's inception six months ago, and they all say that everything is fine. Yet despite all of this, you've still got this nagging internal voice asking, "Are we gonna make it?"

What's missing? Why are you still uncomfortable? The answer is very simple. Despite how well the team has performed in the past, if you're asking yourself that question, then you don't really know how fast the team is going right now, nor do you know with confidence how much work is left to do.

Now, it's easy for me to sit back in my comfortable chair while writing this column and concoct some horror story notion of project planning gone wrong. But let's not do that. Instead, let's talk about a traditional, well-managed team:

Project plans were created at the beginning of the project. The team was staffed appropriately. Tasks were created with dependencies mapped out, and each task was assigned to the appropriate team members. The Gantt chart shows when the tasks should start and finish, and we can track when the schedule slips. New tasks are discovered, of course, and they get estimated and inserted into the schedule appropriately. You've got developers working on the user interface, others building the database, and still others handling all the pieces in the middle. And as long as everything works according to the project plan, all of these pieces will come together to create a cohesive product. So why worry?

I used to work like this in my days as a software developer for a major medical software company. Not only did we spend a lot of time planning and tracking the project, but we also had a tremendous amount of data about past projects. We claimed that, based on the data collected from past projects, we could estimate only the actual development part of the project (i.e., not QA, release or distribution activities) and still predict the entire project timeline with 95 percent accuracy. And yet, we were almost always late with our releases.

Of course, I was always on time with my part. And therein lies the problem. We all built our pieces; we all estimated our pieces; we were all on track with our pieces; but what about all the cracks in between? What about the unknowns -- the things we hadn't thought about, hadn't estimated and had pushed off toward the end of the development process?

So back to the question of how fast the team is going. In the scenario above, you can't effectively answer that question because you don't know how much of the work required to get your "finished" functionality out the door is being pushed off for later. In other words, your tasks may be done, but you don't know if you are really "done done," meaning that there is no other work required in order to release the functionality.

In an agile environment, we don't work like this. Rather than break the work down into relatively coarse-grained tasks, measured in days or weeks, that address one or more specific parts of the system (e.g. the database or the user interface), we break the release down into small stories measured in hours and certainly not more than a couple of days.

A story is a thin vertical slice of the system that provides value to the end user. Stories typically start from the user interface and reach all the way down to the database (or whatever is the lowest layer of the system). The various layers of the system are built up over time through a series of stories. Refactoring allows us to optimize the layers as we go, adding good design as we create more functionality and more value for our users. And our automated tests keep us from breaking anything as we go. Stories are finished when they are "done done" and not before. Once a story is completed it can be released with no hidden tasks or work that must be performed later.

But that still doesn't answer the question, "How fast are we going?"

The key is that each story explicitly defines what it means to be "done done." Each story is estimated by the team relative to the size of other stories. We don't need to estimate in real hours. We don't even need to track how many real hours a story took to complete, although that's useful for other purposes. But that "done done" definition must be included.

Our metric is very simple. In eXtreme Programming it's called velocity. Velocity is measured by how much estimated work we complete in a fixed period of time. For simplicity, let's say that we are working on a one-week cycle. (XP uses the term iteration; Scrum uses the term sprint.) Let's also say that we are so good at breaking down functionality into small stories that the team agrees they are all the same size in terms of level of effort. In this simple scenario, our velocity is the number of stories we complete each week. If we are completing five stories a week and we have 25 stories remaining, then we have five more weeks of work ahead of us, end of story. (Ouch! Bad pun intended.)
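
The arithmetic is trivial, but here it is as a minimal sketch in Python, using made-up numbers rather than anything from a real project:

    # Hypothetical numbers for illustration only.
    stories_completed_per_week = 5    # our velocity: same-sized stories finished each week
    stories_remaining = 25

    weeks_remaining = stories_remaining / stories_completed_per_week
    print(f"About {weeks_remaining:.0f} more weeks of work")  # -> About 5 more weeks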

It's not always easy to same-size every story. There are a lot of different ways to measure relative size. Different teams can pick which way works best for them. Some teams use story points, where the points represent an arbitrary scale. Others use t-shirt sizes (small, medium, large...), where a medium story may be double the size of a small story. On my team, we talk about "perfect pair" hours. A perfect pair hour is an hour where there are no interruptions and the pair is "in the zone." We started using perfect pair hours because we thought it gave us some grounding in reality. After four years of working this way, we really don't even think about hours. We just know based on the story what value to assign it. So we are really working in arbitrary story points, where the team has a shared understanding of how much work can get done in a story point.
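
Whatever scale you pick, it only has to be consistent enough to add up. As a purely hypothetical example, a team using t-shirt sizes might agree on a conversion like this so the sizes can be summed as points:

    # Hypothetical t-shirt-to-points conversion; the values are arbitrary,
    # what matters is that the whole team shares them.
    TSHIRT_POINTS = {"small": 1, "medium": 2, "large": 4}

    iteration_stories = ["small", "medium", "medium", "large", "small"]
    points_planned = sum(TSHIRT_POINTS[size] for size in iteration_stories)
    print(points_planned)  # -> 10 points planned for this iteration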

Regardless of how you choose to measure your velocity, it's easy to calculate at the end of each iteration. Simply add up the story points that were completed. This allows you to extrapolate a completion date for your project. By tracking velocity over multiple iterations, you can see when the team is going slower and take corrective action. Or maybe the team is going faster, and you can find out why so you can exploit it. Team velocity is always a great retrospective topic. How do we go faster? Why did we slow down this iteration? Why is our velocity inconsistent from iteration to iteration?
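
Here is a minimal sketch of that bookkeeping, again with invented numbers, averaging velocity over past iterations and projecting how much time remains:

    # Story points completed in each past iteration (invented data).
    completed_points = [8, 10, 9, 11]
    remaining_points = 60
    weeks_per_iteration = 1

    average_velocity = sum(completed_points) / len(completed_points)  # 9.5 points per iteration
    iterations_left = remaining_points / average_velocity             # about 6.3 iterations
    weeks_left = iterations_left * weeks_per_iteration

    print(f"Average velocity: {average_velocity:.1f} points/iteration")
    print(f"Roughly {weeks_left:.1f} weeks to completion")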

The discipline of writing stories and implementing them to be "done done" means that the software is always in a deliverable state with no loose ends. By estimating relative story sizes and continuously measuring velocity, the team can predict the project completion date based on story estimates or, conversely, determine how much functionality can be completed by a given deadline. In either case, decisions about story priorities and product release dates can be made by the business based on meaningful, quantitative metrics.
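
Running the same hypothetical numbers in the other direction gives the scope-by-deadline view:

    # How many story points fit before a fixed deadline? (Hypothetical values.)
    average_velocity = 9.5           # points per one-week iteration
    iterations_until_deadline = 4

    points_that_fit = average_velocity * iterations_until_deadline
    print(f"Plan for about {points_that_fit:.0f} points; lower-priority stories get deferred")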

Now I know what you are thinking. You're thinking to yourself, "Mark, you live in a fool's paradise. Of course in this perfect little world you've concocted, you can convince yourself that this all works. But I live in the real world. My project has all kinds of special cases and reasons why this just won't work."

First, let me thank you for your candid honesty. I appreciate the question and understand your concerns. Let me just say this: At my company, we are working on all varieties of projects, from your basic Web application to enterprise-scale systems to military software where lives are literally at stake. And we do all of the above successfully. It takes effort and discipline to follow these practices, but the effort pays off in terms of real predictability and the opportunity to optimize how the team works based on real, quantitative feedback.

Next Time…
Of course, there are many other metrics that can be measured on an agile project. We'll discuss them in future columns. However, next time we'll focus on the key to this process: Being able to break a project down into small stories that define "done done." This is as much an art as it is a science, but there are techniques and disciplines that make the job easier. Stay tuned!

About the Author

Dr. Mark Balbes is Chief Technology Officer at Docuverus. He received his Ph.D. in Nuclear Physics from Duke University in 1992, then continued his research in nuclear astrophysics at Ohio State University. Dr. Balbes has worked in the industrial sector since 1995, applying his scientific expertise to the discipline of software development. He has led teams ranging from a few software developers to a multi-national Engineering department with development centers in the U.S., Canada, and India. Whether serving as product manager, chief scientist, or chief architect, he provides both technical and thought leadership around Agile development, Agile architecture, and Agile project management principles.