The Agile Architect

How Agile Are You? Let's Actually Measure It! (Part 1: Technical Craftsmanship)

Our Agile Architect shares the first part of his Agile Assessment, focusing on technical craftsmanship.

In the first part of this series, "How Agile Are You? Let's Actually Measure It! (Part 0: Introduction)," we discussed the value of measuring agile maturity, along with the pitfalls of creating the illusion that this very qualitative measurement can be represented completely quantitatively.

In this section, we will dive deeper into the first area of the assessment, Technical Craftsmanship.

About Technical Craftsmanship
It is my firm belief that in order to truly be agile in software development, you must follow modern, disciplined technical practices, many of which were originally described in eXtreme Programming. The assessment below lets you evaluate how well your team implements these practices.

Note that I have previously discussed many of the technical craftsmanship issues listed below in earlier articles in this series.

Below are the assessment specifics for this area. Please see the introduction article for a description of the 0-5 scoring methodology. Remember that, while not shown in the lists below, a score of zero should be used when there is no capability in a particular area.

Area 1: Technical Craftsmanship Scoring

A. Automated Unit Tests

  1. Some automated unit tests exist and are run ad hoc.
  2. Automated unit tests are written for all new code and are run ad hoc.
  3. Automated unit tests are run by the developers and QA with each check-in/check-out.
  4. Automated tests are integrated with the build system. When the build fails, code/tests are fixed immediately.
  5. Build failures due to automated unit test failures are analyzed on a regular basis and actions are identified.

Principles: Value Delivery, Harnessing Change, Frequent Delivery, Communication, Sustainable Pace, Technical Excellence, Simplicity
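
To make the lower levels concrete, here is a minimal sketch of an automated unit test written for pytest. The parse_price function and its behavior are hypothetical, not from the assessment. At levels 1 and 2, a file like this is run ad hoc from the command line; the higher scores come from running the same tests automatically on every check-in and in the build.

# test_pricing.py -- a minimal pytest-style unit test (hypothetical example).
# Run ad hoc with `pytest test_pricing.py`; at levels 3 and up the same
# suite runs automatically on every check-in and inside the build.

def parse_price(text: str) -> float:
    """Parse a price string such as '$1,234.50' into a float."""
    return float(text.replace("$", "").replace(",", ""))

def test_parse_price_plain():
    assert parse_price("19.99") == 19.99

def test_parse_price_with_symbol_and_commas():
    assert parse_price("$1,234.50") == 1234.50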


B. Non-Functional Testing (e.g., Scalability, Performance, Stress, Load…)

  1. The team performs manual non-functional testing on an ad hoc basis.
  2. The team has scripts for a small amount of non-functional testing.
  3. The team has automated tests for non-functional testing.
  4. The team has a robust testing framework for non-functional testing. Testing is built into the CI environment and is monitored for changes. Alerts are generated when results are out of the acceptable range. The team collects metrics on the effectiveness of their non-functional testing.
  5. The team uses their robust testing framework to perform experiments to improve the results of their tests. Acceptable ranges are continually pushed forward to improve performance, load, scalability, etc.

Principles: Frequent Delivery, Sustainable Pace, Technical Excellence
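
As a miniature illustration of "automated tests for non-functional testing" (level 3), here is a hedged sketch of a performance check with an explicit acceptable range; at level 4, that range becomes the threshold the CI environment alerts on. The function, the workload, and the 0.5-second limit are all assumptions for illustration.

# perf_check.py -- a minimal automated performance check (hypothetical sketch).
import time

SLA_SECONDS = 0.5  # assumed acceptable range for this hypothetical operation

def expensive_operation(n: int) -> int:
    """Stand-in for the real code under test."""
    return sum(i * i for i in range(n))

def test_expensive_operation_meets_sla():
    start = time.perf_counter()
    expensive_operation(100_000)
    elapsed = time.perf_counter() - start
    assert elapsed < SLA_SECONDS, f"took {elapsed:.3f}s, limit is {SLA_SECONDS}s"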

C. Test-Driven Development

  1. Some passing knowledge from reading books and articles. Not yet applying the knowledge.
  2. Some code is being written test-driven. Some refactoring is occurring on existing code.
  3. The team is writing most production code test-driven. Basic concepts like mocks and stubs are understood and used. The team has begun adding automated tests around legacy code.
  4. The team uses mocking frameworks regularly. Code is ruthlessly refactored. The team has no fear of change. Legacy code is under control.
  5. Advanced refactoring techniques are in use. The team is experimenting with new ways to test, code and refactor.

Principles: Value Delivery, Harnessing Change, Frequent Delivery, Empowering the Teams, Sustainable Pace, Technical Excellence, Simplicity, Self-Organization, Continuous Improvement
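
Here is a small test-first sketch showing the mocks-and-stubs vocabulary of level 3, using Python's standard unittest.mock. The notify_overdue function and its gateway interface are hypothetical: the test was written first, and the production code exists only to make it pass.

# A mock stands in for the real email gateway, keeping the test fast and
# deterministic (hypothetical example).
from unittest.mock import Mock

def notify_overdue(accounts, gateway):
    """Send one reminder per overdue account; written to make the test pass."""
    for account in accounts:
        if account["days_overdue"] > 30:
            gateway.send(account["email"], "Your account is overdue.")

def test_only_overdue_accounts_are_notified():
    gateway = Mock()  # stand-in for the real email gateway
    accounts = [
        {"email": "a@example.com", "days_overdue": 45},
        {"email": "b@example.com", "days_overdue": 5},
    ]
    notify_overdue(accounts, gateway)
    gateway.send.assert_called_once_with("a@example.com", "Your account is overdue.")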

D. Continuous Integration

  1. Some passing knowledge from reading books and articles. Not yet applying the knowledge.
  2. The team is practicing some aspects of CI without a CI server. Developers commit, update and run tests ad hoc.
  3. The team has a CI server that compiles all code and runs all automated tests.
  4. The team uses the CI server to create production builds.
  5. The team is using CI to practice continuous delivery.

Principles: Value Delivery, Frequent Delivery, Measuring Progress, Sustainable Pace, Technical Excellence, Simplicity
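
The level-2 practice can be as simple as a script every developer runs before sharing code: update, run everything, stop on any failure. The sketch below assumes a git repository and a pytest-based suite; a real CI server supersedes it at level 3.

# ci_gate.py -- a "poor man's CI" gate for teams at level 2 (hypothetical
# sketch). A CI server replaces this script once the team reaches level 3.
import subprocess
import sys

def run(cmd: list[str]) -> None:
    print(f"$ {' '.join(cmd)}")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)  # stop immediately on any failure

if __name__ == "__main__":
    run(["git", "pull", "--rebase"])  # update before integrating
    run(["python", "-m", "pytest"])   # run the entire automated test suite
    print("All tests passed -- safe to push.")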

E. Pair Programming

  1. Pairing occurs ad hoc, primarily for information sharing (e.g., "Come over here and tell me about…").
  2. Developers pair most of the time. They still struggle to understand how to work together effectively.
  3. Developers pair most of the time. Pairing sessions are smooth and enjoyable. Pairs change within a story sometimes.
  4. Developers pair most of the time. Rules are in place to determine when pairing is required. Developers pair even when not required. Pairs change within a story frequently.
  5. The team experiments with different ways of pairing people together. A pairing pyramid or other metrics are collected to gauge how effectively the team is pairing.

Principles: Frequent Delivery, Empowering the Teams, Communication, Sustainable Pace, Technical Excellence, Simplicity, Self-Organization, Continuous Improvement

F. Spiking

  1. The team spikes on an ad hoc basis. Spiking is not effective because the team doesn't understand basic time boxing or how to know when a spike is done.
  2. The team is attempting to spike effectively but still struggles with knowing when the spike is done. The team still uses spike code in production.
  3. The team spikes effectively, knowing how to time box a spike and how to define when it is done. The team meets after a spike to discuss the results and plan next steps. The team does not use spike code in the production system.
  4. The team measures the effectiveness of spikes, ensuring that the spike stays within its time box and does only enough to answer outstanding questions. Metrics and reflection on the spike process drive improvements to the process.
  5. The team actively experiments with new methodologies and practices for spikes and is using metrics to measure the effects.

Principles: Value Delivery, Harnessing Change, Frequent Delivery, Business and Development Collaboration, Technical Excellence, Simplicity

G. Source Control/Branching

  1. Basic source control in place. Basic branching strategy. Little to no coordination with commits. Commits may break code.
  2. A known branching strategy, such as a stable-trunk or unstable-trunk methodology, is in place. Only working code is committed.
  3. Main branches (dev/QA/prod) are integrated with continuous integration server automated builds.
  4. All branches are built with CI.
  5. Team experiments with different branching strategies and measures the effect on metrics.

Principles: Frequent Delivery, Measuring Progress, Sustainable Pace, Technical Excellence

H. Release Management

  1. Releases don't happen or are very ad hoc.
  2. Releases happen haphazardly. Many manual tasks are required to move releases to production.
  3. Releases happen on a regular basis through semi-automated and automated processes.
  4. The team is measuring the effectiveness of their release process, e.g., how long it takes from feature complete to release.
  5. The team actively experiments with new release methodologies and practices and is using metrics to measure the effects.

Principles: Value Delivery, Frequent Delivery, Measuring Progress, Sustainable Pace, Technical Excellence
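
Level 4 hinges on measurement. Below is a minimal sketch of the feature-complete-to-release metric mentioned above; the release dates are invented for illustration.

# release_lead_time.py -- sketch of the level-4 measurement: elapsed days
# from "feature complete" to "released" (all dates are hypothetical).
from datetime import datetime
from statistics import mean

releases = [
    {"feature_complete": "2015-03-02", "released": "2015-03-09"},
    {"feature_complete": "2015-03-16", "released": "2015-03-20"},
]

def lead_time_days(release: dict) -> int:
    fmt = "%Y-%m-%d"
    done = datetime.strptime(release["feature_complete"], fmt)
    shipped = datetime.strptime(release["released"], fmt)
    return (shipped - done).days

print("mean lead time:", mean(lead_time_days(r) for r in releases), "days")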

I. Coding Standards

  1. Coding standards are minimal and are not enforced. Standards exist only as tribal knowledge and are not written down.
  2. Minimal coding standards are defined. Minimal enforcement through code review or other manual means.
  3. Coding standards are defined. Development tools are configured to enforce the standards.
  4. Coding standards are enforced through the development tools, configuration management and/or automated build system. Team on-boarding process is in place to train new members on these standards. Team measures adherence to the standards.
  5. Team experiments with different coding standards to measure the impact on velocity and other metrics.

Principles: Empowering the Teams, Sustainable Pace, Technical Excellence, Simplicity
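
Off-the-shelf linters cover most tool-enforced standards at level 3. The sketch below shows how a hypothetical team-specific rule, snake_case function names, could be automated and wired into the build for level 4: a nonzero exit code fails the build.

# naming_check.py -- a tiny custom coding-standards check (hypothetical
# sketch); run as `python naming_check.py src/*.py` from the build.
import ast
import re
import sys

SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def check_function_names(path: str) -> list[str]:
    """Return one violation message per non-snake_case function name."""
    tree = ast.parse(open(path).read(), filename=path)
    return [
        f"{path}:{node.lineno}: function '{node.name}' is not snake_case"
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and not SNAKE_CASE.match(node.name)
    ]

if __name__ == "__main__":
    violations = [v for f in sys.argv[1:] for v in check_function_names(f)]
    print("\n".join(violations))
    sys.exit(1 if violations else 0)  # nonzero exit fails the build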

J. Development Process

  1. Team has a passing knowledge of good development practices.
  2. Team has a basic development process in place (e.g., kanban, sprints…) and works to follow the process.
  3. Team knows their process and has a battle rhythm.
  4. Team has a definition of Done. Most work gets Done. Team measures their progress along the process (e.g., velocity, cycle time, burnup).
  5. Team experiments with their development process (e.g., from retrospective feedback) and measures the effect.

Principles: Harnessing Change, Frequent Delivery, Empowering the Teams, Communication, Measuring Progress, Sustainable Pace, Technical Excellence, Simplicity, Self-Organization, Continuous Improvement
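
As one example of the level-4 measurements, here is a minimal velocity calculation over Done stories; the sprint data is invented for illustration.

# velocity.py -- sketch of one level-4 metric: points Done per sprint
# (hypothetical data).
completed_stories = [
    {"sprint": 1, "points": 3}, {"sprint": 1, "points": 5},
    {"sprint": 2, "points": 8}, {"sprint": 2, "points": 2},
]

velocity: dict[int, int] = {}
for story in completed_stories:
    velocity[story["sprint"]] = velocity.get(story["sprint"], 0) + story["points"]

for sprint, points in sorted(velocity.items()):
    print(f"sprint {sprint}: {points} points")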

K. Shared Code Ownership

  1. Individual code ownership. Each developer has their fiefdom. Developers are afraid to touch code that is not theirs. The team knows there is a problem.
  2. Some shared code ownership. Developers can change code other than their own as long as they consult with the "expert."
  3. Shared code ownership. Developers can change any code. Tests or other mechanisms are in place to ensure they don't break functionality. The team is working to break down knowledge silos.
  4. Shared code ownership with no knowledge silos. Developers refactor ruthlessly. There is a very high bus factor for all parts of the code. The team measures the amount of code ownership.
  5. The team experiments with new ways to share code. They identify growing knowledge silos through metrics and take action to prevent them.

Principles: Harnessing Change, Frequent Delivery, Empowering the Teams, Communication, Sustainable Pace, Technical Excellence, Simplicity, Self-Organization, Continuous Improvement
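
One lightweight way to "measure the amount of code ownership" (level 4) is to count distinct commit authors per file as a rough bus-factor proxy. The sketch below assumes it runs inside a git repository; the file paths are hypothetical.

# ownership.py -- rough code-ownership measurement (hypothetical sketch):
# files touched by only one author are candidate knowledge silos.
import subprocess

def authors(path: str) -> set[str]:
    """Return the distinct commit authors for a file, per `git log`."""
    out = subprocess.run(
        ["git", "log", "--format=%an", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line for line in out.splitlines() if line}

for path in ["src/billing.py", "src/auth.py"]:  # hypothetical paths
    names = authors(path)
    flag = "  <-- possible knowledge silo" if len(names) <= 1 else ""
    print(f"{path}: {len(names)} author(s){flag}")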

L. Software Changeability

  1. The software is brittle. Changes to existing functionality and enhancements require a large amount of effort and are highly risky.
  2. Some tests exist for some areas of the code. The team adds tests for new code where they know how.
  3. Software is built through test-driven development. Tests are run regularly. Refactoring occurs on a regular basis. Most of the code is well factored. Team practices continuous integration without a CI server.
  4. The software is well factored and well tested. A continuous integration server notifies the team when the build/test process fails. The team measures how easily the code is able to embrace change. This could be through velocity, code complexity metrics, qualitative assessment or other means.
  5. New features can easily be added through refactoring of the existing code followed by small extensions. Team is measuring velocity and other metrics while experimenting with refactoring to make the code easier to work with and thus faster to add features to.

Principles: Value Delivery, Harnessing Change, Frequent Delivery, Empowering the Teams, Sustainable Pace, Technical Excellence, Simplicity
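
To illustrate level 5, here is a hypothetical before-and-after: a small refactoring (introducing a format registry) turns "add a new export format" from a risky edit of existing logic into a small, low-risk extension.

# changeability_sketch.py -- hypothetical before/after (not from the article).
import json

# Before: every new export format means editing this growing conditional.
def export_report_before(data: dict, fmt: str) -> str:
    if fmt == "csv":
        return ",".join(f"{k}={v}" for k, v in data.items())
    if fmt == "json":
        return json.dumps(data)
    raise ValueError(f"unknown format: {fmt}")

# After: formats live in a registry, so a new format is one new entry and
# the existing, tested code path never changes.
FORMATTERS = {
    "csv": lambda data: ",".join(f"{k}={v}" for k, v in data.items()),
    "json": json.dumps,
}

def export_report(data: dict, fmt: str) -> str:
    try:
        return FORMATTERS[fmt](data)
    except KeyError:
        raise ValueError(f"unknown format: {fmt}") from None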

Please go to the next article in this series for the next assessment area, Quality Advocacy.