Once your team starts work, the most frequent question your stakeholders will ask is likely to be “When will it be done?”
Agile and its frameworks do not strictly prescribe any specific set of metrics. Values, techniques, practices and definitions can help organizations navigate different strategies and identify the best approach for each situation. Here I would like to share some principles and guidelines that have been useful to me in different projects. 

  1. It depends.
  2. KISS: Keep It Simple, Stupid!
  3. If you don’t measure, it doesn’t exist.
  4. Focus on a few important metrics rather than tons of meaningless data.
  5. Metrics are not necessarily related to KPIs or KPOs. Measures are not the ultimate objective: metrics are a means to quantify improvements, or the need for them.
  6. Not all metrics are useful. Don’t waste time copy-pasting numbers and graphs into PowerPoints, Excel sheets, status updates, documents and meetings.
  7. Metrics, as well as feedback, must be actionable.
  8. If you don’t have a plan and process to improve your performance, all measures are useless.
  9. Systems and communities evolve all the time. A system of measurement can become obsolete.
  10. Make your metrics automated, transparent and easy to understand. Avoid constructed metrics, calculations and formulas. Choose a tool and a process that work for you. Inspect and adapt them according to the situation, maturity and environment.
  11. If you need to change metrics more often than twice a year, there may be something wrong. Ask yourself: are these the right metrics? Why do I need them?
  12. Don’t hide from numbers you don’t like. Fix the systemic issue. Remove the root cause, not the symptoms. Don’t cheat. Be transparent and truthful.
  13. Measures should be reliable and consistent. By observing historical trends you should increase predictability and improve your processes.
  14. Simplicity, the art of maximizing the amount of work not done, is essential.
  15. Working software is the primary measure of progress.

Let’s start!

Performance
A successful measure of performance should have two key characteristics:

  1. It should focus on global outcomes to ensure teams aren’t pitted against each other. The classic example is rewarding developers for throughput and operations for stability. This is a key contributor to the “wall of confusion”, in which development throws poor-quality code over the wall to operations, and operations puts in place a painful change management process as a way to inhibit change.
  2. Measures should focus on outcomes, not output. They shouldn’t reward people for putting in large amounts of busy work that doesn’t actually help achieve organizational goals.

The main 5

The first 4 metrics that capture the effectiveness of the development and delivery process can be summarized in terms of throughput and stability. The State of DevOps 2019 study measures the throughput of the software delivery process using lead time of code changes from check-in to release, along with deployment frequency. Stability is measured using time to restore (the time it takes from detecting a user-impacting incident to having it remediated) and change fail rate, a measure of the quality of the release process.

SOFTWARE DEVELOPMENT

1 – Lead time: Lead time is the time it takes to go from a customer making a request to the request being satisfied. The elevation of lead time as a metric is a key element of Lean theory.

2 – Deployment frequency: How often does an organization deploy code to production or release it to end users?

SOFTWARE DEPLOYMENT

3 – Change fail rate: For the primary application or service you work on, what percentage of changes to production, or releases to users, result in degraded service (e.g. lead to service impairment or a service outage) and subsequently require remediation (e.g. require a hotfix, rollback, fix forward or patch)?

4 – Time to restore: How long does it generally take to restore service when a service incident or a defect that impacts users occurs (e.g. an unplanned outage or service impairment)?
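
To make these definitions concrete, here is a minimal sketch in Python of how the four measures could be computed from your own records. The data layout and field order are hypothetical, and the survey-based State of DevOps reports do not prescribe this exact calculation; it is only an illustration.

```python
from datetime import datetime
from statistics import median

# Hypothetical export from a CI/CD pipeline: (commit time, deploy time, caused degradation?)
deployments = [
    (datetime(2021, 3, 1, 9), datetime(2021, 3, 1, 15), False),
    (datetime(2021, 3, 2, 10), datetime(2021, 3, 3, 11), True),
    (datetime(2021, 3, 4, 8), datetime(2021, 3, 4, 12), False),
]
# Hypothetical incident records: (detected, restored)
incidents = [
    (datetime(2021, 3, 3, 12), datetime(2021, 3, 3, 14)),
]
period_days = 7  # length of the observation window

# 1 - Lead time: from check-in to release (median, in hours)
lead_time = median(
    (deployed - committed).total_seconds() / 3600
    for committed, deployed, _ in deployments
)

# 2 - Deployment frequency: deployments per day over the window
deployment_frequency = len(deployments) / period_days

# 3 - Change fail rate: share of deployments that degraded the service
change_fail_rate = sum(failed for _, _, failed in deployments) / len(deployments)

# 4 - Time to restore: from detection to remediation (median, in hours)
time_to_restore = median(
    (restored - detected).total_seconds() / 3600
    for detected, restored in incidents
)

print(f"Lead time (median): {lead_time:.1f} h")
print(f"Deployment frequency: {deployment_frequency:.2f}/day")
print(f"Change fail rate: {change_fail_rate:.0%}")
print(f"Time to restore (median): {time_to_restore:.1f} h")
```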

The above metrics directly reflect the following Agile values:
– Working software over comprehensive documentation
– Customer collaboration over contract negotiation
– Responding to change over following a plan


… and principles:


– Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
– Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
– Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
– Working software is the primary measure of progress.
– Simplicity–the art of maximizing the amount of work not done–is essential.

If you have any doubt about Business Agility effectiveness, here are some facts for you.
According to the “State of DevOps 2018: Strategies for a New Economy” report, when compared to low performers, high performers have:

46 times more frequent code deployments
440 times faster lead time from commit to deploy
170 times faster mean time to recover from downtime
5 times lower change failure rate (1/5 as likely for a change to fail)

If you want to collect metrics that can help you self-evaluate your performance from a wider perspective, these 4 metrics are for you.

5 – HAPPINESS
The 5th metric, happiness, responds to other Agile principles:

– Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
– Agile processes promote sustainable development.

Happiness is personal growth, job satisfaction and fulfillment. Happiness is the feeling of enjoying your day.
Happy people make more money, sell more, cost less, are less likely to leave their jobs, are less likely to burn out and are healthier.
Happiness is a predictive measure of success. 
In Scrum, the “Happiness Metric” derives from the work done at Toyota by Taiichi Ohno.

In my previous article I already explained in detail why happiness is one of the most important organisational performance metrics and how to measure it.

Here are a few example questions for deriving a Happiness metric, similar to NPS:

  1. On a scale from 1 to 10, how happy do you feel about your role in the company?
  2. On the same scale, how do you feel about the company as a whole?
  3. On the same scale, how likely is it that you would recommend working on this team/project to a friend?

You should aim to score 7 or more if you don’t want your organization to lose money every day. You could adopt the same concept to measure other indicators, but, as said, the point of measuring is not to drown in a huge amount of useless data.
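
As a minimal sketch, assuming a simple anonymous survey on a 1–10 scale (the numbers below are made up), the answers could be aggregated like this, mirroring both the “7 or more” target and the NPS convention:

```python
from statistics import mean

# Hypothetical anonymous answers to one question, scale 1-10.
answers = [8, 9, 6, 7, 10, 5, 8, 7]

average = mean(answers)
share_happy = sum(score >= 7 for score in answers) / len(answers)

# NPS-style aggregation: promoters (9-10) minus detractors (1-6), in percent.
promoters = sum(score >= 9 for score in answers) / len(answers)
detractors = sum(score <= 6 for score in answers) / len(answers)
nps_style = (promoters - detractors) * 100

print(f"Average happiness: {average:.1f}/10")
print(f"Answers scoring 7 or more: {share_happy:.0%}")
print(f"NPS-style score: {nps_style:+.0f}")
```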

Scrum Metrics

A Scrum team is expected to release at least once per Sprint. 
Not necessarily to production. For instance, your customer may not be ready to welcome continuous product enhancements, or your R&D team is working on an innovative project with no customers yet. There are many reasons why you may not want to promote a release-ready increment to production. In any case, your team should release to an environment as close to production as possible. Mobile applications are not an exception.

– Assuming such an environment exists, and assuming the team is responsible, enabled and empowered to release to this environment (CI/CD, Validation & Verification, staging or a physical device).
– Assuming the team has a Definition of Ready (DoR) and a Definition of Done (DoD).

Here are 2 metrics I would endorse:

1 – Time To Ready (TTR): It is the time it takes to go from creating the Product Backlog Item (PBI) for a Request for Feature (RFF) or Request for Change (RFC) to the moment the item, fulfilling the DoR, enters the Sprint Backlog.
2 – Time To Done (TTD): It is the time it takes to go from Ready to Done.

You can measure the averages for TTR and TTD respectively. You can also use percentiles.
These measures can be useful when you have difficulties measuring lead time or improving it. Are you receiving enough clear feedback from customers? Do you have issues with refinement? Do you have issues with deployments? Should you revisit the DoR or DoD?
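
As an illustration, and assuming your backlog tool can export the Created/Ready/Done timestamps of each item (the data below is made up), averages and percentiles for TTR and TTD could be computed like this:

```python
from datetime import datetime
from statistics import mean, quantiles

# Hypothetical export: (created, ready, done) timestamps per PBI.
items = [
    (datetime(2021, 4, 1), datetime(2021, 4, 6), datetime(2021, 4, 13)),
    (datetime(2021, 4, 2), datetime(2021, 4, 9), datetime(2021, 4, 20)),
    (datetime(2021, 4, 5), datetime(2021, 4, 8), datetime(2021, 4, 16)),
    (datetime(2021, 4, 7), datetime(2021, 4, 15), datetime(2021, 4, 27)),
]

ttr_days = [(ready - created).days for created, ready, _ in items]  # Time To Ready
ttd_days = [(done - ready).days for _, ready, done in items]        # Time To Done

for name, values in (("TTR", ttr_days), ("TTD", ttd_days)):
    p85 = quantiles(values, n=100)[84]  # 85th percentile (Python 3.8+)
    print(f"{name}: average {mean(values):.1f} days, 85th percentile {p85:.1f} days")
```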

If you have Sprints you may find it easier to measure:

  1. TTR Rate: how many items shifted from Created to Ready during the Sprint?
  2. TTD Rate: how many items were completed during the Sprint?


– Assuming you adopt Scrum and Sprints.
– Assuming you use Fibonacci story points for estimates.
You have probably used the following at the beginning, or up to a certain point:

  1. Burn Down and/or Burn Up Charts (not metrics but tools)
  2. Committed Story Points: the amount of story points committed by the team in each iteration.
  3. Completed Story Points (Team Velocity): the amount of story points completed by the team in each iteration. Unfinished stories are not counted, not even partially and are not re-estimated.
  4. Added/Removed Story Points: the amount of story points added to or removed from the Sprint Backlog. You may want to look at the percentage change of Sprint scope. Is it good? Bad? It depends; the Scrum Master can help interpret the data. Excessive or recurring changes that endanger the Sprint Goal are often bad. A team that welcomes and embraces requests for change in a disciplined way demonstrates openness, courage, commitment and confidence. It qualifies as high performing.
  5. Carried items from previous sprint: the amount of story points carried over. This should really tend to zero after a few iterations. 

If you don’t use story points, you can count the number of items. This can be useful when, for instance, the team uses story points only for stories but not for tasks, bugs or spikes. If your backlog management tool allows categorization, you can break down the types of issues, for instance among stories, tasks, spikes and bugs.

The committed/completed rate provides historical trends and evident measures of progress or performance issues. Completed Story Points (or items) per Sprint provide Team Velocity and Delivery. Each team should know exactly how much work they can get done in each Sprint.

Delivery = Velocity x Time.

Once you know your speed you know how soon you’ll get there.
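
A minimal sketch of the forecast this enables, with made-up numbers (the completion history would come from your own Sprint records):

```python
from math import ceil
from statistics import mean

# Hypothetical history: completed story points per Sprint (Team Velocity).
completed_per_sprint = [21, 18, 24, 20, 22]
velocity = mean(completed_per_sprint)   # points per Sprint

remaining_points = 130                  # points left in the release backlog
sprint_length_weeks = 2

# Delivery = Velocity x Time  =>  Time = Remaining work / Velocity
sprints_needed = ceil(remaining_points / velocity)

print(f"Velocity: {velocity:.0f} points per Sprint")
print(f"Estimated Sprints to completion: {sprints_needed} "
      f"(about {sprints_needed * sprint_length_weeks} weeks)")
```

Using the minimum and maximum observed velocity instead of the mean gives an optimistic/pessimistic range, which is usually more honest than a single date.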

Bugs

Whether the product is released to production, or you have test engineers on your team or another QA process in place, it is important to monitor defects:
– Number of New Bugs Opened/Accepted during the Sprint.
– Number of Bugs Fixed during the Sprint.
– Number of Bugs Re-opened during the Sprint.

Flow & Kanban

If your organization is not yet mature enough from the cultural perspective to use the first 5 metrics, agile still provides some good support.
From the business perspective we want to maximize the flow of business value delivered in a certain interval of time (iteration).
To monitor and maximise the flow you could use:

  1. Cycle Time: The elapsed time, start to finish, for a work item. You want to minimise this.
  2. Work In Progress (WIP): The number of started but unfinished work items. You want to minimise this to avoid waste.
  3. Throughput: The number of items finished in a period of time. You want to maximise this if your items are really the most valuable ones.
  4. Work Item Age: The elapsed time since a work item started. You want to minimise this.

If your team is adopting Kanban, the first 3 metrics can help measure your team’s performance. Cycle time, throughput and WIP are connected by Little’s Law, which applies to any system that meets its assumptions.


Cycle Time = WIP/Throughput
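
A quick worked example with hypothetical numbers: if the team keeps an average of 10 items in progress (WIP) and finishes 5 items per week (throughput), the average cycle time is 10 / 5 = 2 weeks. All else being equal, halving WIP to 5 items would halve the average cycle time to 1 week, which is why limiting WIP is such a powerful lever.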

The above metrics are useful to intercept some Kanban issues or antipatterns and help your team inspect and adapt. For example, if you are already limiting the amount of WIP and the team is very disciplined, keeping track of the WIP may not be of any use if there is nothing more to improve there.

Acceptance

The Product Owner is responsible for the Acceptance Gate. Is what the team claims to be done really Done? The Sprint Review is the moment where the customer, PO and stakeholders provide feedback. The PO is responsible for accepting or rejecting.

  1. Was the team expected and enabled to release? If yes, did the release happen? Yes/No
  2. Was the Sprint Goal achieved? Yes/No

If your team doesn’t collect streaks of Yes answers, there is something critical to fix. It could be a problem of communication with stakeholders, lack of clarity from the PO, lack of skills or tools among the Devs, absence of vision, insufficient customer management, an uncertain roadmap, lack of trust, team dysfunctions, or many other things.

Blockers

I do not advocate monitoring blockers, for a few reasons. Everyone would be better off dedicating time to removing impediments and anticipating blockers, relentlessly and as soon as possible. Critical and recurring difficulties should be tracked and discussed during retrospectives.
How many items are blocked? How many times has the same item been unblocked and blocked again? For how long did an item stay blocked? How do you measure the impact or delay due to bottlenecks or miscategorised blockers? How do you quantify the ripple effect of an impediment or a bad policy?
Blockers can be caused by bad culture, silos, or inefficient management of people, money, time, tools and resources. Anything that prevents the team from being a happy, successful, high-performing team is an impediment. In some workplaces stupid policies rule. Do not encourage people to add bureaucracy on top of impediments. Act on the root cause rather than on the effect.

Conclusion


If your organization is not willing to adapt after inspection, there is no point in collecting metrics. It is just another box-ticking exercise that wastes time and money.
Focus on what really matters. A few actionable metrics, in conjunction with a growth mindset, agile values and practices, can open highways to organizational performance.

There is a famous Dilbert comic strip where a customer service agent answers the phone saying “Hello, this is tech support. May I close your ticket now?”.
His KPIs and metrics are based on how quickly he can close a ticket. No matter if it is solved. No matter the logic. No matter the quality. No matter the customer satisfaction. Closing tickets without solving the issue is nonsense. What matters is to listen, understand and solve the issue so that it doesn’t happen again. It is paramount to have actionable metrics. The rest is waste. Remove the waste as soon as you can, or it will smell and make everything dirty.

Measure only what you need. An excess of KPIs creates distorted and artificial behaviours, more focused on showing and tracking nonsense.

That’s why after all,

“Working software is the primary measure of progress”.

* * *

I write about organizational patterns, transformational leadership, healthy businesses, high-performing teams, the future of the workplace, culture, mindset, biases and more. My focus is on leading, training, and coaching teams and organizations to improve their agile adoption. Articles are the result of my ideas, studies, reading, research, courses, and learning.
The postings on this site and any social profile are my own and do not represent or relate to the postings, strategies, opinions, events, situations of any current or former employer.

This article has been published for the first time on danieledavi.com by the author Daniele Davi’.
© Daniele Davi’, 2021. No part of this article or the materials available through this website may be copied, photocopied, reproduced, translated, distributed, transmitted, displayed, published, broadcast or reduced to any electronic medium, human or machine-readable form, in whole or in part, without prior written consent of the author, Daniele Davi’.