When we talk about agility, one of the questions we hear most is: where do I start? How can I diagnose my team’s agility and how do I know what my team or company should invest in to improve and become more agile?
At Knowledge21, we’ve created a really simple tool to help answer this question. As well as making your team’s agility tangible, it’s quite visual, which helps the whole organization to achieve transparency in the transformation process.
We’ve already mentioned the Agile Radar here, and in this article, we’ll go a little deeper into a few criteria you can use to build it.
Flexibility
Before we start, it’s important to bear in mind that none of the criteria we’re defining here is obligatory. Rather than rigidly following a list, it’s more important to be aware that:
- Measuring too many things means you’re not focused on anything.
- It’s no use following a metric if it doesn’t help you make decisions.
- Every good metric leads to some action of improvement. If you can’t tell which action to take based on that metric, perhaps you’re measuring the wrong thing.
- Agility is a four-legged animal (Business, Culture, Organizational, and Technical). If you let one paw go lame, the animal falls over. That’s why you must ensure that the metrics you’re considering cover all four domains.
- It is common to discover which metric is the most important… and to have no idea how to start measuring it.
Agile Radar and its criteria
Business
When we talk about Business metrics, we’re talking about efficacy. That means we’re trying to measure whether the team or company is doing the right thing: doing what brings results. For example, we can have excellent quality, with all processes automated. We can have a marvelous culture, with a high degree of collaboration and trust. We can have clear roles and high delivery speed, and nevertheless… we might still be creating a product no one uses, or which isn’t economically sustainable. The metrics below are a few examples for understanding whether your team is being effective:
- Product metrics: those which indicate whether our product is going in the right direction. Examples: Service, User Behavior, Churn, Market Growth, Acquisition Cost, Cost of Delay, Operation Cost, Invoicing, Market Share, Contact Channels Share, Fitness for Purpose Score (F4P), Lifetime Value (LTV), Payback, Pirate Metrics (Acquisition, Activation, Retention, Revenue, and Referral), Net Promoter Score (NPS), Return on Investment (ROI), Social Sense, Active Users, Sales, etc.
- ROI: return on investment. It’s important to understand that “return” can mean many things. Income? Satisfaction? It’s important to define what will be measured as return, as well as what counts as investment. Would this be hours of effort? Points? Cost? Provided there is a definition for both, and it’s clear to the whole team or organization, you’re measuring ROI (see the sketch after this list).
- Thin slices: we’re able to slice the problem into small pieces to solve, guaranteeing the constant delivery of value. Small pieces of the solution solve users’ problems effectively and/or invalidate important hypotheses.
- Hypothesis tests: we treat any good idea as a hypothesis to be (in)validated. Finding out that our solution doesn’t solve an important problem is just as important as succeeding in the validation. Learning is highly valued and celebrated because it paves the way to the product’s success.
- Prioritization: we base our prioritization on metrics, and often discard items which don’t solve a critical problem or aren’t part of the product’s current objectives. We’re not attached to any particular item in the backlog, and the items in it exist for reasons grounded in metrics and hypotheses, not opinions or personal taste.
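To make the ROI idea concrete, here is a minimal sketch in Python. The choice of “return” (revenue attributed to a feature) and “investment” (cost of the effort spent), as well as the numbers, are hypothetical; the point is simply that once both are defined and shared, the calculation itself is trivial.

```python
# Minimal ROI sketch. What counts as "return" (here: revenue attributed to a
# feature) and as "investment" (here: cost of the effort spent) are
# assumptions the team must agree on; the numbers below are hypothetical.

def roi(return_value: float, investment: float) -> float:
    """ROI = (return - investment) / investment."""
    if investment == 0:
        raise ValueError("Investment must be non-zero to compute ROI.")
    return (return_value - investment) / investment

# Example: a feature that cost 40,000 in effort and brought in 55,000 in revenue.
feature_roi = roi(return_value=55_000, investment=40_000)
print(f"ROI: {feature_roi:.0%}")  # e.g. "ROI: 38%"
```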
Culture
Cultural metrics are generally the ones the organization finds the most difficult to adopt. We must find ways of measuring how healthy the environment is, whether we have a collective growth mindset, whether we’re adopting continuous improvement as part of the organization’s culture, and how close we are to having a safe, open environment in which people trust each other. We have to understand whether there are healthy conflicts in the company, whether people commit to the agreements reached, and whether they feel responsible for, and protagonists of, the results achieved.
- Continuous improvement: we stop periodically, analyze our experiments and reflect on how to improve.
- Motivation: we’re engaged and motivated by the job at hand. We believe what we’re doing contributes to the world.
- Leadership: if we identify an opportunity to contribute, we act fearlessly. We actively seek to learn collectively and offer solutions, with hands-on work to make them happen.
- Autonomy: we feel we have the power to make decisions and are responsible for the decisions we take. We seek to decide together, not to avoid blame, but to value our colleagues’ contribution to the decision making.
- Interdisciplinarity: we’re always learning new things which lead us out of our comfort zone and help us be more versatile professionals.
- Reaction to change: we welcome change and adapt to it easily. We’re not attached to solutions which aren’t the most suitable for meeting our needs.
Organizational
These are the metrics the company generally finds it easiest to demand (but not necessarily to measure). In every organization we work with, without exception, we have to help answer the same question: “when will it be delivered?”. However, when we ask whether anyone is measuring lead time – the metric most often used to answer this question – the answer is generally “No!”. So we recommend a few options for getting a view of the organization’s efficiency and structure.
- Lead time, Cycle time: we know how long an idea takes from the moment it’s proposed, through implementation, until it starts delivering value to our end client (see the sketch after this list).
- WIP: we’re clear about having to stop starting and start finishing, which is why, at the bottleneck points of the process, we have an overview of our work in progress (WIP) and can limit it whenever necessary.
- Vision of value flow: we know the value flow of our products and processes, and act to resolve bottlenecks. The process is visible to all (e.g. Kanban on the wall), and everyone can take part and generate improvement.
- People over processes: processes are important, but they can never come before the actions of people to deliver value. That’s why we have a few rules and make our restrictions clear, leaving people free to act.
- Low hierarchization: we have few chiefs, and all act as leaders, not bosses.
- Clarity of roles and responsibilities: we know each person’s role in the company and have no problem making demands of, or asking for help from, the right person.
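As a concrete illustration of lead time, cycle time, and WIP, here is a minimal sketch in Python. It assumes you can export three dates per card from your board (requested, started, delivered); the card structure and the data below are hypothetical, and in practice they would come from your board tool or a spreadsheet.

```python
# Minimal sketch: lead time, cycle time, and current WIP from the dates on
# the cards of a Kanban board. The Card structure and dates are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Card:
    title: str
    requested: date            # when the idea entered the backlog
    started: Optional[date]    # when the team started working on it
    delivered: Optional[date]  # when it reached the end client

cards = [
    Card("Card A", date(2023, 3, 1), date(2023, 3, 10), date(2023, 3, 20)),
    Card("Card B", date(2023, 3, 5), date(2023, 3, 12), None),
    Card("Card C", date(2023, 3, 8), None, None),
]

delivered = [c for c in cards if c.delivered]
lead_times = [(c.delivered - c.requested).days for c in delivered]
cycle_times = [(c.delivered - c.started).days for c in delivered if c.started]
wip = sum(1 for c in cards if c.started and not c.delivered)

print(f"Average lead time:  {sum(lead_times) / len(lead_times):.1f} days")
print(f"Average cycle time: {sum(cycle_times) / len(cycle_times):.1f} days")
print(f"Current WIP:        {wip} card(s)")
```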
Technical
The technical domain seeks quality in everything we do. We often come across the illusion that the technical domain is only about software. Yet, for example, a legal department which is failing in its technical domain will have a lot of trouble with the quality of its contracts. Its lawyers will suffer because they don’t know how to act, and they’ll quickly conclude they need to be better prepared.
- Quality metrics: there are mechanisms which guarantee that the delivered value is sustainable in the long term. If we’re talking about software, there’ll be a suite of automated tests covering the code according to good practices (see the sketch after this list). If it’s a manufacturing process, there’ll be a check of the result of each item.
- Stop the line: when an error is found, everyone stops to resolve it. There is a very low tolerance of quality deficiencies.
- Evolution of knowledge: everyone in the team is always seeking new ways of improving their work, whether through courses, new technologies, proofs of concept, etc. There is a real focus on finding faster ways to deliver value.
- Experimentation: the structure of the product or process is designed to allow for experimentation, so the team is always able to learn.
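As an illustration of quality metrics and “stop the line” working together, here is a minimal sketch of a quality gate that fails a build when test coverage drops below a threshold agreed by the team. The report format, the read_coverage helper, and the 80% threshold are hypothetical; the coverage value itself would come from your test tooling.

```python
# Minimal "stop the line" sketch: a quality gate that fails the build when an
# agreed quality metric (here, test coverage) drops below a threshold.
# The JSON report format, the read_coverage() helper, and the 80% threshold
# are hypothetical examples.
import json
import sys

COVERAGE_THRESHOLD = 80.0  # agreed by the team

def read_coverage(report_path: str) -> float:
    """Read the overall coverage percentage from a JSON report."""
    with open(report_path) as report:
        data = json.load(report)
    return float(data["total_coverage"])

def main() -> None:
    coverage = read_coverage("coverage-report.json")
    if coverage < COVERAGE_THRESHOLD:
        print(f"Coverage {coverage:.1f}% is below {COVERAGE_THRESHOLD}% - stopping the line.")
        sys.exit(1)  # non-zero exit fails the pipeline, so everyone stops to fix it
    print(f"Coverage {coverage:.1f}% - quality gate passed.")

if __name__ == "__main__":
    main()
```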
From the moment you identify the most important criteria to analyze, the next step is to set up the Radar and run a self-evaluation exercise with the team, where everyone can reflect on where the team is and the next step it wants to take. Leave room for everyone to contribute their opinion and run consensus exercises to generate a common vision. Afterward, agree on at least one action to be carried out to improve the Radar’s criteria.
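A minimal sketch of how the self-evaluation could be consolidated: each person scores each chosen criterion (say, from 0 to 5), and the team averages the scores as a starting point for plotting the Radar. The criteria names and scores below are hypothetical.

```python
# Minimal sketch of consolidating a team's self-evaluation for the Radar.
# Each person scores each criterion from 0 to 5; the criteria and votes below
# are hypothetical examples.
from statistics import mean

scores = {
    "Thin slices":            [3, 4, 3, 2],
    "Continuous improvement": [4, 4, 5, 4],
    "Lead time":              [2, 2, 3, 2],
    "Quality metrics":        [3, 3, 4, 3],
}

for criterion, votes in scores.items():
    print(f"{criterion:<24} {mean(votes):.1f}  {'#' * round(mean(votes))}")
```

The averages are only a conversation starter; the consensus exercise is what produces the team’s common vision of where it stands on each criterion.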
This list isn’t exhaustive for any of the domains. It’s important to remember that the Radar adapts to the team’s reality, and should always evolve with time.
Did you enjoy this article? Try the Agile Radar with your team and add your comments below. 🙂