In this article, we'll focus on what we track at Comarch for projects based on the Scrum methodology, how we track it, and why. We're going to look at the most important metrics for three project types: implementation, maintenance, and system development.
Factors that can be tracked in a project are usually divided into several categories:
- work progress and planning
- quality of deliverables
- costs and income
Some of them help you plan the work well; others are used to improve the process and detect problems along the way.
Work progress and planning:
- story points burnt – a basic metric for the team; it should be checked every day during the Daily Scrum. It shows work progress over time based on the number of story points burned or issues completed. With this metric, the team can see whether the work is going according to plan, which tasks take more time than expected, and whether something new had to be done "on the fly".
- velocity chart – shows the number of completed story points in recent sprints, along with what the team planned to do at the beginning of each sprint. With the chart, you can plan your sprints better based on historical data. After the team's speed has stabilized, any deviation from the norm should raise an orange flag. Then you have to find the cause using other metrics, or during a retrospective meeting.
- estimated version completion date – based on team speed and backlog estimates, you can calculate the expected completion date of a given version. It's best to present such a date as a graph (Version Report) and show it during the Sprint Review. If necessary, you can change priorities or reduce the version scope.
- story points for user stories that meet the ‘definition of ready’ (DoR) criteria – this information is useful when determining how much work we have to do in sprints. Comparing this with team speed, you can see right away whether some of the work should be spent on new issues and whether the Product Owner generates new ideas quickly enough. It may also turn out that you’re pushing too far the other way, and have tasks that meet the DoR criteria six months in advance, which is by no means a good sign.
- team capacity – team availability in a sprint, used when planning a sprint. It can be presented as a % of full capacity or as a number of man-days. Based on team capacity and speed, the number of story points that the team will plan for a sprint is determined.
- predictability – the percentage of planned story points that were actually completed. Large deviations in either direction are undesirable in a project. One of the main reasons for deviations is that the team takes stories that are too big into the sprint, and some of them spill over to the next sprint. In that case, you have to look at the process, starting with the backlog.
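The planning metrics above boil down to a few simple calculations. Here is a minimal sketch in Python; the sprint history, capacity figure, and backlog size are hypothetical numbers, not data from any real project or tool:

```python
from statistics import mean

# Hypothetical (planned, completed) story points for recent sprints
history = [(30, 28), (32, 30), (30, 31), (34, 27)]

# Velocity: average story points completed per sprint
velocity = mean(done for _, done in history)

# Predictability: % of the planned story points actually completed
predictability = [done / planned * 100 for planned, done in history]

# Scale the next sprint's plan by team capacity (e.g. 80% availability)
capacity = 0.8
next_plan = round(velocity * capacity)

# Estimated sprints left for a remaining backlog (ceiling division)
backlog_remaining = 120
sprints_left = -(-backlog_remaining // next_plan)
```

Multiplying the number of remaining sprints by the sprint length gives the estimated version completion date; tools such as Jira's Version Report draw this projection for you automatically.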
Quality of deliverables:
- new bugs per timeframe – a basic statistic for checking artifact quality. You can also track the number of bugs against the number of story points burned by the team while building a feature. If go-live dates are cyclical, you can also check whether there are any dependencies between the number of bugs reported and go-live times.
- bugs fixed per timeframe – this metric works best when compared with the previous one: how many bugs the team fixed, and how much time the fixes took.
- man-days spent on bug fixes – this metric will quickly confirm or rule out that, for example, a lower team speed is due to a higher number of bugs reported by the customer. It's useful when you have a separate maintenance agreement covering user acceptance (UAT) and production tests.
- bug density – the number of bugs vs. the number of man-days spent on development. A metric that shows, very clearly, the quality of an artifact delivered to the customer. You can track bugs for the entire project period, for individual product versions, or for a single Epic. With more detailed data, such as bug density per Epic, you can analyze what caused the poor quality of a feature: misunderstanding of customer requirements, less thorough tests, or perhaps technical debt in that part of the system. A great topic for a retrospective meeting!
- number of code lines vs. number of unit tests – code coverage by unit tests is often measured; you can also measure how production code grows relative to unit test code. An interesting metric proposed by one of our teams.
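Bug density per Epic is easy to compute once bug counts and development effort are recorded per Epic. A small illustrative sketch; the Epic names and numbers below are made up for the example:

```python
# Hypothetical per-Epic data: bugs reported and development man-days spent
epics = {
    "payments":  {"bugs": 12, "dev_man_days": 40},
    "reporting": {"bugs": 3,  "dev_man_days": 25},
}

# Bug density: bugs per man-day of development work
density = {name: e["bugs"] / e["dev_man_days"] for name, e in epics.items()}

# Flag Epics whose density exceeds the average: candidates for the retrospective
avg = sum(density.values()) / len(density)
outliers = [name for name, d in density.items() if d > avg]
```

Here the "payments" Epic stands out, which would prompt the retrospective question: requirements, tests, or technical debt?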
Costs and income:
- planned vs. actual cost – typical development contracts are settled either on a time & materials or a fixed-price basis. In both cases, it's worth tracking the planned cost in man-days; this is particularly important in the fixed-price case. Here, too, deviations are undesirable. This metric is of particular interest to the management and/or the Product Owner, and it is essential for implementation projects as well.
- income per employee – income for the company per team member when the cost of a man-day is known. A valuable metric for the management team.
- Retro Process Improvement – with this metric, we check how many improvements have been reported vs. how many have been implemented.
- number of "urgents" in a sprint – the number of tickets that entered a sprint without being planned earlier. If there is a large number of urgents, the matter should be analyzed with the Product Owner.
- customer satisfaction – a customer satisfaction survey, e.g. in the form of a questionnaire. Preferably one that does not consist of 200 questions...
- number of go-lives per timeframe – e.g. during a quarter or a year. The fact that we deliver something to a customer every two weeks does not mean that what we deliver is ready for a go-live.
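The cost and income metrics above are straightforward arithmetic once the man-day figures are known. A hedged sketch; the rates, man-day counts, and team size are invented for illustration only:

```python
# Hypothetical contract figures (all monetary values in EUR)
planned_man_days = 200
actual_man_days = 230
man_day_rate = 600      # what the customer pays per man-day
man_day_cost = 450      # internal cost per man-day
team_size = 5

# Planned vs. actual cost: deviation as a percentage of the plan
cost_deviation_pct = (actual_man_days - planned_man_days) / planned_man_days * 100

# Income: margin per man-day times man-days delivered
income = actual_man_days * (man_day_rate - man_day_cost)
income_per_employee = income / team_size
```

On a fixed-price contract, a positive cost deviation like the 15% above eats directly into the margin, which is why the management watches this metric closely.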
In our daily work, we use several tools for automatic or manual data processing. These are the most common ones:
- JIRA Agile plugin
- Microsoft Excel
JIRA Agile plugin
To prepare a work backlog, you can use Jira and the Agile plugin; the latter generates many different charts automatically based on the team's daily work. Based on user stories estimated in story points, the tool can produce Burndown Charts, Sprint Reports, Velocity Charts, Version Reports, Cumulative Flow Diagrams, and several other indicators. If Jira is also used to report bugs found in client tests, the range of metrics is automatically extended by the Created vs. Resolved Issues Report and the Resolution Time Report. Thanks to all these tools, we can keep track of:
- burnt story points per sprint (Burndown Chart or Sprint Report)
- team speed (Velocity Chart)
- estimated version completion date (Version Report)
- new vs. fixed bugs per timeframe (Created vs. Resolved Issues Report)
- ticket closing time (Resolution Time Report)
Confluence can be integrated with Jira in a very simple way, which extends our ability to measure project results. Confluence also has a large plugin ecosystem that allows you to prepare interesting data sets. In our projects, we use it mostly for:
- making a product roadmap – reviewing it helps us determine what to start working on first in the near future. This translates into new user stories in the backlog.
- tracking the number of stories that meet the DoR criteria – we do it based on Jira tasks and can check very quickly whether we have enough tasks for the next sprint. Support for pivot tables in Confluence makes this very easy.
- monitoring the number of man-days ordered by the customer – customers often order a pool of man-days for the whole year, which makes it possible to plan the annual budget. In such situations, it is good to know how many days we have already used up, so that the customer is aware of when the pool will run out, or that we will not be able to use the whole pool within the specified time.
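Projecting when an ordered pool of man-days will run out is a simple burn-rate extrapolation. A sketch under assumed figures (the pool size, usage, and dates below are hypothetical):

```python
from datetime import date, timedelta

# Hypothetical annual pool of man-days ordered by the customer
pool = 240
used = 150
start = date(2024, 1, 1)
today = date(2024, 8, 1)

# Average consumption per calendar day so far
elapsed_days = (today - start).days
burn_rate = used / elapsed_days

# Extrapolate to estimate when the pool runs out
days_until_empty = (pool - used) / burn_rate
run_out = today + timedelta(days=round(days_until_empty))
```

If the projected run-out date falls well before (or well after) the end of the contract year, that is exactly the conversation to have with the customer early.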
The evergreen Excel will help you where other tools can't. We use and recommend it for tracking:
- number of new bugs per timeframe
- number of corrected bugs per timeframe
- number of man-days spent on bug fixing
- bug density
- planned vs. actual cost
- number of code lines vs. number of unit tests
- number and status of improvements identified during retrospective meetings
Data from Excel can be presented clearly during a presentation; this adds some work on the one hand, but on the other, it makes it easier for the audience to draw conclusions.
Some of the metrics above are prepared automatically; others are prepared, for example, by the Scrum Master. At the end of a sprint, during the retrospective, you should review the main metrics. Remember, however, not to flood the team with a million Excel tabs, because such a review will not lead to any conclusions or improvements. In some cases, it's enough to show the current value in relation to the previous one; in others, all it takes is to indicate the trend. If something can be presented visually, it should be, for the sake of readability.
Sprint use in various project types
We can single out three basic project types that we run:
- implementation – most often the implementation of an MVP, i.e. a basic product scope enabling the customer to run their business.
- maintenance – fixing production bugs; run most often in connection with the implementation project (if the implementation is divided into several phases) or with product development.
- development – when the product has already been implemented and is live. In most cases, this runs in parallel with maintenance.
Depending on the project type, we track different groups of metrics:
- implementation agreement: all of the following:
- work progress and planning
- quality of deliverables
- costs and income
- maintenance agreement: in this case, it’s better to use Kanban than Scrum and keep track of the following:
- quality of deliverables
- bug fixing cost vs. revenues
- development agreement: the same as for implementation
Scrum is based on an empirical approach. This means the actions and decisions of a team are based on the experience gained. In agile methodologies, tracking both individual project parameters and the software development process is very important. You can see this when you look at the three pillars of Scrum:
- Transparency – all important aspects of the process must be visible to those responsible for achieving results. This also applies to the metrics that help the team and the Product Owner make project-related decisions. Transparency also shields the team from the management, who will have all the necessary information about the current work status at their fingertips.
- Inspection – Scrum users must frequently inspect Scrum artifacts and progress toward the Sprint Goal to identify unwanted discrepancies. During the retrospective meeting, we look at what happened in the previous sprint and how our improvements have affected the process. We won't be able to check this unless we monitor the key indicators. Monitor hard data, such as team speed and the number of reported bugs, as well as the way you work.
- Adaptation – if an inspection determines that one or more aspects of the process deviate outside acceptable limits, and that the resulting product will be unacceptable, the process or the material being processed must be adjusted. Adaptation also means continuously improving the way the team works and adapting to changing conditions: changes in priorities, in technology, or in the team itself. The team is not able to improve its work if the two previous pillars are not strong.
The conclusion is simple: without continuous tracking of project and process parameters, there is no Scrum. Without information about speed in previous sprints and team capacity, we won't be able to plan the next sprint well.
It may be that your projects are run in an agile way, but your company's organization and structure are not 100% ready for it. You still have to prepare project status reports and often present them to your supervisors. Proper tracking of project parameters can make this much easier for you; the reports will practically prepare themselves.
Find out more about Scrum methodology implementation in Comarch Factoring software.
Michał Buba, Project Manager, Comarch