Delivery Cycle Retrospectives
Treno can be used to review delivery efficiency following each delivery cycle (sprint). This is primarily done by reviewing key performance indicators that show how well the delivery cycle was completed and also give insight into process effectiveness.
After every delivery cycle
A. Delivery Retrospectives
Select the desired project
Navigate to the Analyze/Delivery page
Review the following metrics (ideally on a retrospective board)
It is recommended that you create a retrospective board that contains the listed metrics
Select the desired sprint
Evaluate delivery using:
Sprint Completion %
Innovation Issues Resolved
Tech Debt Resolved
Active Contributor %
Bug Resolution Ratio
Error Resolution Ratio
Issue Points Resolved (if story points are utilized for estimation)
Coding Efficiency (not available for multi project repositories)
Review Coverage (not available for multi project repositories)
Issue Duration & Breakdown
Cycle Time & Breakdown
Share the Retrospective board/report with project team members
What to look for?
Look for attainment in these metrics and for consistency over time
Sprint Completion: < 60% is indicative of an overloaded sprint or delivery issues such as unplanned work; regardless, digging deeper is merited
If sprint completion % is consistently lower than 60%, the issue is either assigning too much work or not breaking that work down into smaller, bite-sized development tasks
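Treno surfaces this metric directly; as an illustrative sketch of the underlying arithmetic (the function and parameter names below are assumptions for illustration, not Treno's API):

```python
def sprint_completion_pct(issues_planned: int, issues_resolved: int) -> float:
    """Percentage of planned sprint issues that were resolved during the sprint."""
    if issues_planned == 0:
        return 0.0
    return 100.0 * issues_resolved / issues_planned

# A sprint that planned 20 issues but resolved only 11 lands under the 60% threshold.
print(sprint_completion_pct(20, 11))  # 55.0
```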
Issues Resolved: over time, the number of issues completed per sprint becomes consistent. If this number falls below the baseline, it merits investigation; additionally, whoever assigns tasks should be aware of the norm for issue completion
Similarly, if story points are used, sharing the number of Issue Points Resolved with the task assigner will help create better delivery cycles
It is also important to understand the % of resolved issues that were innovation vs bugs vs tech debt. Ideally, most organizations shoot for at least 70% innovation-related work
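The innovation/bug/tech-debt split is a simple share of resolved issues by type; a minimal sketch, assuming each resolved issue is tagged with one of those categories (the type labels here are illustrative):

```python
from collections import Counter

def resolved_mix_pct(issue_types: list[str]) -> dict[str, float]:
    """Share of resolved issues by type (e.g. innovation / bug / tech debt)."""
    counts = Counter(issue_types)
    total = len(issue_types)
    return {t: 100.0 * n / total for t, n in counts.items()}

# 7 of 10 resolved issues were innovation work -- at the recommended 70% mark.
mix = resolved_mix_pct(["innovation"] * 7 + ["bug"] * 2 + ["tech debt"])
print(mix["innovation"])  # 70.0
```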
Deployments: the goal of a software delivery organization is to deploy code. Unless you are focusing solely on mobile applications, an elite organization deploys multiple times a day.
Low sprint completion % may result in lower than optimal deployments
If deployments occur less than multiple times a day, a review of the QA and DevOps processes is merited, assuming that code is available to deploy.
Active Contributor %: a low contributor % means that the entire team is not helping to pull the load. After factoring out vacation time, more than 60% of the team should be active and making commits in the repository.
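Factoring out vacation time before computing the percentage matters; a rough sketch of that calculation (names and inputs are hypothetical, not pulled from Treno):

```python
def active_contributor_pct(team: set[str], committers: set[str],
                           on_vacation: set[str]) -> float:
    """Percentage of available (non-vacationing) team members who committed this cycle."""
    available = team - on_vacation
    if not available:
        return 0.0
    return 100.0 * len(committers & available) / len(available)

# 3 of the 4 available members committed -> above the 60% guideline.
team = {"ana", "bo", "cy", "di", "ed"}
print(active_contributor_pct(team, {"ana", "bo", "cy"}, {"ed"}))  # 75.0
```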
Bug / Error Resolution Ratio: creating more bugs or errors than are resolved will eventually result in more production errors or a lower percentage of resolved innovation issues
Creating bugs/errors can be indicative of an engineer who is in trouble or overloaded, or of a less-than-effective review or QA process
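The ratio itself is resolved-over-created; values below 1.0 mean the bug or error backlog is growing. A hedged sketch of that convention (the edge-case handling is an assumption, not Treno's definition):

```python
def resolution_ratio(resolved: int, created: int) -> float:
    """Resolved-to-created ratio for bugs or errors; below 1.0 means the backlog is growing."""
    if created == 0:
        # Assumed convention: nothing new created, so any resolution keeps pace or better.
        return float("inf") if resolved else 1.0
    return resolved / created

# 10 bugs created but only 8 resolved this cycle -> backlog is growing.
print(resolution_ratio(8, 10))  # 0.8
```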
Coding Efficiency: coding efficiency is not available at the project level if multiple projects are housed in the same repository. If this is the case, review code efficiency at the rollup (workspace) level.
A low coding efficiency indicates either engineers who are overwhelmed or unclear stories. Either way, deeper analysis of why multiple changes are required per code review is warranted.
Review Coverage: the % of code that is reviewed per delivery cycle is important for maintaining quality standards. Some organizations lower review coverage to increase speed, but this usually results in more errors. Maintaining review coverage of >= 70% is highly recommended.
Issue Duration / Cycle Time: steady increases in issue duration and cycle time are indicative of process issues. Review the breakdown of both of these metrics to understand where the process issues exist.
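"Steady increase" can be checked mechanically across recent cycles; a minimal sketch, assuming you have one average duration value per cycle (this helper is illustrative, not part of Treno):

```python
def is_steadily_increasing(values: list[float]) -> bool:
    """True if each cycle's value exceeds the previous one -- a sign of process drift."""
    return len(values) > 1 and all(b > a for a, b in zip(values, values[1:]))

# Average cycle time (days) over four sprints, rising every sprint.
print(is_steadily_increasing([3.1, 3.4, 4.0, 4.8]))  # True
print(is_steadily_increasing([3.1, 4.0, 3.4, 4.8]))  # False
```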
What to do?
Use these numbers across teams to normalize performance evaluation and as the basis for fostering improvement
Share a project retrospective report with the team at the end of every cycle
If Sprint Completion is low across multiple projects, the most likely culprits are: overloaded sprints, low contributor counts, larger-than-normal issues (review PR size to check), and/or bottlenecks in the review, QA, or release processes.