Agile Education Case Study
Improve Prioritization and Performance: Using Aggregated Velocity Data in Scrum@Scale
This case study explores how aggregated velocity data was used to improve the performance, prioritization, and predictability of engineering teams in a scaling tech startup. By analyzing key areas such as linearity, stability, focus factor, and the impact of deadlines, the study reveals how team velocity trends changed as the company expanded from 10 co-located members to 25 members across multiple locations. The findings show that as team size grew, velocity declined because of inter-team dependencies and less disciplined Scrum and Agile practices; aggregating the data, however, helped the teams prioritize tasks and manage the product backlog effectively. Ultimately, this approach allowed for better decision-making, balancing innovation with the demands of deadlines and long-term sustainability.
CASE STUDY SNAPSHOT
Trainer Name: Rob Frohman
Industry: Software Development
Organization Size: Small
Topic: Agile Practice, Delivery and Velocity, Distributed Teams, Driving Innovation and Creativity, Prioritization
Date: 2020
Website: https://www.co8group.com/
LinkedIn: https://www.linkedin.com/in/robertfrohman
Case Study
Summary: Improve Prioritization and Performance: Using Aggregated Velocity Data in Scrum@Scale
This case study follows Agile coach Rob Frohman on his journey working with a lean startup inside a large technology company. Within the first couple of years, the team scaled from 10 co-located members to 25 members across three geographic locations, providing an opportunity to observe how teams scale over time. Measuring linearity, stability, focus factor, and the impact of deadlines over a 2.5-year period provided valuable insights into how velocity data can be aggregated across teams to improve prioritization and performance, ultimately leading to better decision-making, transparency, and predictability.
Overcoming Challenges
As the engineering teams expanded and became distributed across different locations, they struggled to maintain consistent velocity and focus. Initially, the teams exceeded performance expectations, but over time Scrum and Agile practices became less disciplined, deadlines added pressure, and team interdependencies grew, all of which hurt productivity. The teams also had difficulty balancing product backlog work with infrastructure work and defect resolution. The challenge was to aggregate data across teams to better understand their collective velocity trends, make informed decisions, and improve overall performance.
Key Measurement Areas
Linearity
A common initial hypothesis is that increasing team size will directly increase productivity. The data collected showed that this was not the case: while the teams outperformed expectations early on, velocity decreased over time. The key reasons included greater dependencies between teams, looming deadlines, and a decline in practice rigor. The expectation that doubling team size would double work output was clearly debunked.
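A minimal sketch of how this shows up in the numbers (the figures below are hypothetical, not the actual team data): as headcount grows, track per-person velocity rather than the raw total.

    # Illustrative sketch in Python (hypothetical numbers, not the actual team data):
    # testing the "linear scaling" hypothesis by tracking velocity per person
    # as headcount grows.
    snapshots = [
        # (team size, aggregate velocity in story points per sprint)
        (10, 100),
        (15, 130),
        (20, 150),
        (25, 165),
    ]

    for size, velocity in snapshots:
        print(f"{size} people -> {velocity} points/sprint "
              f"({velocity / size:.1f} per person)")
    # Per-person velocity falls as the group grows, so doubling headcount
    # does not come close to doubling output.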
Stability
Rob knew that an individual team’s velocity can vary by upwards of 50% sprint-to-sprint, and he was curious to see what the data showed from a larger group perspective. He therefore aggregated velocities from five teams over several quarters to assess their stability. Over seven quarters, the teams consistently delivered around 1,250 story points per quarter, with only a 10% variation on average. This data proved crucial for longer-term planning. If, while planning the next quarter, the prioritized backlog contained 1,800 story points, they could confidently argue, based on historical data, that it was unachievable within the quarter. The dataset helped make that case and empowered them to shift the focus to negotiating priorities.
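To make the mechanics concrete, here is a minimal sketch, with hypothetical numbers rather than the actual team data, of aggregating velocities across teams into a quarterly capacity and flagging an over-committed backlog:

    # Illustrative sketch in Python (hypothetical numbers): aggregating velocity
    # across five teams per quarter, then checking a proposed backlog against
    # the historical capacity and its variation.
    from statistics import mean, pstdev

    quarterly_velocity_by_team = {
        "team_a": [260, 240, 255, 250, 245, 265, 250],
        "team_b": [250, 270, 240, 255, 260, 245, 255],
        "team_c": [245, 250, 260, 240, 255, 250, 248],
        "team_d": [255, 245, 250, 260, 240, 252, 250],
        "team_e": [240, 255, 245, 250, 252, 248, 247],
    }

    # Total story points delivered by the whole group in each of 7 quarters.
    quarterly_totals = [
        sum(team[q] for team in quarterly_velocity_by_team.values())
        for q in range(7)
    ]

    avg_capacity = mean(quarterly_totals)                 # ~1,250 points/quarter
    variation = pstdev(quarterly_totals) / avg_capacity   # relative variation

    proposed_backlog = 1800  # story points prioritized for the next quarter
    if proposed_backlog > avg_capacity * (1 + variation):
        print(f"A backlog of {proposed_backlog} points exceeds historical capacity "
              f"(~{avg_capacity:.0f} +/- {variation:.0%}); renegotiate priorities.")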
Focus Factor
Focus factor measured how team capacity was allocated between product backlog work, infrastructure work, and defect resolution. Roughly half of team capacity went to the product backlog, while the other half was divided between infrastructure and defect resolution. The team’s velocity fluctuated significantly sprint-to-sprint, a metric highlighted every week during their operational review. Rather than seeing this variability as a problem, Rob viewed it as an indicator of flow and innovation. This team in particular was doing very innovative work, and they wanted to see the teams push the envelope and “blow up” about once every six sprints. Rather than striving for 0% variability, they aimed to stay around 15% variability between sprints.
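As an illustration only, with hypothetical numbers, focus factor and sprint-to-sprint variability can be computed along these lines:

    # Illustrative sketch in Python (hypothetical numbers): focus factor per sprint
    # and sprint-to-sprint velocity variability against a ~15% target.
    sprints = [
        # (product backlog points, infrastructure points, defect points)
        (45, 25, 20),
        (50, 20, 25),
        (30, 35, 30),   # an "envelope-pushing" sprint where product work dips
        (48, 22, 22),
    ]

    velocities = []
    for i, (product, infra, defects) in enumerate(sprints, start=1):
        total = product + infra + defects
        velocities.append(total)
        print(f"Sprint {i}: velocity={total}, focus factor={product / total:.0%}")

    for prev, curr in zip(velocities, velocities[1:]):
        variability = abs(curr - prev) / prev
        status = "within target" if variability <= 0.15 else "expected occasional spike"
        print(f"Sprint-to-sprint variability: {variability:.0%} ({status})")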
Impact of Deadlines
Deadlines had a noticeable impact on team velocity. As major release dates approached, team velocity increased by up to 50%. Post-release, a dip occurred, and it took an average of six sprints for velocity to return to normal levels. This inconsistency in performance affected predictability and contributed to team burnout. Understanding these peaks and valleys helped Rob and the leadership team plan better and manage expectations.
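A minimal sketch, again with hypothetical numbers chosen to mirror the pattern described above, of how the pre-release uplift and post-release recovery could be measured from per-sprint velocities:

    # Illustrative sketch in Python (hypothetical numbers): measuring the velocity
    # swing around a major release from per-sprint velocities.
    velocities = [40, 42, 41, 55, 60, 30, 33, 35, 37, 39, 41, 42]
    release_sprint = 4  # index of the sprint in which the release shipped

    baseline = sum(velocities[:3]) / 3                    # pre-crunch average
    peak = max(velocities[3:release_sprint + 1])          # crunch-period peak
    print(f"Pre-release uplift: {(peak - baseline) / baseline:.0%}")

    # Sprints after the release until velocity returns to the baseline.
    recovery = next(
        (i for i, v in enumerate(velocities[release_sprint + 1:], start=1) if v >= baseline),
        None,
    )
    print(f"Sprints to recover: {recovery}")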
Key Takeaways
- Improved Predictability: Aggregating velocity data allowed teams to predict their capacity more accurately. Over seven quarters, they maintained an average velocity of about 1,250 story points per quarter with roughly 10% variation, which enabled better planning and prioritization.
- Data-Driven Decision-Making: By using historical velocity data, teams could make stronger cases for adjusting the product backlog and setting more realistic targets. The data set became an essential tool for negotiating and ensuring focus on the most important tasks.
- Focus on Flow, Not Perfection: Rather than striving for zero variability, the teams accepted some level of unpredictability as part of the innovation process. This approach allowed them to balance product development with necessary infrastructure and defect resolution work, ultimately leading to more sustainable progress.
- Understanding Deadline Impact: Teams learned to anticipate the effects of upcoming releases, adjusting their expectations and resource allocation to avoid burnout and maintain long-term productivity.
Conclusion
This case study highlights the value of aggregated velocity data in managing and scaling engineering teams. By focusing on key metrics like linearity, stability, focus factor, and impact of deadlines, teams were able to improve prioritization and performance. Rob’s experience underscores the importance of using velocity data as a tool for transparency and decision-making, rather than as a rigid goal to achieve. This approach enabled teams to maintain innovation, meet deadlines, and reduce burnout, offering a blueprint for success in a Scrum@Scale environment.
About Rob Frohman
Rob Frohman is an enterprise Agile coach and practitioner specializing in organizational transformation. He has broad experience in software and product development, from individual teams to start-ups to the enterprise level. For Rob, it’s about finding the pragmatic balance between people and process to develop elegant and valuable business solutions, while having a little fun at the same time.
More Scrum@Scale Case Studies
Enhancing Agile Workflows with Shorter Planning Cycles
Remote Startup Success: From Firefighting to Results