Scrum teaches working at a sustainable pace within sprints to increase predictability. Part of this is to manage risk while progressing to a Product Goal using empiricism (learning by doing). A common challenge teams face is to effectively forecast progress to a goal when working within a complex environment.
It’s important to caveat this post with a reminder that although forecasting progress to a goal can be useful, it doesn’t replace empiricism and the 3 pillars of Scrum: Transparency, Inspection, and Adaptation. This is because things change when working in complex environments. Therefore, forecast if you choose to, but don’t be bound to your plan.
3 recent examples of forecasting IRL
Recently, a Scrum Team I work with achieved a goal, working alongside wider business teams to deliver some new features to a large user base. At the start of 2023, the Product Owner discussed with me the challenges they faced in forecasting completion of these features, having had to move the go-live date back more than once. They were attempting to use velocity to estimate their go-live, which was proving ineffective. I offered 2 alternative measures, based on the data available, so they had more information to use when creating their forecast.
- Service Level Expectation
- Burndown Chart Forecast Cone
In this scenario, story points were used to predict the velocity of the team in order to forecast progress to the goal. Using story points as estimates, and comparing completed points against planned points, a potential release date was forecast.
Team Velocity = Total Story Points Completed Per Sprint / Number of Sprints
The pitfall of this approach is not only trying to predict the size of a Product Backlog Item (PBI) using an arbitrary number, but also attempting to translate that number into time. This introduces too large a margin for error into something you want to be accurate. It is a risky approach to forecasting and, in this example, it was far out on 2 separate occasions.
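As a minimal sketch of the calculation (using made-up sprint and backlog numbers, not the team's real data), the velocity forecast above boils down to:

```python
import math

# Hypothetical story points completed in each of the last 5 Sprints.
completed_per_sprint = [21, 34, 18, 27, 25]

# Team Velocity = Total Story Points Completed / Number of Sprints
velocity = sum(completed_per_sprint) / len(completed_per_sprint)  # 25.0

# Translate points into time: the risky step described above.
remaining_points = 120  # assumed story points left before the goal
sprints_needed = math.ceil(remaining_points / velocity)  # 5 Sprints
```

Note how the whole forecast hinges on arbitrary numbers (the point estimates and their average), so small estimation errors compound into weeks of drift.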
It’s important to note, I am not against estimates or story points if they are used for discussion and to bring clarity and understanding to a PBI for the whole team. However, in my experience, I haven’t seen them translate well for forecasting purposes.
Service Level Expectation (SLE)
A Service Level Expectation (SLE) is a metric taken from Kanban. It uses Cycle Time (the elapsed time taken to complete a PBI) to determine the probability that a PBI will be completed within a given number of days. It can be presented in a sentence, like so:
Based on the last 60 days, you have an 80% probability of completing a PBI in 21 days or less.
This SLE can then be used to forecast progress to the goal based on what is known in that moment (remember, there is always the potential for change). This statement could look like this:
The remaining PBIs will have an 80% chance of being completed in 14 weeks (w/c 27th March 2023).
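A rough sketch of how an SLE like the one above can be derived from historical cycle times (the figures below are illustrative, not the team's actual data):

```python
import math

# Hypothetical cycle times (in days) for PBIs completed in the last 60 days.
cycle_times = [5, 8, 12, 9, 7, 21, 14, 16, 11, 19]

def nearest_rank_percentile(data, pct):
    """Return the value at the given percentile using the nearest-rank method."""
    ordered = sorted(data)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# "80% of PBIs finish in <= sle days"
sle = nearest_rank_percentile(cycle_times, 80)
print(f"80% probability of completing a PBI in {sle} days or less")
```

Extrapolating from the per-item SLE to a goal date (the "14 weeks" statement) additionally depends on how many PBIs remain and how many are worked in parallel, which is why the forecast needs revisiting as conditions change.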
There are a couple of problems with this approach: 1) the appetite of the business when presented with a probability (imagine their reaction if it's 50%); and 2) the sustainability of the team. For example, this team works in 14-day Sprints, and an SLE of 21 days is obviously longer than that.
In Scrum we want teams to be delivering usable increments of product every sprint, sustainability reduces risk. Without sustainability, predictability is close to impossible. In this scenario, the data showed improvements were needed with the team and, fortunately, the Team was already working on them.
Burndown chart forecast cone
The burndown chart forecast cone is similar to the SLE in that it doesn't use estimates, just completed work. However, it is slightly cruder in how it is calculated: it runs a straight line through the middle of the burndown graph, based on the average pace at which PBIs are completed.
You’ll see that we can add in thresholds here as well, a best case (blue line) and worst case (green line). This enables us to caveat any estimates with stakeholders that things could be better or worse than the prediction as we learn more (Empiricism FTW!).
I made this one in Miro, the trick being to ensure the spaces between the dots (PBIs) and columns (week commencing date) are equal on each axis. The prediction was made on the w/c 9th January 2023.
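A minimal sketch of the arithmetic behind the cone, using invented weekly burndown figures and an assumed ±25% band for the best and worst case lines (the real chart's thresholds may differ):

```python
# Hypothetical PBIs remaining at the start of each week (w/c dates on the x-axis).
remaining_by_week = [30, 27, 25, 21, 18]

# Average pace: PBIs completed per week so far.
weeks_elapsed = len(remaining_by_week) - 1
pace = (remaining_by_week[0] - remaining_by_week[-1]) / weeks_elapsed  # 3.0 PBIs/week

# Central forecast: run the average-pace line down to zero remaining.
central = remaining_by_week[-1] / pace  # 6.0 more weeks

# Cone thresholds (assumption: best/worst pace is +/-25% of the average).
best_case = remaining_by_week[-1] / (pace * 1.25)   # ~4.8 weeks
worst_case = remaining_by_week[-1] / (pace * 0.75)  # 8.0 weeks
```

Drawing all three lines forward from today gives the cone: the gap between best and worst case widens the further out you project, which is an honest picture of the uncertainty.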
So what happened?
The goal was achieved on the 23rd March 2023, and my forecasts fell either side of it: the week after for the SLE, and the week before for the burndown chart forecast cone. Overall, this wasn't too bad, and there wasn't a story point in sight. These forecasts would certainly have become more accurate had they been updated as more work was completed. They also didn't account for a developer leaving and a new one joining in February. Remember, things change when working in complex environments.
A final thought: none of these forecasting methods is inherently better than the others. In this scenario, due to a lack of sustainability in development, working off a PBI completion count rather than story points was simply a better data point. Additionally, a forecast can't account for the future; things change. As the Scrum Guide suggests, forecasts do not replace the importance of empiricism.