Remove Date-Driven Behavior to Achieve Agility—Forecasting Dates Amidst Uncertainty


Forecast using uncertainty to aid in decision-making.

The fear of failing to meet a due date can cause us to act in an overly cautious manner and encourages wasteful date-driven behavior. Because of this, if we do not have a real date constraint, we should avoid manufacturing one. Dates do not motivate; they only get in the way of delivering value expediently.

However, there is typically a desire to know how long it will take to develop and release a particular software feature or product version. A forecast can aid in decision-making. Because of this, it is worth producing a delivery forecast despite the inherent uncertainty in predicting a delivery date for software.

The three prior posts on date-driven behavior provide context for the difficulty of predicting dates in the uncertain world of software development:

  • Introduction: The first post introduces the perils of date-driven behavior, illustrating how it results in waste and inhibits agility and why you should, therefore, minimize it.
  • Learn More By Doing: The second post focuses on replacing fear with safety and learning more by doing versus planning and designing.
  • Focus on “Done”: The third post shows how focusing on getting to “Done” with high stakeholder and user engagement beats driving towards a date.

This fourth post shows how development teams can use evidence and assessments of uncertainty to forecast an expected delivery timeframe for use in decision-making.


Corrective Pattern: Forecasting Dates Amidst Uncertainty

As the prior posts in the series have shown, date-driven behavior can introduce significant waste and get in the way of delivering the “Right” product by inhibiting learning and reducing quality.

However, businesses need to understand the investment needed to develop and release a software feature or a new version of a product. Businesses invest real money in the development of new software. As such, understanding a forecasted delivery timeframe is often necessary. If used correctly for strategic decision-making, forecasts can support healthy delivery team behavior. However, we must be careful to use the forecast only as a decision support mechanism and avoid falling victim to wasteful date-driven behavior.

This pattern allows you to safely predict a date amid the uncertainty present in all software efforts. The date forecast pattern is defined by a set of rules and a method for forecasting delivery dates amidst uncertainty.

Rules

Before we describe the method of forecasting amidst uncertainty, the ground rules for forecasting dates in software must be considered.

Rule 1: Always Use a Range

As soon as we provide an exact date for something, we create an illusion of precision. It creates a false belief that we know exactly what needs to be done and exactly when we will complete it. While an exact date might create (falsely) a sense of comfort, as we have shown in the prior date-driven behavior posts, the uncertainty in software development does not allow for this level of precision.

To reflect the uncertainty of software development, always provide a date range around your forecasted date. This has two benefits—1) it reveals there is uncertainty and 2) it opens the possibility of change and variation.

The only time it makes sense to provide an exact date in software is if you have already developed, tested, and deployed the software, and it is ready for immediate use.

Rule 2: Update the Forecast Every Sprint

The pressure to provide a date is greatest before work has started, yet that is the worst time to provide one, because the least knowledge has been gained. Delivery progress and feedback are what build that knowledge.

While the forecast can be created before the work begins, it is least reliable at that point. We must update the delivery forecast every sprint. There are no baselines; rather, we use the changing reality on the ground and new knowledge to continually update our projections.

Rule 3: Use Evidence Over Prediction

Rule 3 goes hand in hand with rule 2. Since the least evidence of reality is at the beginning of the work, it is wasteful to spend too much time forecasting before work starts. Rather, start the work and use the evidence produced through delivery to inform a more reliable forecast as delivery progresses.

For instance, a new team’s velocity will not be known until they begin work and start to normalize as a team. While we can attempt to predict the team’s velocity before work begins to get an idea of when the work will complete, it is only a guess and is extremely unreliable. The team must work for several sprints until their actual velocity begins to stabilize before we can use it as a reliable predictor for the future.

Also, the scope we identify for achieving a particular target outcome is a guess at the beginning of a release. Only through user feedback on delivered work can we confirm that what we planned will deliver the right product. As we deliver and gather user feedback, we become more certain that the scope defined for the release is the scope users need.

We also learn how difficult the work is as we deliver. This will help improve our estimates for the remaining backlog items.

Rule 4: Only Use Forecasted Dates for Decisions

Posts in the series leading up to this one have focused on avoiding dates due to the wasteful date-driven behaviors that emerge. If we forecast a date range for a software effort as described in this post, we must not slip into date-driven behavior and create an unhealthy focus on delivering by the forecasted date.

The team and its stakeholders should use the forecast to make strategic decisions only. Some example decisions might be:

  • The delivery team uses a forecast date range to determine if their release is too large, allowing them to simplify their approach or shift to other feature options.
  • The software must be available for a conference that has a fixed date. The continuously updated forecasted date range will help validate if the current scope and approach will meet the conference date. If the forecast does not meet the date, the team can decide to reduce or simplify scope.
  • The Product Owner knows that delivering past a certain date will miss the market opportunity to gain a competitive advantage. A frequently groomed forecast date range will help determine strategies for continued investment.
  • The stakeholders are only willing to invest a certain amount of money in a software effort. The continually updated forecast date range will allow them to determine if a continued investment is prudent.

The Forecasting Method

Every sprint, the delivery team will perform the following steps to update the forecast date range.

Step 1: Size the Release

At the beginning of a release and during delivery, the delivery team should refine its understanding of the size of the release. There are many techniques for doing this, including planning poker and affinity sizing for release backlog items. The goal is to size release backlog items relative to one another in the same unit of measure used to calculate team velocity. The unit of measure is usually story points, typically based on a modified Fibonacci sequence.

The release backlog should be sized any time the team acquires knowledge that refines understanding of the size of the release backlog items or when introducing new backlog items into the release. A good habit is to assess the need to resize the release backlog at the beginning of each sprint during backlog refinement.

Step 2: Assess Release Scope Uncertainty

In each sprint, the team should consider the level of uncertainty on the scope of the release. Release scope uncertainty considers the following dimensions:

  • Scope needed to deliver the “Right” product: The team should reflect on their level of certainty that the identified scope in the release backlog will achieve the target outcome. This uncertainty will be highest before work starts and lowest as we near release, assuming we have a frequent user feedback loop on sprint output as it is delivered.
  • The size of the current backlog: The team should consider their comfort in their ability to size the current backlog scope. If the team has delivered this type of scope before, they will have higher certainty about the size of the current backlog. If they have not delivered this scope before or are not familiar with the technology in play, the uncertainty will be higher. As the team delivers the scope for the release, their estimation accuracy for the remaining items will increase.

Before work begins, the high level of uncertainty requires a wide range around the scope identified in the release backlog. As the team delivers the work, the range narrows. This is typically referred to as the “Cone of Uncertainty,” as described by Steve McConnell1. See Figure A for a depiction of the “Cone of Uncertainty.”
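A minimal sketch of this idea in code (the point values and uncertainty bands below are hypothetical, and a symmetric band is a simplification of McConnell’s asymmetric cone):

```python
def scope_uncertainty_range(sized_points, band):
    """Widen the sized release backlog by the current uncertainty band.

    band is a fraction reflecting the team's position on the cone of
    uncertainty, e.g. 0.5 before work starts, narrowing toward 0.1
    as user feedback confirms the scope.
    """
    return sized_points * (1 - band), sized_points * (1 + band)

# Hypothetical: a 200-point backlog early in the release
low, high = scope_uncertainty_range(200, 0.5)   # (100.0, 300.0)
```

As the team delivers and the band shrinks, re-running the same calculation each sprint yields a progressively tighter scope range.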

Figure A: Quantifying Risk with the Cone of Uncertainty

Step 3: Forecast a Velocity Range

For a new team that has not started delivery, calculating velocity is difficult. Don’t spend too much time trying to estimate it. The best way to forecast velocity is to begin work and use actual velocity to project the future. For established teams, velocity is easier to calculate because an abundance of historical data is available.

A solid mechanism for forecasting velocity is to use a pattern called “Yesterday’s Weather”2. Just like a meteorologist uses yesterday’s weather to predict today’s weather, the team will use recent performance to predict future performance.

Typically, the average of the last three sprints’ actual velocities is a good indicator for forecasting future velocity. If the last three sprints include abnormalities, such as sprints with holidays or sprints where velocity spiked or dropped unexplainably, those sprint velocities should not factor into the average.

To account for velocity instability, place a range around the average: a wider range for a new team, a narrower range for a more established one.
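This can be sketched in a few lines of code (the sprint velocities and spread values below are hypothetical examples, not prescribed figures):

```python
from statistics import mean

def forecast_velocity_range(recent_velocities, spread):
    """Forecast future velocity using the "Yesterday's Weather" average.

    recent_velocities: actual velocities from recent normal sprints
      (abnormal sprints, e.g. holiday sprints, excluded beforehand).
    spread: fractional band around the average; wider for a new team
      (e.g. 0.25), narrower for an established one (e.g. 0.10).
    """
    avg = mean(recent_velocities)
    return avg * (1 - spread), avg * (1 + spread)

# Hypothetical established team, last three sprints:
low, high = forecast_velocity_range([21, 24, 23], spread=0.10)
```

Re-running this each sprint with the latest three normal sprints keeps the velocity forecast grounded in evidence rather than prediction.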

Step 4: Predict a Delivery Date Range

Once steps 1–3 are performed, calculating a delivery date range is a straightforward, mechanical exercise. Typically, a release burn-up chart is used to calculate and visualize the delivery date range. Figure B illustrates a sample release burn-up that incorporates uncertainty.

Figure B: Release Burn-up With Uncertainty

In the sample burn-up of Figure B, the team is in its 4th sprint. The known backlog size increased during sprint 1 and decreased during sprint 2.

The team applied a large uncertainty range around the known backlog size for sprints 1 and 2. The team gained knowledge in sprint 2 that allowed them to reduce the scope uncertainty range in sprint 3 and this range is holding steady in sprint 4.

The team’s actual velocity in sprints 1, 2, and 3 has allowed them to formulate a forecasted velocity range.

Using the scope uncertainty range along with the forecasted velocity range, the team can calculate a forecasted delivery range of between 4 and 11 sprints to meet its target outcome for the release.

Achieving the target outcome in 4 sprints assumes that not all of the known scope is necessary to meet the target outcome and that the team is operating at the high end of its velocity forecast. Delivering at the high end of the range, 11 sprints, assumes that the high end of the scope range is necessary to meet the target outcome and that the team is operating at the low end of its velocity forecast.
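The arithmetic behind a 4-to-11-sprint forecast like the one above can be sketched as follows (the completed-work, scope, and velocity numbers are made up to produce a similar result; they are not read from Figure B):

```python
import math

def forecast_sprint_range(scope_low, scope_high, vel_low, vel_high, done=0):
    """Compute a best/worst-case sprint count for the remaining work.

    Best case: the low end of the scope range delivered at the high end
    of the velocity range. Worst case: the high end of the scope range
    delivered at the low end of the velocity range.
    """
    best = math.ceil(max(scope_low - done, 0) / vel_high)
    worst = math.ceil(max(scope_high - done, 0) / vel_low)
    return best, worst

# Hypothetical: 80 points delivered, scope range of 160-300 points,
# forecast velocity range of 20-25 points per sprint
best, worst = forecast_sprint_range(160, 300, 20, 25, done=80)  # (4, 11)
```

Multiplying the sprint counts by the sprint length then converts the result into a forecasted calendar date range.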

This release burn-up with uncertainty allows the team to forecast a date range based on velocity evidence, a current understanding of scope, and an assessment of uncertainty.


Conclusion

Giving a delivery date upfront for software development is a bad idea. However, it is a good practice to continually forecast how long the software delivery will take based on current facts. A calculated delivery range will provide you with key data to make decisions on how to proceed.

Stay tuned for future posts in the series on how to break the date-driven behavior anti-pattern.


Reference

  1. Software Estimation: Demystifying the Black Art, Steve McConnell, 2006
  2. Published Pattern: Yesterday’s Weather, ScrumPLoP Published Patterns Library
