Friday, December 21, 2012

The Value of an Estimate as the Value of Information


The claim of this post is that estimates are information that (should) reduce uncertainty about decisions with economic consequences, and thus they (should) have value.

Here is why.

A mantra of any Agile method is that you need to produce value, and you have to do it as soon as possible. A new mantra is that estimates do not provide any value, so you should drop them.
Why?
If doing an activity will take time T, and you add the time t needed to estimate T in advance, then the total time will be T+t instead of T. Thus, given that spending time T+t costs more than spending only time T, you should not estimate.

There is an argument for estimating instead:
Suppose that a stakeholder needs to select between two features F1 and F2 with values V1 and V2.
If their costs C1 and C2 are not the same, then she can decide on a trade-off, for example by comparing V1/C1 and V2/C2 (max value, min cost).
The cost is the time, of course.

In practice she does not know the values V1 and V2 exactly, nor the costs C1 and C2.
She may be better than the technical team at "estimating" the difference between V1 and V2, but in the same way the technical team should be better than her at "estimating" the difference between C1 and C2.

So it is rational to try to maximize V/C by having a discussion with the team about how C1 and C2 compare.
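
As a side note, here is a minimal Python sketch of that trade-off. All the numbers (values and costs) are invented purely for illustration:

# Toy comparison of V1/C1 vs. V2/C2; every number here is hypothetical.
features = {
    "F1": {"value": 8000, "cost_weeks": 2},   # V1, C1
    "F2": {"value": 12000, "cost_weeks": 6},  # V2, C2
}

# Pick the feature with the best value-per-week ratio.
best = max(features, key=lambda f: features[f]["value"] / features[f]["cost_weeks"])

for name, f in features.items():
    print(name, "V/C =", f["value"] / f["cost_weeks"], "per week")
print("pick", best, "first")
# F1 gives 4000 per week, F2 gives 2000 per week, so F1 goes first,
# even though F2 has the higher absolute value.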

So the estimation provides value because it determines actions that have economic consequences (and the difference between those consequences is the value of that information).

All this simply means that although estimating has a cost (t), it conveys value (the value of information), like any other team activity.
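
To make the accounting concrete, here is a hedged sketch in the spirit of Hubbard's book; the figures are made up, and "outcome" just stands for the economic consequence of whichever decision is taken:

# Value of information = outcome of the decision taken with the estimate,
# minus outcome of the decision that would have been taken without it.
# All numbers are hypothetical.
outcome_without_estimate = 2000   # per week: picked F2 first by gut feeling
outcome_with_estimate = 4000      # per week: the estimate revealed F1 was cheaper
cost_of_estimating = 200          # the time t spent estimating, in the same units

value_of_information = outcome_with_estimate - outcome_without_estimate
net_value = value_of_information - cost_of_estimating
print("value of information:", value_of_information, "net of cost t:", net_value)
# If the estimate would not have changed the decision, the value of
# information is 0 and the cost t is pure waste.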

(Yes, but the economic consequence of the order in which you pick F1 or F2 could be less valuable if both F1 and F2 must be done for sure, and nobody is going to use the product before both features are done. That's another story. If you think this way about all the "already known" features F1, ..., Fn of a product, then all of this becomes a little inflexible, unrealistic, and not... Agile anymore.)

(book: How to Measure Anything, Douglas W. Hubbard)



8 comments:

Tonino Lucca said...

I received some interesting questions and feedback on Twitter.
"I think your argument doesn't take into account the uncertainty in estimates"

Answer: I don't agree. I think it does.

If I am the P.O. and the team gives an estimate for a feature, for example choosing between S, M, L, or XL, then there is uncertainty reduction if their answer is likely to be better than mine. This information has value if it gives me options when I want to select the feature that has the better chance of maximizing the Value over the Cost.

Another question: "What type of uncertainty do you have in mind?"

Uncertainty here is something like "the width of a 90% confidence interval".
In the example of the sizes S, M, L, XL:
as the P.O., I am 90% sure that this feature is between S and L.
The team may reduce my uncertainty by claiming, after estimating, that the feature is 90% likely an S.
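
Here is a rough Python sketch of that reduction. The probabilities and the cost assigned to each size are assumptions I am making up just to show the mechanics:

# Assumed cost, in weeks, of each t-shirt size.
size_weeks = {"S": 1, "M": 2, "L": 4, "XL": 8}

# P.O. before the team's estimate: 90% sure the feature is between S and L.
prior = {"S": 0.30, "M": 0.30, "L": 0.30, "XL": 0.10}
# After the team's estimate: "the feature is 90% an S".
posterior = {"S": 0.90, "M": 0.06, "L": 0.03, "XL": 0.01}

def expected_cost(dist):
    return sum(p * size_weeks[s] for s, p in dist.items())

print("expected cost before:", round(expected_cost(prior), 2), "weeks")     # 2.9
print("expected cost after:", round(expected_cost(posterior), 2), "weeks")  # 1.22
# The narrower distribution changes the expected cost, and with it the
# V/C ranking I use to decide which feature to pick first.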

However, this information will not give me any value if I still have constraints that make me unable to pick the feature that maximizes V/C anyway. For example, maybe feature B depends on A, so I still have to pick A first no matter what (or nobody is going to use the product before both A and B are released, or there is no point in choosing between A and B because they must both be done for sure, as I already wrote in the post, and so on...)

About a "procedure" to measure the amount of "uncertainty reduction" provided by estimating: it's just a matter of doing some statistics. I'd like to suggest some "visual tools" here, based on some tricks we experimented with in the past (called "prisoner metrics"). I will probably write a new post about them.

Another question: "How long into the future do you estimate?" I probably already gave a wrong answer because I didn't understand the question in the first place.
Probably the meaning of the question is: when will you stop doing estimates?
Basically the answer is: when you realize that they don't provide any value.
This post is just about the value of "estimates" as information that reduces your uncertainty and so also gives you the opportunity to take actions that have economic consequences (that have value).
However, the "value of estimating" issue is still unresolved, in the sense that estimating also gives the team the opportunity to discuss and better understand some features. This is another value that needs to be taken into account, but it is probably a little harder to measure.
However, I do believe that it is possible to "measure anything", as the book I mentioned says. :-)

Unknown said...

Estimation suffers from the fallacy of prediction: we believe we can predict the future.
This fallacy leads us to plan way too far into the future, which is why I asked how long into the future you use estimation.
Put these two together: (1) we convince ourselves that we are right when we estimate -- research shows otherwise -- and (2) we estimate far too far into the future (say, more than 2 weeks). Together they mean that estimation is wasteful, and in most cases it actively destroys value.

Example 1: you estimate something as an S (to be worked on 4 weeks from now). When you start working on it, you find it is actually an XL because of a legal implication none of the *devs* knew about. You now have a problem: do you continue working on it (the cost is now much higher), or do you stop and lose all the functionality needed to make that feature work?

My view is that we should not estimate beyond breaking things down for the next 2 weeks or so (into smaller-than-1-day items), and develop, not estimate, your way to success ;)

Unknown said...

Estimation has 2 critical problems:
1) We convince ourselves that we can estimate and make decisions based on those estimates (research shows we are pretty bad at estimating)
2) We typically estimate far too far into the future (say more than 2 weeks)

These problems together mean that we often make the wrong decisions and suffer from fragile projects (we bet too much on our predictive powers).

Example 1: you estimate something to be an S that will be worked on 4 weeks from now. When you actually start working on it, you find that the work is more like an XL because there's a legal implication that no dev *could* know about. Now you have a problem: do you continue to work on that feature (which is now much more expensive than you thought and had planned for)? Or do you drop it and lose all the preparation work you did over those 4 weeks?

These cases are extremely common. My view is that estimation is waste. Break down items into small enough chunks (say about 1 day of work) and never more than 1-2 weeks into the future. Prioritize on value, and test that value with concrete experiments (à la LeanStartup). Estimation is waste.

Tonino Lucca said...
This comment has been removed by the author.
Tonino Lucca said...

Hi again Vasco,
I agree that we should not plan too far into the future, not only because we may make very bad guesses about how long any feature will take, but also because we don't know which features we will need in the future.

However, I believe that we can learn how to improve estimates, for example because:

1) we can improve our ability to determine a 90% interval for an unknown value;
2) we can also learn how to reduce the width of such an interval (in a field where we are competent);
3) we can avoid the mental traps that make us really bad at thinking and talking, and so at everything that involves communication, including estimating (think of the "wisdom of the crowd" as an example showing that a crowd can be really good when there is sufficient variety in its opinions about some unknown value, but the crowd needs to talk).

So in my opinion we can improve the quality of estimates, but they may still be valueless from an economic point of view.
Even perfect estimates can be valueless if they have no impact on decisions that have economic consequences.

In my original post I just said that the opposite can also be true: even if they are really bad, estimates can still have value if they determine actions that have positive economic consequences.


So whether it is worth estimating depends on a mix between the quality of the estimates and their impact in terms of the economic consequences of actions.
---

Besides this "explanation" of my point, let me go back to one of your examples:
"When you actually start working on it, you find that the work is more like an XL because there's a legal implication that no dev *could* know about."

I think that to reason about this, it is necessary to determine the extent of our uncertainty across _all_ the features of the product, not just one feature.

We may find in hindsight that, on average across all the features, our 90% confidence intervals actually spanned three t-shirt sizes.

It may be an issue if we establish that we can no longer afford XL features, so we need to figure out how to spot those XL features in advance. A possible solution to experiment with may be having a domain expert on board: he can help us understand how to spot XL features in advance, because he is more likely to know implications that nobody else could know ahead of time.
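
The hindsight check itself is easy to sketch in Python; the history below is invented data, just to show the procedure:

sizes = ["S", "M", "L", "XL"]

# Each record: (interval claimed with 90% confidence, actual size observed).
history = [
    (("S", "L"), "M"),
    (("S", "M"), "S"),
    (("M", "L"), "XL"),  # a miss: the legal-implication kind of surprise
    (("S", "L"), "L"),
    (("S", "M"), "L"),   # another miss
]

def in_interval(interval, actual):
    low, high = interval
    return sizes.index(low) <= sizes.index(actual) <= sizes.index(high)

hits = sum(in_interval(iv, actual) for iv, actual in history)
print("claimed confidence: 90%, observed hit rate:", hits / len(history))
# 3/5 = 60% here: overconfident, in line with the research numbers
# quoted elsewhere in this discussion.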

Unknown said...

There is value in estimation; however, I don't think Estimation as a practice is the way to get to that value.

Here are some arguments why:
1) For any non-trivial piece of work our range is not even close to 90%. Think about the error in estimation: from empirical research we know that estimation errors follow a skewed normal distribution (at best) or a power law (at worst). Research shows that when estimators believe the estimation *interval* is 90% accurate, in reality it is only 60-70%: "The strong over-confidence in the accuracy of the effort estimates is illustrated by the finding that, on average, if a software professional is 90% confident or 'almost sure' to include the actual effort in a minimum-maximum interval, the observed frequency of including the actual effort is only 60-70%." (http://en.wikipedia.org/wiki/Software_development_effort_estimation)

2) Your argument about "reducing the range of such an interval" has more to do with the complexity/predictability of a certain task. Without going into too much detail: any task that involves 2 or more people is going to be too complex for anyone to estimate accurately.

3) The "wisdom of the crowds" was described in Wikinomics as applicable only when certain conditions are met. In a team we fail the independence test: http://en.wikipedia.org/wiki/The_Wisdom_of_Crowds#Four_elements_required_to_form_a_wise_crowd

You do have something interesting in your comment (probably derived from practice): that estimates can have some value if they influence actions that have some positive economic consequences. This is true; however, estimates are not the most effective way to uncover risks, for example.

Some companies are experimenting with "markets" where people can bet for or against a particular project; this is a way to harvest the wisdom of crowds that does not fall prey to the problems built into estimates (the lack of independence).

In summary I'd say: estimates are waste because they can't be accurate (unless the task at hand is easy enough, in which case estimates are overkill), and the information they provide would be better found in other ways (like risk analysis or brainstorming sessions, for example).

Tonino Lucca said...

Yes, there are many options other than estimates for, say, understanding the risks, planning, etc...

However, my point about the value of estimates was just... it depends.
Even bad estimates can be really useful.
Example: "My estimate is that to do X, using technology T1 will take between 1 week and 4 weeks, and using technology T2 will take between 1 month and 3 months."
They are both bad estimates, because the intervals are huge and because they are likely no more than 60-70% accurate, instead of the 90% that I could claim (if my ability to estimate intervals is average).

Those "estimates" are still useful enough, because the alternatives are
spending between 1 and 4 weeks (with probability 60-70%) or between 4 and 12 weeks, and the rational choice is still the first one, using the information/estimates that we have, no matter how bad they are.
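
A small Python sketch of why even these bad estimates support the decision; the intervals are the ones from my example, everything else is assumed:

# Claimed 90% intervals (in weeks), perhaps only 60-70% reliable in reality.
t1 = (1, 4)    # do X with technology T1
t2 = (4, 12)   # do X with technology T2

def midpoint(interval):
    low, high = interval
    return (low + high) / 2

print("T1 midpoint:", midpoint(t1), "weeks; T2 midpoint:", midpoint(t2), "weeks")
# The intervals barely overlap: T1's upper bound (4) does not exceed T2's
# lower bound (4). Whatever the real coverage of the intervals, T1 is very
# unlikely to be the slower choice, so even the rough estimate is enough
# to decide -- that is exactly its value as information.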

Using the same example in a very "un-agile" situation: estimating can be a waste of time when you have to do X with technology T1 no matter what. (Estimates will not change any decision, so the extra time spent estimating is a total waste.)

For the rest, I mostly agree with you. Nevertheless, being able to determine a better confidence interval for an unknown value is still a skill that can be learned, at least according to "How to Measure Anything".

See also "reference class forecasting" http://en.wikipedia.org/wiki/Reference_class_forecasting or "megaproject" http://en.wikipedia.org/wiki/Megaprojects, as examples of research and tools related to removing optimism bias in forecasting.


Thanks again.

Tonino Lucca said...

Let me rephrase this: "spending between 1 and 4 weeks (with probability 60-70%) or between 4 and 12 weeks, and the rational choice is still the first one, using the information/estimates that we have, no matter how bad they are."

->

the probability of spending 1-4 weeks to do X using T1 is 60-70%;
the probability of spending 4-12 weeks to do X using T2 is 60-70%.