Tip of the Month: February 2013
Some product
developers observe that failures are almost always present on the path to
economic success. "Celebrate failures," they say. Others argue that
failures are irrelevant as long as we extract knowledge along the way.
"Create knowledge," they advise. Still others reason that, if our real
goal is success, perhaps we should simply aim for success. "Prevent
failures and do it right the first time," they suggest. And others assert
that we can move beyond the illusion of success and failure by learning from
both. "Create learning," they propose. Unfortunately, by focusing on
failure rates, or knowledge creation, or success rates, or even learning we miss
the real issue in product development.
In product
development, neither failure, nor success, nor knowledge creation, nor learning
is intrinsically good. In product development our measure of
"goodness" is economic: does the activity help us make money? In
product development we create value by generating valuable information
efficiently. Of course, it is true that success and failure affect the
efficiency with which we generate information, but in a more complex way than
you may realize. It is also true that learning and knowledge sometimes have economic value, but this value does not arise simply because learning and knowledge are intrinsically "good." Creating information, resolving uncertainty, and generating new learning only improve economic outcomes when the cost of creating this learning is less than its benefit.
In this note, I
want to take a deeper look at how product development activities generate
information with economic value. To begin, we need to be a little more precise about what we mean by information. The science of information is called information
theory, and in information theory the word information has a very specific
meaning. The information contained in a message is a measure of its ability to
reduce uncertainty. If you learn that it snowed in Alaska in January, this message contains close to zero information, because this event is almost certain. If you learn that it snowed in Los Angeles in July, this message contains a great deal of information, because this event is very unlikely.
This relationship between information and uncertainty lets us quantify information. The information contained in an event that occurs with probability $P$ is:
$$\text{Information} = \log_2\!\left(\frac{1}{P}\right) = -\log_2 P$$
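To make this formula concrete, here is a minimal Python sketch of the two weather messages above; the probabilities are invented for illustration, not measured:

```python
import math

def information_bits(p):
    """Information (in bits) carried by learning that an event of probability p occurred."""
    return -math.log2(p)

# Probabilities are assumed for illustration, not measured:
p_snow_alaska_jan = 0.999  # snow in Alaska in January: almost certain
p_snow_la_july = 0.001     # snow in Los Angeles in July: very unlikely

print(f"{information_bits(p_snow_alaska_jan):.4f} bits")  # ~0.0014 bits: close to zero
print(f"{information_bits(p_snow_la_july):.4f} bits")     # ~9.9658 bits: a great deal
```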
As we invest in product development, we generate information that resolves uncertainty. We will create economic value when the activities that generate information produce more benefit than cost. Since our goal is to generate valuable information efficiently, we can decompose our problem into three issues:
- How do we maximize the amount of information we generate?
- How do we minimize the cost of generating this information?
- How do we maximize the value of the information we generate?
Let’s start with the first issue. We can maximize information generation
when we have an optimum failure rate. Information theory allows us to determine
what this optimum failure rate is. Say we perform an experiment that may fail with probability $P_f$ and succeed with probability $P_s$. The information generated by our experiment is a function of the relative frequency with which we receive the message of success or failure, and the amount of information we obtain when failure or success occurs. We can express this mathematically as:
$$\text{Information Generated by a Test} = P_s \log_2\!\left(\frac{1}{P_s}\right) + P_f \log_2\!\left(\frac{1}{P_f}\right)$$
We can use this equation to graph information generation as a function of failure rate.
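If you want to check the shape of this curve yourself, a few lines of Python (a minimal sketch; the sample failure rates below are arbitrary) evaluate the equation above:

```python
import math

def bits_per_test(p_fail):
    """Information generated by a binary test that fails with probability p_fail,
    per the equation above; p * log2(1/p) is taken as 0 when p is 0."""
    total = 0.0
    for p in (p_fail, 1.0 - p_fail):
        if p > 0:
            total += p * math.log2(1.0 / p)
    return total

for p_fail in (0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0):
    print(f"failure rate {p_fail:.2f} -> {bits_per_test(p_fail):.3f} bits")
# The output peaks at 1.000 bit at a failure rate of 0.50 and falls to zero
# at both extremes.
```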
Note that this curve peaks at a 50 percent failure rate. This is the optimum failure rate for a binary test. Thus, when we are trying to generate information, it is inefficient to have failure rates that are either too high or too low.
Celebrating failure is a bad idea, since it drives us towards the point of zero
information generation on the right side of the curve. Likewise, minimizing
failure rates drives us towards the point of zero information generation on the
left side of the curve. So, we address the first issue by seeking an optimum
failure rate.
But this is only part of the problem. Remember, we generate information to create economic value, and we create economic value when the benefit of the created information exceeds the cost of creating it. So, let’s look at the second issue: how do we minimize the cost of generating information?
This is done best by exploiting the value of feedback, as I can illustrate with an analogy. Consider a lottery that pays a $200 prize if you pick the correct two-digit number. If it costs $2 to play, then this lottery is a break-even game. But what would happen if I permitted you to buy the first digit for $1, gave you feedback, and then permitted you to decide whether you wanted to buy the second digit for an additional $1? The second game is quite different. It still requires the same amount of information (6.64 bits) to identify the correct two-digit number, but it will cost you an average of $1.10 to obtain this information instead of $2.00. Why? You are buying the information in two batches (of 3.32 bits each). Because you will pick the wrong first digit 90 percent of the time, you can avoid buying the second digit 90 percent of the time, saving an average of $0.90 each time you play the game. (In the language of options, buying one digit at a time, with feedback, creates an embedded option worth $0.90.) Thus, we address the second issue by breaking the acquisition process into smaller batches and providing feedback after each batch. (Lean Startup fans will recognize this technique.)
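If you would rather verify the $1.10 figure than take my word for it, here is a small Monte Carlo sketch in Python; the uniform random guessing strategy is an assumption I am making for the illustration:

```python
import random

def cost_with_feedback(winning_number):
    """Buy the first digit for $1; buy the second only if the first was right.
    Returns the dollars spent on one play (guesses are uniform at random)."""
    cost = 1  # the first digit is always purchased
    if random.randrange(10) == winning_number // 10:
        cost += 1  # feedback says the first digit was right, so buy the second
    return cost

random.seed(0)  # reproducible runs
trials = 100_000
average = sum(cost_with_feedback(random.randrange(100)) for _ in range(trials)) / trials
print(f"average cost per play: ${average:.2f}")  # ~$1.10, versus $2.00 with no feedback
```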
The third issue is to maximize the value of the information that we acquire. The most useful way to assess value is to ask the question, "What is the maximum amount of money a rational person would pay for the answer to this question?" For example, if I let you choose between two envelopes, one of which contains a $100 bill, what would you pay to know exactly which envelope the bill is in? Certainly no more than $50: a random pick is already worth $50 in expectation, and perfect information raises this to $100. Thus, you should pay no more than $50 to acquire the one bit of information it takes to be certain of obtaining the $100 bill. If the amount in the envelope is $10, you should not pay more than $5 to know which envelope the money is in. In both cases, it takes one bit of information, but the value of this bit is different.
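The envelope arithmetic fits in a few lines of Python (a trivial sketch, assuming exactly two equally likely envelopes, one of them empty, as in the example):

```python
def value_of_one_bit(prize):
    """Value of knowing which of two equally likely envelopes holds the prize."""
    expected_with_answer = prize   # with the answer, you pick the right envelope every time
    expected_guessing = prize / 2  # a blind pick wins half the time
    return expected_with_answer - expected_guessing

print(value_of_one_bit(100))  # 50.0: pay at most $50 for this bit
print(value_of_one_bit(10))   # 5.0: the same one bit is worth only $5 here
```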
So we address the third issue by asking questions of genuine economic significance, and making sure the answer is worth the cost of getting it. In reality, we do not have unlimited resources to create either learning or knowledge; we must expend these resources to generate information of real value.
My guess is that your common sense has already reliably guided you to address
these three issues. For example, when you play the game of 20 questions, you
instinctively begin with questions that split the answer space into two equally
likely halves. You clearly understand the concept of optimum failure rate. When
you decide to try a new wine you buy a single bottle before you buy a case. You
clearly recognize that while small batches and intermediate feedback will not
change the probability that you will pick a bad wine, they will reduce the
amount of money that you will spend on bad wine. And when you encounter two technical paths that have virtually identical costs, you don’t invest testing dollars to determine which one is better. You clearly recognize that information
is only worth the value it creates. If you do these things, you should have
little risk of blindly celebrating failure, success, knowledge creation, or
learning. If your common sense has not protected you from these traps, an
excellent substitute is to use careful reasoning. It will get you to the same
place, and sometimes even further.