HOW TO BUILD A BALANCED SCORECARD*

Introduction
by
Arthur M. Schneiderman
The balanced scorecard (BSC) has undergone significant change since its
widespread popularization in the early 1990s. Although the first balanced
scorecard was an integral part of its creators’ strategic planning process, its
subsequent emulations focused on it as a simple instrument rather than as one
element of a total planning system. Consequently, most early adopters just took
their myriad of existing non-financial performance measures and force-fitted them
to an arbitrary framework that classified scorecard metrics into the prescribed
categories of financial, customer, internal, and learning and growth.
I’ve chronicled elsewhere the resulting common failure modes. Number one on
that list was:
“The independent (i.e. non-financial) variables on the scorecard are incorrectly
identified as the primary drivers of future stakeholder satisfaction.”
Unfortunately this fundamental misapplication of the BSC concept is still all too
prevalent.
However, academics, consultants, and practitioners alike have learned much over
the last decade. Leading edge BSC proponents recognize that a meaningful
scorecard must be viewed as an integral part of an organization’s overall
management system. But to build on its brand image, “Balanced Scorecard”
promoters have used its moniker to provide a name umbrella over its continuously
redefined and expanding boundaries. Today, in best-practice organizations, the
BSC is tantamount to their business planning system.
But having recognized that the BSC itself is only one part of a comprehensive
process, there has still been little documented about that process itself. What has
been written describes the method for its creation and use in such general terms
that a practitioner is left with insufficient detail on exactly what needs to be
done. The objective of this e-paper is to provide my view of that missing level of
detail.
In Part 1, I will describe a 9-step process that assures the identification of a
manageable and actionable set of BSC metrics that link directly to an
organization’s strategic objectives. But organizational success - just like a coin or
a magnet - has two sides: planning and doing. Successful organizations excel at
both. They do the right things and they do them right. My focus in this e-paper
will be on the planning side of that fateful coin. I refer you to my other writings
on process management (see also my publications) for more on its control,
improvement, and reengineering facets.
Part 2 addresses the difficult task of translating strategically chosen stakeholder
segment requirements into a prioritized list of internal process improvements. It is
the improvement of these targeted processes and sub-processes that will make-or-
break the realization of strategic success. What makes identification of these vital
few processes difficult are their many interdependencies and varying impact.
Seeing through that cloud of complexity and uncertainty requires the use of some
unfamiliar analytical tools and an appropriate balance between established facts
and the organization’s collective instincts.
Finally, in Part 3, I will describe the fundamentals for extracting the appropriate
set of BSC metrics from the near-infinite list of possibilities that still exist even
after the vital few processes are identified. Finding those leveraged internal
process measures is key in achieving a successful BSC implementation.
*This e-paper was first posted on December 20, 2000 (Part 3 was posted earlier).
It will be appearing in hard copy as a Chapter in the Handbook of Performance
Measurement, Michael Bourne, Editor, Gee Publishing, 2001.

HOW TO BUILD A BALANCED SCORECARD

Part 1: The Strategic Planning Process


by
Arthur M. Schneiderman
The objective of a strategic planning process is to identify opportunities where the
organization’s current or potential capabilities can be successfully and sustainably
matched against the needs of its various stakeholder groups. Success is defined by
the objective (vision and mission) of each organization. It is measured by the
value that it actually delivers to these stakeholders, relative, of course, to that
provided by their other alternatives.
In a competitively based society, each of the organization’s various stakeholders
has choices. The owners of its capital have the option of selling that capital and
investing the proceeds in other organizations that they believe will provide them
with a greater return on their loaned financial assets. Employees have the
freedom to associate with a different organization where they expect to receive a
greater return on the time that they invest. Customers usually have the ability to
select a different product, service, or supplier for fulfillment of their needs.
Suppliers have the choice of providing the inputs required by the organization
when it wants them and at the price it is willing to pay. And communities can
commit their limited resources (land, infrastructure, etc.) to those organizations
that they believe will prove to be the greatest asset to their constituents.
A successful organization manages its internal processes in order to win against the
competition on these stakeholder battlefields. The strategic planning process
distributes the organization’s always-limited resources among these concurrent
challenges. How it deploys them will determine whether it survives as an entity in
order to compete again another day.
I offer the following as a generalized model for a Strategic Planning Process:

Figure 1. The Strategic Planning Process


This process forms a closed-loop system. It operates in a continuous cycle, with
neither a beginning nor an end. Since most organizations implementing a BSC
already have an explicit or tacit strategy in place, I’ll start my description with the
step identified as number one in this figure. Also, for simplicity, I’ll describe the
process in terms of the “customer” stakeholder group and leave it as an exercise
for the reader to extend it to their organization’s other important stakeholders.
They usually include owners, employees, union leaders, suppliers, regulators,
communities, etc.
Step 1: Choose targeted stakeholder segments
For decades we have been admonished to make explicit in our strategy the
customer segments that we intend to serve as well as those we will leave for
others to serve. This decision is sometimes referred to as strategic intent. We do
this in recognition of the view that we cannot be all things to all potential
customers and therefore must focus our limited organizational resources on those
chosen market segments. We will secure leadership in them if we can satisfy their
members’ needs better than our competition. Our level of reward will depend on
our market share and the maturity and growth rate of that segment’s demand.
To make our decision on target market segments, we must understand the
opportunity space (potential market segments) and the competitive environment
as well as our own organizational competencies. For much of the latter part of the
last century, we could rely on normative models to predict our chances of success
based on the assumption that cumulative experience (as determined by historic
relative market share) was its principal driver. Today we understand that
flexibility, agility, and rapid learning are more important competitive advantages
given the rapidity of technological change and the increasing contribution of often-
volatile organizational knowledge.
Making the wrong initial choice may trigger a doom loop from which there’s little
chance of recovery. However, rapid cycling of this strategic planning process can
quickly lead to convergence to a sustainable competitive position. So let’s assume
that we have initially chosen a set of related market segments where we have a
reasonable chance of long-term success. We’ll return to this assumption in Step 9
to determine its validity.
Step 2: Identify their requirements
Each customer segment is characterized by its own unique set of requirements.
Either objectively or subjectively potential customers test each candidate supplier
against these requirements. They choose the one that comes closest to meeting
their aggregate needs. They do not weight each criterion equally, and that is what
makes them different from one another. One segment might weight price much
more highly than reliability. Another may have just the opposite weighting. Once
the targeted segments have been selected, it is important to determine their
importance-weighted supplier selection criteria.
Step 3: Determine performance gaps (external perspective)
By asking our targeted customers how we are doing in meeting their various
requirements we can identify our performance gaps. Our hope is to close these
gaps in order to maintain or improve our relative competitive position. We
recognize that if we do nothing, we are likely to lose ground against more
aggressive competitors who are pursuing their own improvement objectives.
Performance gaps will differ from one targeted segment to another, so we need to
apply this step separately for each of the market segments that we are currently
serving or considering.
Step 4: Set stakeholder improvement priorities
Improving requirements that are unimportant to a targeted customer segment is
often a waste of precious organizational resources that could better be used
elsewhere. It’s therefore essential that we focus our improvement efforts on
major gaps in important customer requirements. The combination of high
importance and low performance is the logical basis for ranking opportunities for
improvement. Once we have completed this step, we have essentially generated a
Pareto diagram of externally identified improvement priorities - tempered by our
own strategic objectives.
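To make the ranking arithmetic concrete, here is a minimal Python sketch (the requirement names and numbers are hypothetical, not from the text) that scores each requirement by the product of its importance and its performance gap and sorts the result into a Pareto ordering:

```python
# Hypothetical importance weightings (points out of 100) and performance
# gaps (0 = no gap, 10 = large gap) for a handful of requirements.
requirements = {
    "price":       {"importance": 35, "gap": 3},
    "reliability": {"importance": 25, "gap": 7},
    "delivery":    {"importance": 20, "gap": 6},
    "aesthetics":  {"importance": 5,  "gap": 8},
}

# Improvement priority = importance x performance gap; sort descending
# to produce the Pareto ordering of improvement opportunities.
pareto = sorted(
    ((name, r["importance"] * r["gap"]) for name, r in requirements.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in pareto:
    print(f"{name:12s} {score:4d}")
```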
Step 5: Link stakeholder requirements to internal processes
Many organizations stop at Step 4. Doing so leaves both the responsibility and
accountability for improvement unassigned. They may achieve acceptance of the
objective but leave undefined each individual’s role in making it happen.
Naturally, with this uncertainty, they usually conclude that closing critical
performance gaps is someone else’s job. Like spectators at an athletic event, they
sit cheering in the stands, when they should in fact be out on the field as players in
this struggle to win. The key to getting their involvement is the linkage of
external improvement priorities to internal processes.
One very powerful view of an organization sees it as a collection of interacting
processes whose collective output is the vehicle for creating stakeholder value. At
the highest level are the macro processes such as product development, customer
acquisition, production, procurement, and human resources management. But
processes are fractals. As we look at each of their steps through a virtual
magnifying glass, we see embedded within them similar looking processes … and
within them sub-processes … and within them micro-processes. Every employee has
a daily job in which they execute one or more of the steps that are contained
within this hierarchy of value creating activities. Their personal link to the overall
goals and objectives of the organization flows with the output of these processes
as they cumulatively create more value and the consequent increased stakeholder
satisfaction.
Step 5 identifies the relationship of each process within the organization to the key
stakeholder requirements identified in step 2. It is the transition step from the
external to the internal perspective.
Step 6: Establish process improvement priorities (internal perspective)
Knowing which internal processes drive the various targeted stakeholder
requirements (from Step 5) and which of those requirements are most in need of
strategic improvement (from Step 4), we are now in a position to set internal
process improvement priorities. Once completed, we have identified the focal
points for changes in the way those involved should do their daily jobs.
The organization can now concentrate its limited resources on the improvement of
those leveraged processes with the knowledge that this will produce the greatest
strategic return on the investment of those precious resources. And each
individual who spends their time executing those key processes will understand
why its improvement will be worth their effort. They’ll realize that their help is
likely to be critical to the organization’s strategic success … that they are an
important link in that chain of critical actions.
Step 7: Establish metrics and goals for the process improvement priorities - the
Balanced Scorecard
In my experience, few organizations today make it through Step 6. Identifying
with confidence those critical internal processes whose improvement will have the
greatest strategic impact is no easy matter. More and more, they are hidden
behind a cloud of complexity and confounded by uncertainty and chaos. But for
those who do, they now face several nearly daunting challenges:
Choosing metrics: What exactly should we measure?
Setting goals: How will we define success?
Avoiding overcommitment: Do we have the organizational capacity to do all of it?
Defining measures of the output of a process that relate directly to stakeholder
requirements is usually straightforward. But these results metrics are not directly
actionable. We need to identify the internal process metrics that are the drivers
of the desired improvement in these results. Once we have successfully identified
them, we need to set time-based goals. In general, they will be stretch goals:
difficult but not impossible to achieve.
I can’t imagine an organization in today’s world where people are sitting around
looking for something to do. Everyone already has a pretty full plate of work.
They can only squeeze in a limited amount of time to work on process
improvement without adversely affecting the performance of those daily jobs. In
other words, organizations have a limited improvement capacity. Asking them to
do everything will guarantee that the easy ones, not necessarily the most
leveraged ones, will get done first. We need to filter the priorities established in
Step 6 against this limited capacity. In doing so, we create a cut-list. We can do
the things above the cut-line, but we don’t have the capacity to do the ones below
that line … at least not right now. Acknowledging our limited capacity defuses the
organizational paralysis usually brought on by overcommitment.
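One simple way to picture the cut-list is as a greedy filter: walk down the priority list and draw the cut-line at the first item that no longer fits within the available improvement capacity. A minimal sketch, with hypothetical names, scores, effort estimates, and capacity units:

```python
# Hypothetical priorities (highest first): (name, priority score, effort),
# with effort expressed in the same units as the available capacity.
priorities = [
    ("order-entry accuracy", 120, 30),
    ("supplier lead time",    95, 50),
    ("field-failure rate",    80, 40),
    ("invoice errors",        60, 25),
]
capacity = 100  # improvement capacity available this planning cycle

above_cut, below_cut, used = [], [], 0
for name, score, effort in priorities:
    if not below_cut and used + effort <= capacity:
        above_cut.append(name)   # above the cut-line: do it now
        used += effort
    else:
        below_cut.append(name)   # below the cut-line: defer for now

print("above the cut-line:", above_cut)
print("below the cut-line:", below_cut)
```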
To focus everyone’s attention on the short list of improvement priorities and goals,
we create the instrument that has been called the Balanced Scorecard. It captures
the results of all of the preceding steps on a single sheet of paper. It represents
a set of metrics and their associated tangible goals that are the best that we can
do in advancing our strategic objectives, subject to our available organizational
constraints. In a sense the balanced scorecard is merely a rallying flag for all of
the effort that has gone into its creation. It is not an end, but an intermediate
means for the strategic planning process.
The resulting balanced scorecard is the organization’s guide to its improvement
priorities. Because it is rooted in the process view of the organization, it can be
easily linked from the corporate level down through the process hierarchy to the
teams and individuals that are the only ones that make things happen.
Step 8: Improve critical processes
If it were easy to close the gap between current performance and the
improvement imperatives established in the previous step, those gaps would have
been closed long ago. Certainly focusing the organization’s energy around a few
specific objectives is a great help. But there is a wide range of approaches that
can be used to address these vital few gaps, and they can lead to strikingly
different rates of improvement.
The fastest method is to assemble a cadre of process engineers to fundamentally
redesign each key process; but this is also the most expensive way to do it. Using
the traditional trial-and-error approach not only takes too long, but its actual cost
rivals that of the use of an army of process experts. Fortunately, there is a low
cost, high-speed approach that was pioneered in the 1930s by Kepner and Tregoe
and refined in the 1960s by Japanese TQM practitioners. This improvement model
uses teams of process executors who are trained in the basics of the scientific
methodology and spend a portion of their time (typically 5-10%) improving their
processes. This has proven to be the best way of closing performance gaps when
many processes contribute to them.
I am not in any way suggesting that processes that do not make this list should not
be improved. A basic cornerstone of TQM is that ALL processes should be
continuously improved and that EVERY employee should spend a portion of their
time in those activities. What Step 7 does is set priorities for those improvement
efforts. Process teams should focus their efforts on improving those outputs that
are directly derived from that step.
Individuals involved in multiple processes should concentrate first on those that lie
on this critical strategic path. When teams or individuals do not have a clear role
in strategic improvement priorities, they should still spend a portion of their time
improving the way that they do their daily jobs. But they need to recognize and
accept that scarce resources, such as training and internal and external experts, as
well as management attention will go first to those who are working on improving
the critical processes.
Step 9: Reassess strategy
Organizational defense mechanisms often mandate that our processes be run open
loop. We like to plan and do, but have a natural reluctance to check subsequent
results against the original plan and take corrective action based on what we learn
from that diagnosis. We find this distasteful because the result of the check
process all too often is blame rather than learning.
When I first met Ed Deming he was around 80 years old, and he often noted that 80%
of the root causes of defect generation lay in the process and only 20% in the people
executing those processes. Each year that went by, that 80% number seemed to
grow by 1%. Ed died at age 94, and the last time I saw him he said: “nearly 95% of
the problems lie in the process, not the people.” I wonder what he would have
said had he lived to be 100.
Once we outlaw blame as a management reaction and replace it with constructive
learning, we can hope to continuously improve the strategic planning process
itself. That is the purpose of Step 9. Before reaching this step, we have identified
exactly what we need to do in order to achieve our strategic objectives. We have
focused every bit of our available organizational capacity on those required
actions. We now ask, “Did we get the results we planned for, and if not, why
not?” Out of this diagnosis we can understand weaknesses in our strategic planning
process and make improvements for the next cycle. In doing so, we are learning
how to plan and act more successfully, and that, after all, is what this is all about.
One sobering result from this reflection step may be that we are doing the best
that we can, but we do not have the organizational capacity to do what is
necessary in order to achieve our strategic objectives. Often this is the result of
competitors who have greater organizational capacity or process know-how, so
that although we’re improving, we’re inevitably losing ground to them. This
painful knowledge should prompt us to seek other competitive niches where we
have a chance of winning or face up to the unpleasant reality that our owners’
remaining equity might best be used by them in some other endeavor. Since very
few organizations use their process improvement capacity to their highest strategic
advantage, mastery of all of these nine steps has the potential to produce some
really unexpected, dark horse winners in the ever-present competitive race.
Dealing with today’s strategic planning reality
I’m fascinated by the current notion, often promoted by self-serving consultants,
that there’s a simple, secret formula for developing a good strategy and it’s called
a balanced scorecard. “Buy our BSC software,” “Attend our BSC seminar,” or
“Retain our BSC team of experts,” and in a few short weeks or months you’ll have a
winning strategy. And the evidence does seem to suggest that Abe Lincoln may
have been right: “... you can fool (nearly) all of the people, some of the time...”
But the truth is that developing and implementing a successful strategy still is a
very difficult challenge. There are several contributing factors:
increasing real-world complexity,
nonexistent data,
chaos and uncertainty, and
getting organizational commitment and buy-in.
By any measure, organizational life is getting more and more complicated.
Everything seems to be both interconnected and important. Clear visions of the
future are obscured by this complexity and each group within the organization
tries to see through that cloud with their own uniquely colored glasses. The ideal
solution - fact based knowledge - is becoming both expensive and time-consuming
to generate. In many instances, the important things “are both unknown and
unknowable” to quote Ed Deming. We live in a period of unprecedented change.
The future is increasingly unpredictable as wave after wave of technological,
sociological, and political change break over us. It is a truly exciting time to live
in, but an equally frustrating time for strategic planning.
Finally, as organizations transform from physical labor to knowledge based,
employees are less willing to simply do as they are told. They need to be enrolled
in the strategy before they will work hard to make it happen.
Given these formidable challenges, how can an organization maximize its chances
of developing and implementing a winning strategy? Notice that I said “maximize
its chances,” not guarantee its success. That’s the best that any organization can
hope for given its tumultuous environment. Here’s my advice:
Take every feasible opportunity to expose employees first hand to that
environment and make sure that they share what they learn with others within
the organization.
Maximize employee involvement in the strategic planning process itself, by
assuring that those with the best knowledge contribute to its relevant steps.
Use tools that can analyze “fuzzy data”, which often is in the form of sentences
rather than hard numbers.
Seek group gut feel, rather than that of individuals who may be distant in both
time and intimacy with the current situation.
Make strategy development an open rather than a secret process within the
organization.
Sure, there is a risk that by running a wide-open, highly visible strategic planning
process a competitor may learn something that they can use against you; but that
danger is grossly exaggerated. In reality that risk pales compared to the cost of
poor internal alignment caused by a strategy hidden behind a shroud of secrecy.
All employees have a “need to know” if they are to contribute effectively to the
organization’s success.
How an organization executes this 9-step strategic planning process will greatly
influence its probability of success. At one extreme, members of the strategic
planning department can sit around an isolated table and talk through each of the
steps to come up with a scorecard and its associated metrics and goals. In my
experience, that approach has a low probability of producing a decisive scorecard
and a convincing call to action to those whose efforts are needed to make it
happen. At the other end of the practical spectrum, the strategic planning
function can orchestrate a broad based effort that synthesizes both internal and
external knowledge into a compelling and actionable plan.
In doing so, they will encounter difficulty in processing all of the information and
opinions that are generated unless they use some framework and an appropriate
toolset for drawing actionable conclusions from the resulting maze of
information. That’s the purpose of Steps 1a, 2a, and 3a in my model. By
numerically weighting the strategic importance of the various stakeholder
segments, each segment’s hierarchy of requirements, and their perception of our
performance on each of their important ones a list of improvement priorities can
be generated that separates Juran’s “vital few” from his “important many.” Part
2 will expand more on these “a” steps.
In this, Part 1 of the article, I have described a 9-step framework that I believe
represents a comprehensive process that has as one of its many important outputs
a set of balanced scorecards that deploy strategic goals down to the action agents
that really make strategy happen. In the next two parts, I will describe in detail
the actual methodology that I use in implementing Steps 1-6 (Part 2: Setting
Process Improvement Priorities) and Step 7 (Part 3: Selecting Scorecard Metrics).
Step 8 is the theme of my Process Management Model.
HOW TO BUILD A BALANCED SCORECARD

Part 2: Setting Improvement Priorities*

by
Arthur M. Schneiderman

Preface
I’m a long-time advocate of the KISS principle: “Keep it simple, stupid,” or
its more formal ancestor known as Ockham's razor. But as problems
become more complex, so unfortunately do their simplest solutions. Scan
ahead in this part and your initial reaction may be that what I’m proposing
looks awfully complicated. But, if there’s a simpler way of getting to a
truly effective answer, I’ve yet to find it; nor am I aware of anyone else
who has.
That’s because one of the inevitable consequences of our current form of
progress is that over time it creates ever-increasing complexity. We can no
longer manage that complexity with the basic toolset that worked in a
simpler, bygone era. Those tools helped in understanding systems where
the whole effectively behaved as the sum of its individual parts. The tools
were used to break a big problem into a set of small, manageable pieces.
By optimizing the pieces, we could expect to optimize the whole system.
The very best of managers could even do this in their heads.
Today, complexity arises from the increasing interdependencies between
the many small pieces of a big issue. The response “it depends” that once
served as a ubiquitous excuse, now takes on legitimate meaning. The
interdependencies become further compounded by their eventual non-
linearity. Together these two effects have pushed the critical problem
space well beyond the capabilities of simple tools and individual gut feel.
More and more often we are confronted with situations where the whole is
much greater than the sum of its individual parts. The setting of process
improvement priorities now resides in that elusive domain. Yet it is
essential to identify the real improvement priorities, not just for the
effective use of limited organizational change capacity, but also to weave
the convincing story needed to marshal organizational support and buy-in.
Even when an insightful executive can see through that cloud of
complexity, verbal explanations are ineffective in transferring his gut feel
to others. They must take his conclusions on faith. But today, fewer and
fewer organizations can rely on faith as their alignment mechanism.
Knowledge workers in particular demand a compelling and logical
argument before they will sincerely commit to “making it happen.”
In 1979 the Japanese Union of Scientists and Engineers, the driving force
behind Japan’s TQM revolution, codified a set of tools that they called the
7-Management and Planning Tools (or 7-MP). Over the last twenty years the
7-MP have proven their effectiveness in the achievement of consensus or
what we might call “collective” or “group gut feel.” It’s one of those
tools, the Matrix Diagram, which I will be using here.
Other tools useful in dealing with this increased complexity have been
around for half-a-century. The challenge is to choose the simplest of these
tools that can adequately address the issue at hand. Oversimplifying the
problem in order to force-fit it to our more familiar approaches can only
create the illusion of understanding, which cannot be a sound foundation
for action. So be forewarned that what follows is, in my view, the least
complicated way of correctly identifying strategic process improvement
priorities in today’s increasingly complex environment.
This Part describes a methodology for deriving process improvement priorities from
an organization’s strategy. It relies heavily on the framework used in Quality
Function Deployment1 (QFD). That framework uses a series of interrelated
matrices to numerically define the strength of the causal relationships that exist
between the “what’s” and “how’s” of effective planning. As you will see, it
significantly extends the use of simple causal-loop diagrams (as used for example
in BSC Strategy Maps) that only serve to identify major causal linkages. By
quantifying the strengths of these linkages and providing an aggregation
mechanism, this approach often uncovers pervasive process improvement
opportunities that would be missed when only the most obvious dependencies are
considered. Furthermore, since its output is a numerically weighted list of
strategic process improvement priorities, it helps us get the greatest strategic
bang for the organization’s limited change capacity buck.
We will start by looking at various strategies and their relationships to segmented
stakeholder requirements. This will allow us to place a strategically chosen
“importance” weighting on each requirement. In doing so, we explicitly identify
the specific stakeholder segments that we choose to serve and by implication,
those that are not on our strategic agenda. Next, we will determine actual
performance, both absolute (based on customer needs and wants) and relative
(based on competitor performance) and combine strategic importance and
performance to generate a numerical scoring where the higher the value the
greater is the strategic need for improvement of that particular stakeholder
requirement.
Our second matrix defines the relationship between stakeholder requirements and
each of the organization’s various value creating processes. It quantifies the
impact of each key internal process on each of the stakeholder requirements.
Finally, we will combine improvement priorities derived from the first matrix with
process linkages from the second to produce a process improvement prioritization
list. This list will represent a scored ordering of processes in need of improvement
in terms of the impact of these improvements on stakeholder satisfaction and,
therefore, strategic success.
As I will show, this approach is amenable to various levels of detail. At one
extreme, it reduces to a simple normative model that states “if this is your
strategy, then this is what your targeted stakeholders expect and these are the
processes you have to get right in order to satisfy those expectations.” For
simplicity, that’s the example I’ll use here. At the other extreme, detailed studies
may be necessary to determine the organization’s real vs. professed strategy,
actual customer requirements by targeted segment, perceived performance,
organizational barriers, etc. Where in this spectrum a particular situation lies
depends on the level of detail necessary to achieve the required consensus for
action. Often this is determined through a process of successive approximations,
starting with the simple normative model and adding more detail until that
consensus is reached.
One definition of consensus is the achievement of a state in which the least
supportive member of the group “can live with” the majority’s view. But a
consensus for action often requires a much stronger commitment from that last
individual, particularly when their active support and participation is required to
make that action happen.
Stakeholders and Their Requirements
Organizations have a number of stakeholders. Generally, we identify them as:
customers,
stockholders or owners,
employees,
suppliers, and
the communities in which we do business.

In some cultures, the environment and future generations are being added to this
list (see The Fifth Fitness). In some industries, there are multiple customers. For
example, in higher education customers can include parents, future employers,
academic peers, and research sponsors, as well as students and alumni. In
healthcare, the needs not only of patients but also of doctors, hospitals, regulatory
agencies, and insurers must be addressed. Where appropriate, distinctions need to be
made between historical, current, and future requirements, as well as different
“classes” of stakeholders such as large corporations, small businesses and
individuals.
An organization must identify its strategy and the key requirements for each of its
strategically chosen stakeholders. For example, is its stockholder strategy income,
growth or non-profit driven? If it is income driven, then its targeted stockholders
will place a high weighting on a steady dividend stream and a stable stock price.
They will be satisfied with average returns on their investment. On the other
hand, the stockholders of growth driven companies do not value dividends, accept
above average price volatility, but demand strong long-term growth in stock price.
They expect to be compensated for higher volatility (or β) with above average
long-term returns. The owners of non-profit organizations usually have non-financial
expectations for the return on their investment.
Employee related strategies range from nurturing to competitive. Employees in
nurturing organizations hope for security, lifetime employment, liberal benefits,
low stress and a family-like environment, while those in internally competitive
companies seek an entrepreneurial environment with rapid personal advancement
opportunities. They place much higher value on short-term rewards than on long-
term job security.
Obviously, the various stakeholder strategies need to form a self-consistent set.
They are not in general independent. Income driven companies tend to have
nurturing employee strategies, while growth driven companies often have more
competitive employee strategies.
Strategies and the Treacy and Wiersema Value Disciplines
As you can see from the above examples, the strategy is really a name for a
particular profile of targeted stakeholder requirements. The name only takes on
general meaning if most companies or business units can be assigned to one of the
identified categories based on similarity of their targeted stakeholder
requirements.
One such recent classification system is that of Treacy and Wiersema2 (T/W). They
have defined three “Value Disciplines” as a way for classifying companies’
customer strategies. In the remainder of this Part, I will be using the T/W model
as an example of the application of this methodology. Using their one-dimensional
view of the organization’s stakeholders greatly simplifies my description of the
elements of the methodology. But:
Please keep in mind that the T/W model applies only to customer
strategies. All stakeholder strategies must be considered if a
robust prioritization is to be achieved. Omission of a stakeholder
group often will lead to priorities selected at their expense. For
example, the T/W approach alone will probably give the wrong
answer if applied to a company whose most important strategic
imperative is increased stockholder value through growth.
Customers do not usually value the growth of their suppliers.
Therefore, revenue growth generating processes will tend to be
de-emphasized when only the customer perspective is taken into
account. So in applying what follows to a particular company
situation the T/W Value Disciplines MUST BE augmented or
replaced with a similar type classification for all of the
important stakeholder strategies. The methodology for doing this
is quite straightforward.
T/W identify three Value Disciplines, which they called “operational excellence,”
“product leadership,” and “customer intimacy”:
Companies pursuing an operational excellence strategy provide the lowest total
purchase cost to their customers by providing high quality (conformance to
specification), low price, and ease of purchase. They accomplish this by
streamlining processes to minimize costs and hassle, standardizing, providing high-
speed transactions, and creating a culture that abhors waste and rewards
efficiency.
Product leadership companies provide the best possible product to their customers.
They focus on creativity and rapid commercialization. They relentlessly pursue ways
to leapfrog their own products before someone else does. Intermediate milestones,
keeping on track, and celebrating interim victories characterize their product
development process. They operate a loose, entrepreneurial organization, are
results driven, and encourage individual efforts.
Customer intimate companies provide their key customers with the best total
solution to their problem. Their focus is on individual key customers rather than
markets. Their most important process is solution development, which is
characterized by delegated decision-making and specific rather than general
solutions.
Key Customer Requirements
Let’s now look from the perspective of customers. They have a portfolio of
requirements and will most often choose the supplier that best meets them.
There are many ways to define the general set of customer requirements. Often
they need to be industry specific. For manufacturing, the set of requirements I
usually use is as follows:
1. Product Features
a. Performance Specifications. These are defined by the performance
characteristics of the product relative to competition. Often they
relate to speed, accuracy, resource usage, size, etc.
b. Fitness for use. Does the product do what I need to have done?
c. Fitness for latent needs. Does the product meet an important need
that I did not previously know I had?
d. Aesthetics. Is the product visually appealing?
2. Quality
a. Conformance to specification. Does the product actually perform as
specified when received?
b. Reliability. Does the product continue to perform as specified over
its useful life?
c. Durability. Is the product robust to normal wear and tear?
d. Serviceability. Is the product easily serviced when needed?
3. Cost
a. Price. This is the actual realized selling price, after discounts, etc.
b. Cost of ownership. The additional life-cycle costs I incur with the
product including inspection, inventory carrying costs to cover poor
delivery, rework costs, warranty costs, etc.
4. Availability
a. Quoted Lead Time. Ability to get a commitment to receive the
product when I want it.
b. Minimum/maximum order size. Ability to get the product in the
quantity that I need.
5. Service
a. Delivery. Past performance to committed delivery dates.
b. Responsiveness. Broadly defined, this is the ability to get timely
answers to all queries.
6. Relationship
a. Willingness to partner.
b. Reputation.
In any particular situation it is important to replace the above list with an
appropriate classification of key customer requirements. These requirements
answer the question: What do our customers consider in making their purchase
decision between alternative products and/or suppliers?
Relating Strategy to Key Customer Requirements
If we consider customers using the above purchase criteria and map them against
the T/W Value Disciplines, we arrive at Figure 2.
Figure 2. Relating Strategy to Customer Requirements
The central part of this matrix arrays the three Value Disciplines against the list of
possible customer requirements. The symbol used at the intersections represents
their degree of relationship. For example, the double circle shows that there is a
strong relationship between Product Leadership and Specifications. The single
circle shows that there is a moderate relationship between Customer Intimacy and
Ownership Costs. The triangle denotes a weak relationship between Operational
Excellence and Aesthetics, etc. Blank cells denote no significant relationship.
Implicit in the use of this tool is that these relationships remain essentially
constant over the appropriate planning period, which is typically a year. By
regularly revisiting them, the matrix can be updated to better reflect the current
situation. Also, for simplicity I have omitted an additional step often used in QFD.
In that step, we examine the interrelationships between the various requirements
to identify conflicts and reinforcements. We capture them in what are called
“roofs” and use them to identify the impact of candidate changes in one selected
requirement on the others. This becomes necessary when the improvement of one
requirement can worsen performance on another. For example, adding features
may be offset by an undesirable increase in price. There are also synergistic
improvements. Quality improvement usually leads to a reduced cost and increased
responsiveness. If changing degrees of relationship and linkages between
requirements becomes important, I generally abandon this entire approach in favor
of System Dynamics simulation modeling since it is optimized for those dynamic
situations.
The filled in matrix in Figure 2 represents my interpretation of the operational
definitions of the different T/W strategies. For example, the matrix defines a
“customer intimate” company as one that sets its highest priority on providing
products and services that meet customers’ needs, including latent needs, while
being both responsive and willing to form collaborative relationships.
Furthermore, it makes sure that it has competitive specifications, low post-
delivery quality and ownership costs, and that its reputation is consistent with
these goals. Finally, it ensures that delivery and minimum order size do not
conflict with its higher priorities. Its customers are indifferent to the blank
requirements unless performance drops below an easily maintained level.
Once filled out, the matrix becomes the dictionary that defines the various
strategies. As you look across each row, you can clearly see that each strategy has
its own distinctive signature. Should a new customer segment appear that has a
significantly different set of key requirements, a new name must be created and
added to the list of strategies to capture that unique segment.
In filling out the matrix, I have adhered to some simple pragmatic rules. For it to
be useful, the matrix should be sparsely populated. There is a tendency for people
to see strong relationships between all of the elements. If this happens, then the
matrix loses its ability to distinguish the different strategies. When working with
a group of people, a facilitator can help by asking questions such as “what is the
most important relationship?” or “where is the relationship very weak or
insignificant?” or “which is more important, ‘a’ or ‘b’?” A good goal is to have 40%
- 60% of the elements blank and a fairly uniform distribution of strong, medium,
and weak symbols. Looking along both rows and columns, there should be
significant differences in the degree of relationship. In other words, the strategies
should look different from one another. The use of a non-linear weighting scale
will further help in combating too many unimportant relationships.
In developing or refining a matrix, a team may encounter significant disagreement
about a relationship. If progress is to be made, the team should make a tentative
choice. It can then go back after completing the exercise to test sensitivity of the
conclusions to that particular relationship. This is made easy through the use of
QFD specific or spreadsheet software. I recommend QualiSoft’s QFD Designer,
which I used to prepare Figures 2 and 3. Usually many relationships have to change
significantly for it to make any difference in the overall conclusions. If sensitive
relationships are found, then further study of them is required. For example, if
improvement priorities change depending on how important reliability is to
customers, then a small, focused survey can be done to answer that specific
question. Consensus and buy-in are essential parts of this process and can only be
achieved by bringing actual data to significant areas of disagreement.
There are two alternatives for the next step. If the organization knows which of
the three strategies it is following, then “1” is used in that strategy’s column entry
and “0” is entered for all of the others. The “importance to customer” row is
calculated by replacing the symbol in each matrix element with the numerical
weight for that symbol, multiplying by the number in that row of the “strategy”
column, and adding the resulting numbers by column. In this case, the result
would simply be the weights for the chosen strategy.
However, the organization often determines that its business is or should be split
among the three value disciplines, say 70%-20%-10%, and that its internal processes
do not differentiate between orders from customers in different segments3. In this
case the “strategy” column would contain the numbers .7, .2, and .1 (always
totaling 1.0) and the same calculation would be made to determine overall
importance to customers. Our purpose here is to discount important requirements
for the less strategically significant customer segments.
The particular weights chosen here, 9-3-1, are used to accentuate the differences
in relationships. This is a common set used in QFD. Others include 5-3-1 and
3-2-1. Again, sensitivity testing using different weighting schemes can determine
the robustness of the conclusions. What is really important is that items toward
the top of the list really belong there and vice versa.
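As a minimal sketch of that calculation (the matrix slice, requirement names, and 70%-20%-10% split below are hypothetical), the symbol-to-weight substitution and the strategy-weighted column sum look like this in Python:

```python
# Common 9-3-1 QFD scale: strong/medium/weak relationship; blank cell = 0.
WEIGHTS = {"strong": 9, "medium": 3, "weak": 1, None: 0}

# Hypothetical slice of the strategy-vs-requirements matrix.
matrix = {
    "operational excellence": {"price": "strong", "delivery": "medium", "specs": None},
    "product leadership":     {"price": None,     "delivery": "weak",   "specs": "strong"},
    "customer intimacy":      {"price": "weak",   "delivery": "medium", "specs": "medium"},
}

# Business split across the three Value Disciplines (always totaling 1.0).
strategy_mix = {
    "operational excellence": 0.7,
    "product leadership":     0.2,
    "customer intimacy":      0.1,
}

# "Importance to customer" row: replace each symbol with its weight,
# scale by the strategy split, and sum down each requirement's column.
importance = {
    req: round(sum(strategy_mix[s] * WEIGHTS[matrix[s][req]] for s in matrix), 2)
    for req in ["price", "delivery", "specs"]
}
print(importance)  # {'price': 6.4, 'delivery': 2.6, 'specs': 2.1}
```

Sensitivity to the weighting scheme can be tested by swapping 9-3-1 for 5-3-1 or 3-2-1 in WEIGHTS and checking whether the ordering changes.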
Sometimes, the organization cannot agree on which value discipline(s) it is
following. This may result from lack of data, multiple strategies, or
inappropriateness of the strategy classification system to their particular business.
In this case, a second approach may be necessary: a market segmentation study.
One way of doing this is through surveys or interviews of a representative sample
of key customers (50-100). This sample can include past, present and potential
future customers and non-customers (i.e., customers of competitors). Each
customer is asked to distribute 100 points between the key customer
requirements.
It is also useful to uncover trends in their point allocations by asking for significant
differences in how they would have distributed the points five years ago and what
they think might be requirements of increasing and decreasing importance over
the next five years (remember, the total stays at 100). For example, point
allocations to quality and delivery have tended to drop, as they have become
“givens” for doing business, while relationship, JIT delivery, and e-commerce
are likely to increase in importance in the future. At the same time, the need for
improvement of the organization and its principal competitors can be ascertained
using a scale of zero (low need) to ten (high need) for later use.
The resulting data are sorted into groups of customers having similar key customer
requirements. This can be done using statistical sorting techniques or by
subjective means. I prefer the latter. Translating the point allocations into bar
charts and laying them out on a table, they can be visually grouped into similar
profiles or customer fingerprints. Occasionally, an organization might require
more rigorous analysis although in my experience the increased expense adds little
or no real value.
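For readers who want an automated first pass at that visual grouping, here is a minimal sketch (the customer names, profiles, and similarity threshold are all hypothetical) that clusters customers whose 100-point allocations look alike:

```python
# Hypothetical 100-point allocations from four surveyed customers.
customers = {
    "A": {"price": 60, "quality": 25, "delivery": 15},
    "B": {"price": 55, "quality": 30, "delivery": 15},
    "C": {"price": 15, "quality": 20, "delivery": 65},
    "D": {"price": 20, "quality": 15, "delivery": 65},
}

def distance(p, q):
    """Total absolute difference between two point allocations."""
    return sum(abs(p[k] - q[k]) for k in p)

# Greedy grouping: a customer joins the first group whose founding
# member has a similar fingerprint; otherwise it founds a new group.
groups, threshold = [], 30
for name, profile in customers.items():
    for group in groups:
        if distance(customers[group[0]], profile) <= threshold:
            group.append(name)
            break
    else:
        groups.append([name])

print(groups)  # [['A', 'B'], ['C', 'D']] -- two customer "fingerprints"
```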
It is worthwhile to mention here the techniques developed by Noriaki Kano4 for
distinguishing requirements that are “delighters,” “satisfiers,” and “must-be’s”
(without them, customers are dissatisfied). This simplified form of conjoint analysis is
widely used in Japan.
Often, industry surveys published in trade journals or analysts’ research reports
can be used in place of, or as an adjunct to, direct surveys or interviews. This
reduces the cost of determining key customer requirements but at the price of
customer specificity and interactive learning through the interview process. Either
way, the result is a direct numerical scoring of key customer requirements by
importance to them (the higher the points, the greater the importance).
The resulting numbers for a specific customer segment are entered into the
“importance to customer” row of the matrix. This time the calculation is run in
reverse, multiplying the weights by the importance, and now summing the result
across the rows and entering the sum into the “strategy” column. Ideally, one of
the numbers in the strategy column will be much larger than the others. This
represents the appropriate Value Discipline being followed. If there is no clear
“winner”, then the T/W model is not useful for this market segment. What we
have in fact done is used the methodology as a diagnostic to determine the
appropriate strategy name based on key customer requirements. If the T/W
names don’t fit, then we can give the new profile its own, unique name.
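A minimal sketch of that reverse, diagnostic calculation (using the same hypothetical matrix slice as before and an invented 100-point survey profile):

```python
WEIGHTS = {"strong": 9, "medium": 3, "weak": 1, None: 0}

matrix = {  # same hypothetical strategy-vs-requirements slice as above
    "operational excellence": {"price": "strong", "delivery": "medium", "specs": None},
    "product leadership":     {"price": None,     "delivery": "weak",   "specs": "strong"},
    "customer intimacy":      {"price": "weak",   "delivery": "medium", "specs": "medium"},
}

# Hypothetical survey result: 100 points spread over the requirements.
measured = {"price": 55, "delivery": 30, "specs": 15}

# Run the calculation in reverse: weight each relationship by measured
# importance and sum across each strategy's row.
scores = {
    strategy: sum(WEIGHTS[rel] * measured[req] for req, rel in row.items())
    for strategy, row in matrix.items()
}
print(scores)  # {'operational excellence': 585, 'product leadership': 165, 'customer intimacy': 190}
```

Here “operational excellence” dominates, so that name fits the segment; if no score stood clearly above the rest, the T/W labels would not apply.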
Assessing Need for Improvement
The objective up to this point has been to rate the key customer requirements in
terms of importance to customers in the targeted market segment. We did this by
using the appropriate T/W Value Discipline or by direct measurement. The next
step is to determine need for improvement. I’ll be assuming that the product of
“importance to customer” and “need for improvement” is a good indicator of
“improvement priority.” For those who are unsettled by this assumption, I refer
you to the emerging branch of mathematics known as “fuzzy logic.” A more
rigorous approach would be to use the utility function from economics theory, but
that would represent a much more complicated refinement.
Here we have three alternatives:
1. By entering “1” in the “need for improvement” row, we are in
effect determining the key customer requirements we need to
get right in order to satisfy those customers. In the next step,
this will produce the enabling business processes or core
competencies required to achieve leadership in this strategy or
Value Discipline.
2. By entering absolute need for improvement in the “need for
improvement” row, we are in effect determining the
performance gap relative to customers’ perceived needs. This
will lead to a prioritization of improvements most useful to the
market leader in maintaining or increasing its leadership
position. There are two sources for these data:
a. Consensus voting by knowledgeable insiders.
b. Direct data from customers. For example, if we asked
customers to rate our performance on a scale of one to
ten, where ten would be their ideal supplier, then the
difference between our score and ten would be an
indication of our absolute need for improvement on that
requirement.
3. By entering relative need for improvement in the “need for
improvement” row, we are in effect determining the
performance gap relative to our best competitor with respect
to that requirement. This will lead to a prioritization of
improvements with the objective of gaining share against the
market leader. Again, there are two sources for relative
performance data:
a. Consensus voting by knowledgeable insiders.
b. Direct data from customers. For example, customers
can be asked to rate our performance relative to each
competitor on a scale of one to ten. The numerical
difference between us and the market leader, or the
best in class for each requirement can then be used as a
measure of “need for improvement.”
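A minimal sketch of the two survey-based gap calculations above (the ratings are hypothetical, on the one-to-ten scale):

```python
# Hypothetical customer ratings on a 1-10 scale (10 = ideal supplier).
our_rating = {"price": 8, "delivery": 6, "reliability": 4}
best_competitor = {"price": 7, "delivery": 9, "reliability": 8}

# Absolute need for improvement: distance from the ideal score of ten.
absolute_need = {req: 10 - r for req, r in our_rating.items()}

# Relative need for improvement: gap to the best-in-class rating,
# floored at zero where we already lead (an assumption of this sketch).
relative_need = {req: max(0, best_competitor[req] - our_rating[req])
                 for req in our_rating}

print(absolute_need)  # {'price': 2, 'delivery': 4, 'reliability': 6}
print(relative_need)  # {'price': 0, 'delivery': 3, 'reliability': 4}
```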
“Need for improvement” scores can be determined in this way depending on the
prioritization objective, be it:
“what do we have to get right?”,
“what do we have to do to maintain leadership?”, or
“what do we have to improve in order to gain market share?”
This is also the place where trend data can be used to explain past performance
and to predict future areas in need of improvement.
It is “nice” to have the importance to customer row total 100 and the “need for
improvement” scores be based on the original range of zero to ten. This can
be accomplished by re-normalizing and rounding off the entries where necessary.
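A minimal sketch of that re-normalization (the input scores are arbitrary):

```python
def renormalize(values, total=100):
    """Rescale scores so they sum to (approximately, after rounding) `total`."""
    scale = total / sum(values)
    return [round(v * scale) for v in values]

print(renormalize([12.4, 31.0, 6.2, 18.6]))  # [18, 45, 9, 27]; rounding can leave the sum slightly off 100
```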
Linking Customer Requirements to Business Processes
We can now turn to our second matrix. This matrix relates the key customer
requirements to the underlying business processes. There are many ways to
classify business processes. The one I will use here is the system described by Tom
Davenport4. We will use the requirements improvement priority weights
determined in the previous matrix. Our objective is to identify the impact of each
business process on each of these key customer requirements. Following the same
rules as previously described, Figure 3 represents my view of these relationships.
Figure 3. Linking Requirements to Processes
This matrix contains the essence of an organization’s understanding of its business
processes. It is probably unique to a given industry and market segment. In its
detail, it may be dependent on each individual organization. In a sense, it
captures the organization’s knowledge of the internal drivers for customer (or
stakeholder) satisfaction. When done by a group of process experts, it constitutes
their collective wisdom as to the key business drivers in their particular industry.
It is the truly proprietary part of what an organization learns about itself in
applying this approach.
One of the most important properties of this matrix is that it is not diagonal; there
is not a unique one-to-one correspondence between a key customer requirement
and a single business process. Consider, for example, on-time delivery. Businesses
do not usually have an on-time delivery process, staffed by an on-time delivery
department and led by a Vice President of on-time delivery. On-time delivery
performance depends instead on many independently managed processes within an
organization (see for example my article on “Metrics for the Order Fulfillment
Process”). In figure 3, the major drivers are manufacturing, logistics (supplier
delivery), and information management (scheduling and MRP). It is this multiple-
dependency that creates an interconnected business “system,” which in turn
causes the need for this approach to prioritization.
Once the matrix is complete and the customer-based improvement priorities
transferred from the first matrix, the initial priority can be calculated. This is
done by multiplying the weights by the improvement priority and summing the
columns. But before the final improvement priority is determined, the issue of
degree of difficulty or organizational readiness must be addressed.
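A minimal sketch of that column arithmetic (the process names, relationship symbols, and carried-over priorities are hypothetical):

```python
WEIGHTS = {"strong": 9, "medium": 3, "weak": 1, None: 0}

# Hypothetical slice of the requirements-vs-processes matrix.
process_links = {
    "manufacturing": {"price": "strong", "delivery": "strong", "reliability": "medium"},
    "logistics":     {"price": "weak",   "delivery": "strong", "reliability": None},
    "product dev":   {"price": "medium", "delivery": None,     "reliability": "strong"},
}

# Improvement priorities carried over from the first matrix (hypothetical).
req_priority = {"price": 105, "delivery": 120, "reliability": 175}

# Initial process priority: weight x improvement priority, summed by column.
initial_priority = {
    proc: sum(WEIGHTS[rel] * req_priority[req] for req, rel in links.items())
    for proc, links in process_links.items()
}
print(initial_priority)  # {'manufacturing': 2550, 'logistics': 1185, 'product dev': 1890}
```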
Organizational Difficulty
Processes differ in complexity, both from a technical and people perspective.
Improvement is more difficult in a process where the root causes relate to human
behavior than it is for a process where only equipment or methods need to be
changed. Also, data provides the basic fuel for the improvement process. Can the
needed data be generated by the improvement team or does it have to come from
someone else? Cross-functional processes can be complicated by conflicting
objectives and ever-present politics. Since our goal is rapid improvement in
results, we need to raise the priority of processes that can be improved quickly
and drop the priority of the more difficult ones. We do this by adding the row
titled “organizational difficulty” to the matrix.
One very interesting commonly observed phenomenon is that “success breeds
success.” Over time, many of the initial organizational barriers dissolve on their
own, making the passed-over process improvements more tractable. Often,
the elimination of the old culture of blame is the key to this transformation.
Organizational difficulty is characterized using a subjective scale ranging from “1”
(low) to “5” (high). In practice, teams can easily assign values, since the
consideration becomes the number and severity of issues rather than who is at
fault. Once the organizational difficulty is established, the final priority for
process improvement is determined by dividing the initial priority by the
organizational difficulty and rescaling.
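Continuing the sketch above with hypothetical 1-to-5 difficulty ratings, the final priority is the initial priority divided by the difficulty, rescaled here so that the top process reads 100:

```python
# Hypothetical initial priorities and 1 (low) to 5 (high) difficulty ratings.
initial_priority = {"manufacturing": 2550, "logistics": 1185, "product dev": 1890}
difficulty       = {"manufacturing": 2,    "logistics": 4,    "product dev": 3}

adjusted = {p: initial_priority[p] / difficulty[p] for p in initial_priority}

# Rescale so the highest-priority process scores 100.
top = max(adjusted.values())
final_priority = {p: round(100 * v / top) for p, v in adjusted.items()}
print(final_priority)  # {'manufacturing': 100, 'logistics': 23, 'product dev': 49}
```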
The QFD Designer software includes a bar graphing capability that makes the final
results for each matrix quickly apparent. The use of the symbols rather than
numbers in filling out the matrices serves a similar role in the visual display of the
relevant information.
Performance Goals
The final step in completing the matrix is to determine principal performance
metrics and their associated goals, at least for the high priority improvement
targets. These goals must be aggressive yet achievable. When met, they would
move this process from its current high to a significantly lower priority for
improvement. It is these performance metrics and goals that have earned their
place on the appropriate BSC.
In addition to my writings on the half-life method for goal setting, Part 3 will
describe a systematic approach for identifying the appropriate measures and
metrics for each of the resulting strategic process improvements.
Results for the Normative Model
Figure 3 has been completed for an organization successfully pursuing operational
excellence. The improvement priorities were determined based on customer
requirements rather than performance gaps. Organizational difficulty was
assumed the same for all processes. Principal performance goals are based on an
organization that is delighting its customers (i.e. there's no customer-identified
need for improvement). The resulting process priorities are listed in figure 4 in
decreasing order of score.
[Figure 4. Process Priorities for Operational Excellence: key business processes ranked by raw score, normalized score, and cumulative percent.]
The normalized scores are calculated by dividing the raw score by the total of all
raw scores and then multiplying by 100. It can be interpreted as the percentage
of effort or resources that should be focused on maintaining that process at
superior performance levels. It should serve as a major input into an
organization's budgeting and resource allocation processes. The last column
represents the cumulative normalized scores.
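As a quick worked illustration of that calculation (with invented raw scores, not the figure 4 values):

```python
# Normalized score = 100 x raw score / (total of all raw scores);
# the last column is the running (cumulative) total of the normalized scores.
raw_scores = [46, 24, 23, 20, 17]
total = sum(raw_scores)

running = 0.0
for raw in raw_scores:
    normalized = 100 * raw / total
    running += normalized
    print(f"raw = {raw:3d}   normalized = {normalized:5.1f}%   cumulative = {running:5.1f}%")
```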
As can be seen from figure 4, the number one priorities of an operationally excellent
company are its manufacturing-related processes. Understanding its customer
requirements and managing its suppliers are next in importance. Getting these
three processes right will get them nearly halfway there.
Following the same procedure as above, figures 5 and 6 show the process priorities
for product leadership and customer intimacy.
[Figure 5. Process Priorities for Product Leadership: key business processes ranked by raw score, normalized score, and cumulative percent.]
Success in product leadership depends heavily on understanding customer
requirements. In fact it's more important than the product development process
itself. This result is entirely consistent with the TQM admonition: "market in, not
product out." Next in importance are product development, post-sales service,
and HR management. Post-sales service is important because I assumed that it
played a major role in determining fitness for use, a very important customer
requirement for product leadership. HR management is key in attracting and
retaining the creative people needed for product leadership.
[Figure 6. Process Priorities for Customer Intimacy: key business processes ranked by raw score, normalized score, and cumulative percent.]
Winning in customer intimacy requires excellence in all processes that directly
touch the customer. Most important are understanding their requirements,
acquiring and retaining them, and maintaining high levels of post-sales support.
Conclusions for Part 2
At the start of this Part, I said that this approach is amenable to various levels of
detail. The examples used here are at the simplest level and provide a normative
model for process prioritization based on Treacy and Wiersema’s Value Disciplines
(figures 4-6). There are no real surprises in the normative model, and that’s good
news. The methodology passes this simple validation test.
The rich and counter-intuitive insights arise when actual strategies, stakeholder
requirements, performance, and constraints are added to the picture. But unlike
individual gut feel, how these collective conclusions were reached can be explained to
others by following the logic trail. After stripping away what turns out to be the
unessential elements of the two matrices, a much simpler picture unfolds, one that is
easily used to illuminate that logic path. I refer you to Analog Devices' later version of its
Scorecard Story for such an example.
*This Part is an extension of a research project done for a major international
consulting company in 1995 and described in a working paper that I wrote that
year.
1 See for example: Yoji Akao (Editor), “Quality Function Deployment: Integrating Customer
Requirements into Product Design”, Productivity Press Inc., May 1990, ISBN: 0915299410
2 Michael Treacy and Fred Wiersema, "The Discipline of Market Leaders: Choose Your Customers,
Narrow Your Focus, Dominate Your Market", Addison Wesley Longman, Inc., 1994, ISBN:
0201406489
3 In this case, the possibility of creating “cells” within a process that are dedicated to a particular
customer segment should be investigated.
4 See for example: Shoji Shiba, Alan Graham, and David Walden, “A New American TQM: Four
Practical Revolutions in Management”, Productivity Press Inc., January 1993, ISBN: 1563270323, pg.
221.
5 Thomas H. Davenport, “Process Innovation: Reengineering Work Through Information
Technology”, Harvard Business School Press (October 1992) ISBN: 0875843662
HOW TO BUILD A BALANCED SCORECARD

Part 3: Selecting Scorecard Metrics*

by
Arthur M. Schneiderman

A balanced scorecard contains a concise set of strategically important measures.
They capture the vital few drivers of the organization’s future success. I’ve called
these scorecard measures “metrics” and defined them as:
“Metrics are a subset of measures of those processes whose
improvement is critical to the success of the organization”
Once we have identified those processes, we face the challenge of selecting this
subset from a seemingly endless list of possibilities. Usually this decision is based
on what measures are already available or can easily be obtained, benchmarking
studies, or executive edict. But there is a much better way of doing it.
Classifying Measures
Measures of a process come in two flavors: I call them “results measures” and
“process measures,” although each has many aliases:
Results Measures Process Measures
Output Input
Outcome Driving
Lagging Leading
External Internal
Reactive Predictive
Static Dynamic
Effect Causal
Retrospective Prospective
Dependent Independent
Whichever set of names you choose, there is a very important difference between
them:
Results measures characterize the output of the process. They are
the consequences of actions taken within it. Since they are
descriptors of the output, they relate directly or indirectly to things
that a customer of that process can sense or measure.
Process measures, on the other hand, are the internal measures
from within the process that determine these results. In most cases,
the customer has little or no interest in or knowledge of them.
The SIPOC Method
One very useful model for generating candidate measures is called the SIPOC
method. SIPOC stands for
Supplier→Input→Process→Output→Customer.
In using this model, we usually start by identifying all of the customers of the
process and determine their complete sets of requirements. Here, customers
include both the external purchaser of the final product or service as well as other
internal processes that are part of the organization’s value creating activities (or,
as we say in TQM: “The next process step is the customer.”). Through a process
called “Voice of the Customer” we translate these requirements into results
measures that characterize the output of the process in terms that are both
meaningful to and measurable by the process executors. This translation is
necessary because the customer often describes their requirements in words that
do not have a direct process counterpart.
Next, we reverse this procedure by identifying all of the external inputs that we
need in order to execute the process, define our requirements for each of these
inputs, and ideally working with our suppliers, translate them back into a set of
specifications that are expressed in the supplier’s own language (“Voice of the
Supplier”).
Output measures and their associated quantifiable customer requirements
(Output→Customer) are clearly results measures. Measures associated with steps
internal to the process (Process) are obviously process measures. But what about
input and supplier measures (Supplier→Input)? Symmetry would suggest that since
they are results measures of the supplier’s value creation process, they must also
be results measures for our process. But is that necessarily so? In other words,
can a measure that is a results measure for an upstream process be a process
measure in a subsequent step? The answer here is a little bit tricky.
What is different about Supplier→Input measures is that we cannot improve them
directly from within our own process. We can only do so indirectly by changing
specifications, or suppliers, or through the redesign of our product and/or process
(“design for x-ability”). Their actual improvement is directly controllable only by
the supplier of that input. Often we have a limited ability to affect our supplier’s
control or improvement efforts (through partnering, for example) or to redesign
our products and/or processes. If that is the case, then we need to treat that
measure as a given (that is, a constant) and that measure’s classification into the
results or process category then becomes moot.
Generally speaking, indirectly changing an input measure requires the exercise of
a different internal process within our organization - the supplier selection process
by which we choose suppliers, and/or the product and process design processes.
Even in that case, it is difficult to argue that they are anything but results
measures. In other words, unless we include within our process sub-processes for
supplier selection and product/process redesign, we must view these measures as
the result measures of other internal or external processes.
Any given process is part of a system of interacting processes. This is one of the
important reasons why it’s critical to have sponsorship of all improvement efforts
by someone who is in a position to set appropriate boundaries and constraints to
that effort.
The Math of Metrics
From a mathematical point of view, the last alias-pair is the traditional choice of
terms. For each results measure, we can write a symbolic equation that relates
this dependent measure (or more correctly, “dependent variable”) to the
independent ones:
$$y_i = f_i(x_1, x_2, \ldots, x_n)$$
In words, this equation simply states that the dependent measure, yi, is a function
of (i.e. depends on, or is determined by) all of the independent measures: x1, x2,
up to xn, where n is the total number of independent process measures.
For example, if the process were baking a cake, then one dependent measure
would be the “lightness” (in the Language of the Customer) of the resulting cake,
measured by its density (in the Language of the Process) in grams per cubic
centimeter. Here, y1 would be cake density and the goal is for it to be in a
specific range: not too light, not too heavy, but just right. What about the x’s?
The list would include oven temperature, cooking time, amounts used of the
various ingredients, freshness of ingredients, etc. These are the measures that are
included or implied in a clear recipe (or Standard Operating Procedure (SOP)).
Other dependent measures would include moistness, sweetness, and flavor for
example and we could create instruments that would measure each of them, as
well as establishing each of their associated target ranges. Each dependent
measure would depend on one or more of the many independent measures.
Determining Drivers of Change
In general, we are trying to limit variation of and/or improve dependent measures
in order to make our product more attractive to its customers. So let’s look at
how this equation changes with changes in the independent measures:
$$\Delta y_i = \sum_{j=1}^{n} a_{ij}\,\Delta x_j$$
The symbol "∆" stands for a small change in the measure. So this equation says
that the change in a dependent measure is the sum of the weighted changes in all
of the independent measures. For very small changes in the measures,
mathematicians can show that this simple additive relationship holds in most
practical cases. The weights aij are sometimes called “influence coefficients” or
“impact parameters.”1 They represent the effect that a small change in the jth
independent measure has on the ith dependent measure. If aij is zero, then small
changes in its independent measure have no effect on that dependent measure. If
the value of aij is large compared to the other coefficients, then the dependent
measure is very sensitive to changes in that independent measure. It’s these
influential independent measures that are usually the targets for both process
control and process improvement and are therefore candidate scorecard metrics.
In process control, they are called “critical nodes.” By “locking” them, we assure
that variation in the dependent measures that they affect will be maintained
within a range that’s acceptable (but not necessarily satisfactory) to the customer.
For process improvement they indicate the likely root causes of the gap between
current and target results.
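When some model of the process is available, the influence coefficients can be estimated numerically by perturbing one independent measure at a time. A minimal sketch, assuming a toy "process model" that is invented for illustration:

```python
# Estimate influence coefficients a_j ~ dy/dx_j by a one-at-a-time
# finite difference on a process model f.
def estimate_influence(f, x, h=1e-6):
    """Return a_j ~= (f(x with x_j + h) - f(x)) / h for each independent measure."""
    y0 = f(x)
    coefficients = []
    for j in range(len(x)):
        x_perturbed = list(x)
        x_perturbed[j] += h
        coefficients.append((f(x_perturbed) - y0) / h)
    return coefficients

# Toy dependent measure: cake density as an invented function of oven
# temperature (deg C), baking time (min), and flour amount (g).
def cake_density(x):
    temperature, minutes, flour = x
    return 0.002 * temperature + 0.010 * minutes + 0.001 * flour

print(estimate_influence(cake_density, [175.0, 40.0, 500.0]))
# ~[0.002, 0.010, 0.001]: baking time has the largest influence coefficient
# here, making it the natural target for control and improvement.
```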
Unfortunately in practice, for large changes in the measures, this simple model is
often limited by two phenomena: “non-linearity” and “interaction.” Non-linearity
causes the influence coefficients to change (increase or decrease) for large
changes in their independent measure. Interaction occurs when interdependencies
develop between the various independent measures (they lose their
independence).
Some Simple Examples
The exact mathematical function takes on different forms for different dependent
measures and processes. Here are some examples:
Example 1:
The time required to execute a process from its start to its finish is called
its cycle time. If the various x’s are the cycle times, tj, for the internal
process steps that lie on the "critical path," then the total cycle time, τT,
is
$$\tau_T = \sum_{j} t_j$$
Example 2:
The overall yield of a process depends on the sequential yield of the
internal sub-process steps. Let’s say that if the process were perfect (no
internal yield loss), it would produce 100 output units. If the actual yield in
the first step is 90%, then only 90 potential outputs survive it to the next
step. If that step's yield were 80%, then only 80% of those 90, or 72, would
make it to the next step, etc. Therefore, the overall yield is given by:
$$Y_T = Y_1 \times Y_2 \times \cdots \times Y_n = \prod_{j} Y_j$$
For Example 1 above, all of the influence coefficients have a constant value of 1,
that is any increase or decrease in a critical path cycle time simply adds or
subtracts that change from the total cycle time. We could include non-critical
path sub-process cycle times, but their influence coefficients would all be zero
(until they became long enough to enter the critical path). On the other hand, for
example 2 it is straight forward to show that the influence coefficient is inversely
proportional to that sub-process’ yield (aTj=YT/Yj). What this means is that
improving a low yielding process step by 1% (for example from 25% to 26%) has a
greater impact on total yield than that same 1% improvement in a high yielding
process step (going from say 95% to 96%). In other words, lower yield process steps
have larger influence coefficients.
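Both examples are easy to verify numerically; the step data below are invented for illustration.

```python
# Numerical check of Examples 1 and 2.
from math import prod

# Example 1: total cycle time is the sum of critical-path step times,
# so every influence coefficient is exactly 1.
critical_path_times = [3.0, 5.0, 2.0]          # days per step (hypothetical)
total_cycle_time = sum(critical_path_times)    # 10.0 days

# Example 2: overall yield is the product of the sequential step yields,
# and the influence coefficient of step j is aTj = YT / Yj.
step_yields = [0.90, 0.80, 0.95]               # hypothetical step yields
overall_yield = prod(step_yields)              # 0.684

influence = [overall_yield / y for y in step_yields]
print([round(a, 3) for a in influence])        # [0.76, 0.855, 0.72]
# The lowest-yield step (80%) has the largest coefficient, confirming that
# improving it moves the overall yield the most.
```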
In many manufacturing environments, process or manufacturing engineers know
the mathematical relationships between the dependent and independent
measures. Usually they do this based on a physical or chemical theory of what’s
happening in the process. When this is the case, these experts can help in the
selection of those independent measures that are the principal drivers of change in
any given results metric. Once identified, these process metrics generally
represent the primary targets for improvement efforts and are tracked on the
appropriate scorecard.
Empirically Determining Process Metrics
As a rule-of-thumb, low influence coefficient independent measures vastly
outnumber the critical few (see Why Do Root Cause Analysis?). So trial-and-error
is not a viable option. Finding the process metrics in practice often ends up
requiring a mixture of both art and science.
When a theoretical equation does not exist or is not known, we need to resort to
empirical observation. Total Quality Management (TQM) employs teams that apply
the scientific methodology (the PDCA Cycle and the 7-Step Method) and basic
analysis tools (the 7 QC Tools) for identification of the root causes (process
metrics) of undesirable outcomes (results metrics). I’ve explained this process in
more detail in my article “Are There Limits to TQM?"
The vast majority of process improvements can be discovered using these simple
scientific tools. For more complex situations, three additional approaches are
sometimes used: heuristic techniques, design of experiments (DOE), and simulation
modeling.
Heuristic Methods
I once assisted a team trying to reduce defects in welded pipe used in the oil
industry. The particular defect was called “hook cracks” since they had the shape
of a fishhook. In stratifying defect data by shift, I discovered that one crew had
significantly lower defect levels than the others. I narrowed it down to the welder
operator and interviewed him in the hopes of documenting his “secret” so that this
best practice could be shared with the others. Each welder setting was specified
with a range determined by the industrial engineers. I asked him how he chose a
setting from within these ranges and his answer was “I can tell by the sound the
welder makes.” The other operators just tried to pick the mid-point. The IE’s
response: “Sound has nothing to do with weld quality.”
A few months later I visited an identical pipe mill in Japan where the operators
relied on an additional meter to adjust the mill settings. Using a microphone
placed near the weld site and connected to a measuring instrument (a spectrum
analyzer), their IE’s had determined that if the sound frequency was within a
certain range, a perfect weld was produced. Outside that range, the resulting
product was defective. What was the defect? No one remembered at first since
the discovery had been made several years before. Finding an old-timer they
came back with the answer: “Something called hook cracks.” Why should a good
weld have a certain pitch to the sound it made? There was no accepted
theoretical explanation; it simply worked. The Japanese IE’s were willing to
accept this heuristic observation while their American counterparts had discarded
it as scientifically baseless.
As another example, Kano2 observed an important non-linearity in the way
independent measures affect what we call customer satisfaction. He classified the independent
attributes that drive customer satisfaction (such as particular product features,
price, availability, reliability, etc.) into four categories:
[Figure: Kano's categories of customer requirements (attractive/delighter, one-dimensional/satisfier, must-be/dissatisfier, and indifferent), plotted as customer satisfaction versus the degree of fulfillment of the requirement.]
To place each independent measure into one of these categories, Kano developed
a structured multiple-choice survey tool. He then created a heuristic "decoder
ring" for determining the measure type from the responses to paired questions. By
understanding current performance and the type of measure, the user could then
rank all of the independent measures by their improvement's impact on customer
satisfaction.
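Kano's survey pairs a "functional" question (how would you feel if this attribute were present?) with a "dysfunctional" one (how would you feel if it were absent?). The lookup table below is the commonly published form of his evaluation table, sketched here from general knowledge of the method rather than from this paper:

```python
# Sketch of a Kano "decoder ring" based on the commonly published
# evaluation table; answers follow the standard Kano survey convention.
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]

# Rows = answer to the functional question; columns = answer to the
# dysfunctional question. A = attractive (delighter), O = one-dimensional
# (satisfier), M = must-be (dissatisfier), I = indifferent, R = reverse,
# Q = questionable (contradictory response).
KANO_TABLE = [
    # like  must-be  neutral  live-with  dislike
    ["Q",   "A",     "A",     "A",       "O"],  # like
    ["R",   "I",     "I",     "I",       "M"],  # must-be
    ["R",   "I",     "I",     "I",       "M"],  # neutral
    ["R",   "I",     "I",     "I",       "M"],  # live-with
    ["R",   "R",     "R",     "R",       "Q"],  # dislike
]

def kano_category(functional: str, dysfunctional: str) -> str:
    return KANO_TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

print(kano_category("like", "dislike"))  # 'O': one-dimensional (satisfier)
print(kano_category("like", "neutral"))  # 'A': attractive (delighter)
```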
In general, heuristic methods are based on empirical observation, not on any
underlying mathematical theory. They are often discovered through gut feel or
what I've called the "ins": instinct, intuition, insight, inspiration, innovation,
invention, etc. Their justification is therefore based on the fact that they simply
work in practice. Although we preach "management by fact," it is important to
also acknowledge that in many instances, and through mechanisms that we do not
even understand, some people are able to see through process complexity and
identify the underlying drivers.
Design of Experiments and the Taguchi Method
Another way to determine the influence coefficients would be to vary each of the
independent measures over an appropriate range while holding all of the others
constant and observing its effect on the dependent measure. By doing this we
could also identify their range of independence. But in many instances, the
number of required experiments would be impractical in both time and cost.
Fortunately, mathematicians have devised efficient experimental sequences in
which we can vary more than one independent measure at the same time. The
first to do this was Euler (1783) in what are called Latin Squares. Today such
experimental schemes go under the name "Design of Experiments" or DOE. DOE is
a popular tool used by six-sigma practitioners, and facility with it is usually a
prerequisite for black-belt certification.
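To give a flavor of the idea, here is a minimal two-level full-factorial design with main-effect estimation. Real applications use fractional factorials or Taguchi's orthogonal arrays to cut the run count further, and the response function below is an invented stand-in for running the actual process.

```python
# Minimal two-level (coded -1/+1) full-factorial experiment with
# main-effect estimation; 3 factors -> 2**3 = 8 runs.
from itertools import product

factors = ["temperature", "time", "pressure"]

def run_experiment(setting):
    """Invented stand-in for actually running the process and measuring y."""
    t, m, p = setting
    return 50.0 + 8.0 * t + 2.0 * m - 1.0 * p

design = list(product((-1, +1), repeat=len(factors)))
responses = [run_experiment(s) for s in design]

# Main effect of a factor = mean response at +1 minus mean response at -1.
for j, name in enumerate(factors):
    high = [y for s, y in zip(design, responses) if s[j] == +1]
    low = [y for s, y in zip(design, responses) if s[j] == -1]
    print(f"{name}: main effect = {sum(high)/len(high) - sum(low)/len(low):+.1f}")
# temperature: +16.0, time: +4.0, pressure: -2.0 -- temperature dominates,
# making it the leading candidate process metric.
```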
Genichi Taguchi has attempted to demystify DOE by creating a somewhat
simplified procedure that, although not as mathematically rigorous, usually gives an
adequate answer. In doing so, he followed the example set by Walter Shewhart in
his pioneering efforts (c. 1930) to bring statistical techniques to the shop floor
environment.
Most statistical software packages now include a DOE and/or Taguchi Method
capability (see for example Minitab, which is used in several six sigma initiatives).
However, even with current software support, their use is beyond the capability of
most improvement team members and requires expert assistance (e.g. staff
statisticians or six sigma black belts). Fortunately, the vast majority of
improvement efforts do not require this level of analysis in order to uncover the
relevant independent measures.
Simulation Modeling
In a process simulation, we attempt to dynamically reproduce its important
characteristics in a computer model. By “running” the model, we can understand
the complex interrelationships that exist within the process and test the effect of
changes. Simple simulations are often done using spreadsheets such as Microsoft
Excel or Lotus 1-2-3. For example, the columns in the spreadsheet might represent
sequential times (e.g. months or quarters) while the formulas for each period’s
cells depend on several results calculated for an earlier period. Many software
packages have specialized structures that make them particularly suitable for
certain types of process simulations.
Flowcharting is an essential step in process improvement. Several of the current
flowcharting software packages also include a simulation capability (I use Scitor
Process) that is very helpful in finding internal leverage points, particularly when
there are complex process flows and/or random variation is important.
Example 3:
A biotech company’s product involved a new medical procedure that
required special approval from the patient’s insurance company for
reimbursement. Long average approval times were having a serious adverse
financial impact on the company. Furthermore, the variation (standard
deviation) in approval times was also unacceptably high. What could they
do to improve this result metric? There were many theories as to the root
cause, most of which involved problems in someone else’s area. The
process flow was complicated by many alternate paths and frequent
“resubmittal loops.” A simulation of the process (using the Monte Carlo
method), based on probable paths at each process node, explained both the
average and variation in approval time and pointed directly at the
independent measures whose improvement would have the greatest impact.
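In the spirit of this example, here is a Monte Carlo sketch of an approval process containing a resubmittal loop. The flow, probabilities, and delays are all invented; the point is the structure: random path choices at each node, many simulated cases, and the resulting mean and variation of the result metric.

```python
# Monte Carlo sketch of an approval process with a resubmittal loop.
import random
import statistics

def simulate_one_approval(p_resubmit):
    """Return total approval time in days for one simulated case (invented flow)."""
    days = random.uniform(2, 5)              # prepare and submit paperwork
    while random.random() < p_resubmit:      # node: request bounced back?
        days += random.uniform(5, 15)        # rework and resubmit
    days += random.uniform(3, 10)            # insurer's final review
    return days

random.seed(1)
for p in (0.35, 0.10):
    runs = [simulate_one_approval(p) for _ in range(10_000)]
    print(f"resubmittal rate {p:.0%}: mean = {statistics.mean(runs):5.1f} days, "
          f"std dev = {statistics.stdev(runs):4.1f} days")
# Lowering the resubmittal rate shrinks both the average and the variation,
# identifying it as the high-influence independent measure.
```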
For complex processes that contain time lags as well as subjective variables,
System Dynamics modeling can also be very valuable (here I use Vensim). System
Dynamics modeling has the advantage that it can easily accommodate both non-
linearity and interdependencies, although its successful use does take considerable
modeling skill.
Example 4:
To successfully compete in a new market segment, an electronics company
needed major improvements in its delivery performance. Stratification of
late shipment data showed that it was significantly higher the last week of
the quarter. Again, there were many theories as to why. A system
dynamics model of the entire order fulfillment process (order receipt to
payment by customer) uncovered the answer and it was closely related to a
similar phenomenon known as the end-of-quarter revenue “hockey stick.”
Shipment linearity implies that with constant revenues, one-thirteenth of
the quarterly total accumulates each week. In many organizations, there is
a shortfall and the revenue falls below this linear goal. Miraculously, in the
last week or two of the quarter, a few heroes appear and through their
superhuman efforts the target is achieved and they are appropriately
rewarded. The shape of the resulting weekly cumulative revenue curve is
much like that of a hockey stick, whence its name.
The model explained what was happening. The added revenue at the end of
the quarter came from early shipments of large dollar orders not due until
the first few weeks of the following quarter. With limited capacity, this was
at the expense of many small orders due in that hectic end-of-quarter
period. Even worse, once started, this practice triggered a perpetual cycle
where only small quantity unfilled orders were due for shipment at the start
of the next quarter thus creating that initial revenue shortfall. The
solution: just as the cycle was started by a one-time action, it needed to be
ended the same way -- just stop doing it! Unfortunately, this results in a
temporary sales shortfall that only goes unnoticed if it is hidden by rapid
revenue growth. By phasing the practice out over several quarters, the
adverse revenue impact was minimized.
Without the use of a simulation model, it would have been difficult to
identify either the root cause or a palatable corrective action plan.
Choosing the Scorecard Metric
If improving a particular results measure is a strategic goal, then improvement
efforts should be focused on the process measures that will have the highest
impact on its improvement. They are usually the process measures with the
largest influence coefficients. What does that imply about choosing scorecard
metrics?
Most scorecards that I’ve seen are heavily populated with results metrics. No
doubt this results from the all too common management attitude: “I don’t care
how you do it, just do it!” I strongly believe that ALL scorecard metrics must be
directly actionable by their owner. Therefore, it’s the underlying process metrics,
not the results metrics that belong on a scorecard. If the improvement goals for
the process metrics are achieved, then we can be assured that the desired results
will follow, assuming we have identified these drivers correctly.
For example, dieters often tend to focus on their body weight (a results metric)
rather than its independent measures: exercise along with calorie, protein, fat,
and carbohydrate consumption. Nutritionists now believe that successful diets
involve lifestyle (aka process) changes that act on these independent measures.
Get them right and over time you will achieve and sustain your weight goal. I
wonder to what extent this results focus explains the statistic that 95% of dieters
fail to maintain their weight loss.
I would argue that results metrics only belong on a scorecard when their associated
process metrics are on two or more subordinate scorecards. In this case, the job
of the owner of the results metric is not its improvement, but sponsorship of the
subordinate scorecards. That sponsorship includes guidance, monitoring and
diagnosis, organizational troubleshooting, resourcing, communicating, etc. for the
individuals and teams responsible for the subordinate scorecards. There is an
important place for results measures, but it is mainly in the detection step in
process control, not improvement.
The Japanese have a saying “Focus on process, not on results.” In no case is this
truer than in the selection of scorecard metrics. The key to linking strategy to
action is not the balanced scorecard itself; it is this underlying process focus.
1 The influence coefficients are given by the partial derivative of fi with respect to xj: aij = ∂fi/∂xj.
2 See for example: Shoji Shiba, Alan Graham, and David Walden, “A New American TQM: Four
Practical Revolutions in Management” Productivity Press Inc., January 1993, ISBN: 1563270323, pg.
221.