service engineers (CSEs) who repair the copiers and printers installed at
customer sites. Several design iterations took it from an experiment initiated
by researchers at Xerox's Palo Alto Research Center (PARC) to measure the
value of codified field experience, to a system that is now deployed worldwide. By focusing on communities, and how they share knowledge in
ordinary practice, we developed a set of questions and a methodology that
we hope will enable others to undertake similar communal knowledge-sharing efforts. However, deploying any communal-knowledge system necessarily involves pushing changes within a corporate culture and dealing
with that culture's often antagonistic "immune system response"; understanding the Eureka experience, as well as the problems facing all knowledge
systems that have to be deployed in the real world, requires recognition of
these challenges.
Although our analysis recounts the story of Eureka in Xerox, it has a
highly general theme: large organizations face certain knowledge problems
when established doctrine - well-known policies, procedures, beliefs - proves
inadequate or even erroneous as a guide to effective action, and traditional
organizational methods for generating new knowledge and managing its
distribution plainly won't suffice. In this not-uncommon situation,
supporting the creativity and inventiveness of those members of the organization closest to the problem, warranting the insights, and then sharing
those new solutions widely can make all the difference between inspired
success and abject failure. This is exactly the sort of inventiveness and
sharing that Eureka was designed to enable.
documentation and training that simply described the principles of operation of a product, leaving it to the technicians to determine the appropriate repairs. They moved toward what was called "directive" repair and
adjustment procedures. These are documented instructions in the form of
a decision tree. Each decision step was of the form "do the following setup
and test; make the following measurement (or observation); if the result is
A, do X; else do Y." The intuition embodied in this form of documentation
is that technicians need only be trained how to correctly use the documentation to diagnose any machine failure.
Our group at PARC had a background in Artificial Intelligence, and
in modeling electromechanical systems. In particular, we had expertise in
building programs that perform efficient diagnoses of machine faults given
an abnormal symptom and the ability to obtain observations/measurements
from the machine (de Kleer and Williams, 1987). As a test of this technology, we decided to build a model of one complex module of a particular
photocopier, and demonstrate how a program could provide guidance for a
technician in its diagnosis and repair. The hypothesis was that if we were
successful, the model-based expert system running on a laptop carried by
technicians in the field could replace the documentation and support an
optimized work process for isolating faults.
Our group succeeded in building RAPPER (Bell et al., 1991), a model-based expert system that used a model of the recirculating document handler
to provide optimized guidance in isolation of faults in that module. The
model was constructed with the help of Xerox engineers and technicians;
it captured the behavior of all the faults that could be found using the
standard documentation. We showed it to some technicians who said,
"That's amazing." "Would it really be useful if we had a complete model
for the machine?" we asked proudly. "Not really - though it is amazing - rather like a bear dancing. It is surprising to see it do it at all."
Our group probed further for the issues that underlay the technicians'
negative response. The issues turned out to be rooted in the everyday
exigencies of work for technicians and the practices they had developed in
response. First, small optimizations of the time required to isolate any
known fault were not worth much. Only a small portion of an average two-hour service call was actually devoted to diagnosis. Second, technicians
usually knew the procedures for the common faults and so required no
guidance. The thorniest issues, though, were more surprising. For many
products (those produced by Xerox's Japanese partners) there were no full
descriptions of their operation. Additionally, the diagnostic documents
were produced from experience in inserting faults into the machines in a
laboratory and then recording the symptoms. But the hardest problems in
the field for all machines were not those covered by the documentation - they were new problems.
There are a number of reasons why new, unexpected problems occur in
the field rather than in laboratory tests. Xerographic machines work differently in particular environments - for instance, extremes of temperature,
humidity, and dirt. As machines age, components develop new failure
modes; and when a machine comes on the market, there may be new
components with unexpected interactions. With vibrations and other slow
acting processes, faults develop that are intermittent, and these are the
hardest to diagnose. These difficult cases caused most of the long calls, driving up average call times.
Accordingly, we decided to observe what technicians actually did in their
day-to-day practice. We started with technicians in the United States,
accompanying them on their service calls. Most of the time, technicians
would approach a machine, talk to the customer, and know exactly what to
do to return the machine to good working order. Once in a while, they ran
into a problem that they hadn't seen before and for which there was no
answer in the documentation. They would try to solve these problems based
on their knowledge of the machine. This often worked, but sometimes they
were stuck. They might call a buddy for ideas, using their radios, or turn to
the experts - former technicians now serving as field engineers - who were
part of the escalation process. When unusual problems were solved, they
would often tell the stories about these successes at meetings with their
coworkers. The stories, now in the possession of the community, could then
be used in similar gatherings and further modified or elaborated (Orr, 1996;
Brown and Duguid, 1991).
This practice pointed to the importance of noncanonical knowledge
generated and shared within the service community. It suggested to us that
we basically could stand Artificial Intelligence on its head, with the work
community itself becoming a "living expert system" and with ideas flowing
upward from the people actually engaged in work on the organization's
frontlines (cf. Doubler, 1994; quoted in Ambrose, 1997).
The Colombus Experiment
the instructions in the manual. Technicians would show this visitor their
pristine service manuals. When it became known that he was truly a
researcher, however, and interested in what they actually did in the field,
they showed him their "second set of books" - marked up with their
annotations on the recommended processes, and with clever solutions
they had discovered (Bell et al., 1997). Many technicians, particularly
those with the most experience, also had cheat sheets that captured their
invented solutions to undocumented problems. Technicians who were
starting on a new machine often asked more experienced technicians for
copies of these cheat sheets.
In a series of workshops in France, we asked technicians whether they
thought they had knowledge that was valuable to share beyond their workgroup. They were not sure, though they shared some stories about difficult
"problem machines" and how they repaired them. The responses of others
in the workshop were comments like, "Knowing that, I could have saved
five hours last week." Our team asked the groups whether they would be
willing to share these solutions more widely. Would a group that shared its
hard-won knowledge lose their performance advantage in benchmark comparisons to other groups? And would it be worth the time and effort to
document this local knowledge, so it could become portable, could travel
beyond the confines of the local work group?
Our team believed that this knowledge could have significant value, and
received the backing of the French service organization - including management and the "tigers," expert field engineers who played a key
role in the escalation process - to try an experiment. This required three
things: an initial knowledge base of tips; an easy-to-use vehicle for distributing this knowledge so technicians would turn to it; and an experimental
design that would let us conduct a valid test.
We elected to focus on a single product, the Xerox 5090, a high-speed
complex copier that had been in the field for a couple of years. The tigers
edited and validated the tips volunteered at the workshops and added
more tips that they themselves used, building the initial case
base. There were between 100 and 200 tips in all, structured in terms of
Symptom, Cause, Test, and Action. A machine-presentable version of the
standard 5090 documentation was also made available.
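The Symptom, Cause, Test, Action structure amounts to a simple four-field record. A hypothetical sketch follows; only the field names come from the text, and the example content is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Tip:
    """One field tip in the Symptom/Cause/Test/Action structure described
    in the text; the example content below is invented."""
    symptom: str   # what the technician observes at the machine
    cause: str     # the underlying fault
    test: str      # how to confirm the cause
    action: str    # the repair that resolves it

tip = Tip(
    symptom="intermittent misfeeds in the document handler",
    cause="worn feed-roller surface",
    test="inspect roller for glazing; run feed-timing check",
    action="replace feed roller and recalibrate timing",
)
print(tip.action)   # prints: replace feed roller and recalibrate timing
```

A structure this simple is deliberate: the value of a tip lies in the hard-won content of its fields, not in an elaborate schema.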
The vehicle for delivery was a standard laptop, which was just starting
to be deployed in the United States but not scheduled for deployment in
Europe for at least another year. Engineers on our team wrote Colombus (the
French spelling of Columbus), a software package designed for ease of use
by the technicians. When the laptop was turned on, the application started
acceptance would also appear on each tip, providing open feedback for
how well the process was working.
Integration with work practice: With the tip database on the Minitel,
new screens were added to the call-handling process. These allowed CSEs,
when they decided they might be faced with a difficult call, to
search the database based on key symptoms taken from the call
record. They could also get directly into the database from a
customer's site if they could get access to the local Minitel. The
organization of the tips was simplified to Problem, Cause, Solution.
Entering new tips, and commenting and voting on the success of
existing tips would usually be done from the office or at home in the
evening.
Implementation and deployment: Since implementing this system was
a change from the plans that WCS had for the ultimate solution,
they declined to finance this countrywide experiment. In partnership, PARC and Xerox France paid for the implementation of this
Minitel-based system. The software was ready in four months, but
the key issue was how the field should be encouraged to use it. A
champion from the French tiger group and a member of our group
went around the country talking with each group of CSEs about
service problems and how they could use this system to improve
themselves and others. They met with over 60 product leaders, and
helped train all 1300 French technicians. Participation was carefully
tracked, both in terms of the number of times that the database was
referred to, and the number of new tips entered. There were strong
differences among workgroups: in cities of the same size, use could
be high or quite low. Revisiting these regions, providing some training through examples of use, and reintroducing the purpose of the
system helped encourage broad participation. This strategy can be
described as "hands-on participatory implementation," which was a
radical departure from the top-down, cascade model for program
implementation commonly used in Xerox.
Experience with use: The Minitel system opened with databases for three
products. By the end of the first year, CSEs had opened over forty
databases encompassing products from convenience copiers to high-end printers. Also, more than one new tip was being added to the
database each day. Over 77% of the tips were being validated within
five working days, and 80-90% of the submitted tips were accepted.
Participation was extraordinary. Over 20% of the CSEs had
submitted a validated tip, and CSEs were consulting the tip database
on average two or more times a week.
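The workflow behind these figures - CSEs submitting tips, validators accepting or rejecting them within days, and the community consulting and voting on published tips - can be sketched roughly as follows. This is an illustrative model only, not the actual Minitel software; the class, its methods, and the example tip are all assumptions:

```python
# Rough sketch of the Eureka tip lifecycle as described in the text:
# submit -> validate -> publish with open feedback (votes) -> search.

class TipDatabase:
    def __init__(self):
        self.pending = []     # submitted tips, awaiting validation
        self.published = []   # validated tips, each with a vote tally

    def submit(self, problem, cause, solution):
        """A CSE submits a tip in the simplified Problem/Cause/Solution form."""
        self.pending.append({"problem": problem, "cause": cause,
                             "solution": solution})

    def validate(self, accept):
        """A validator reviews the oldest pending tip; accepted tips are
        published with an empty feedback tally."""
        tip = self.pending.pop(0)
        if accept:
            tip["votes"] = {"worked": 0, "did_not_work": 0}
            self.published.append(tip)
        return tip

    def vote(self, index, worked):
        """Open feedback: any CSE can report whether a published tip worked."""
        key = "worked" if worked else "did_not_work"
        self.published[index]["votes"][key] += 1

    def search(self, keyword):
        """Look up published tips by a key symptom word from the call record."""
        return [t for t in self.published
                if keyword.lower() in t["problem"].lower()]

db = TipDatabase()
db.submit("streaks on copies", "dirty corona wire", "clean corona assembly")
db.validate(accept=True)
db.vote(0, worked=True)
print(len(db.search("streaks")))   # prints: 1
```

Even in this toy form, the design choice stands out: validation and feedback are community activities layered on a plain shared database, not features of an expert system.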
So what were the technicians getting out of these tip documents? What did
they think it was important to share? The tip content included diagnostic
information, since that was obviously crucial, but there was much more.
Some examples:
Workarounds
Comments on documentation
Before replacing the sensor P as the manual says in Repair Procedure R, inspect
reflector Y because it might be out of alignment, causing this symptom.
Does this quite varied content make a difference to the company?
Compared to the rest of Europe, France went from being just an average
or below-average service performer to being a benchmark performer. The
French service metrics were soon better than the European average by
5-20%, depending on the product. This unexpected "experiment" was
made possible by the fact that laptops were just being rolled out to Europe
around that time, and no other European country had the communication
infrastructure in place to adopt a wide-scale Eureka process.
On a more qualitative basis, we saw three different kinds of effects
of France Eureka on technician practice. In preparation for a call, using
Eureka was helpful in ensuring that a part likely to be causing failure could
be picked up before going to the customer site; this reduced broken calls.
On site, Eureka accelerated and improved the diagnoses that CSEs made for
the hard problems and reduced the number of times a call had to
be escalated, thus reducing the load on the technical support hot line for
recurrent calls about redundant problems. It also significantly reduced the
learning curve for technicians on new products introduced into the field.
Spreading Eureka
The laptop was not widely used. While the corporation was committed to the computer as a platform to dispense technical information and manage work processes in the field, technicians had long
depended on their traditional skills and practices and most were
skeptical about the need for this technology.
CD-ROMs or floppies were the primary means of distributing
information to the laptop resulting in sporadic and slow distribution.
Separate applications were used for call management (dispatching
and tracking all customer service requests), for the now electronically presented documentation (referred to as "e-doc"), and for parts
inventory and ordering management. Thus, there was no easy way to connect these independent applications to a search of the Eureka tip knowledge base.
How we dealt with these issues provides additional evidence for how technical problems necessarily interact with the social and community context.
We couldn't directly solve the laptop acceptance problem. But we hoped
that Eureka would prove to be valuable enough that technicians would
be motivated to use the computer in their work. To achieve this, our local
champion from Dorval Technical Support hit upon the idea of taking
existing technical information databases, largely created by field engineers,
that had been distributed in paper form to the field and converting them
to Eureka tip format, to our Problem, Cause, Solution framework. This
information was already valued by the field but hard to access or even keep
track of in paper form. (Over time, this has resulted in Eureka becoming a
digital portal to the field for all technical service information.)
Even more complex than updating the knowledge base was upgrading
the software with changes. It required distributing floppy disks to everyone
in the field, and hoping that they were able to use them in a timely manner.
This created so many problems that a system for upgrading by downloading
the software components from the BBS was eventually implemented.
Finally, Eureka was no longer only an experiment initiated by PARC
researchers with few management expectations. Even the Minitel
implementation, although it involved all French technicians, was still experimental in character. In Canada, Eureka was now an official, management-sponsored program, with certain expectations for service performance
improvement and with some financial support from one of Xerox's major
business divisions. But management had never dealt with a program where
the norm was that requirements emerged from user experience with an
initial prototype, and where rapid prototyping with a consistent pilot
group was conducted before any large-scale deployment. They expected
programs where requirements were first "locked in," with traditional methods for interviewing stakeholders to collect them, where large-scale deployment followed quickly on the heels of limited pilot testing, and where widely
spaced, scheduled releases were used for any changes.
Given these beliefs, managers tried to set expectations for when they
wanted things done independent of our process for rapid prototyping and
debugging with extensive community involvement. This clash of design and
deployment methods - one expecting people to adapt to new tools as given
to them, the other expecting a coevolution of people's work practice and the
tools they were using to support and enhance that practice - had negative
results. When schedules could not be kept, the field was disappointed and
higher-level managers lost confidence in the ability of the Eureka team to
deliver; the team was discouraged.
Despite these conflicts, Eureka was successfully launched for twenty
products in only six months (starting in early 1997). For another four
months, the Canadian champion trained the product specialists, with the
specialists then training CSEs; a training video on CD-ROM was also
created and distributed, reducing the need for this direct training.
Confronting the Organizational Challenge: Eureka Moves
to the United States
While Eureka had proven successful in France and Canada, these two
countries each had a relatively small service force and geographic area.
The United States was an entirely different situation: in 1997 there were
close to 10,000 technicians spread over a huge area. Most important, the
organizational dynamics were quite different in the United States: much more
bureaucratic and hierarchical, due largely to its size and thus complexity.
The U.S. rollout was gated by the deployment of laptops to all the CSEs.
Since the United States and Canada shared a common infrastructure, there
were no new technical issues. All the issues revolved around adapting the
process to a differently shaped organization, and to the scale of the service
force. Canada had around 100 validators; if the same percentage were to
validate tips in the United States, 800 would be needed. It is easy to create a
community of 100, where people can get to know each other and the skills
they bring to a task. The same is not true when scaled by a factor of eight.
Accordingly, U.S. service management decided that validation would take
place locally, in each CBU, with local groups selecting a validator for each
product family (the United States does not have product specialists associated with each CBU). As with Canada, the validators would take on the task
without any change in the rest of their workload.
The initial foray in the United States began in 1997 and involved a pilot
program in several locations. The pilot only took hold, however, where there
were local champions in the service force, as was the case with both France
and Canada. Beginning in June 1998, Eureka CD-ROMs were distributed via
the mail to field managers, who were then expected to distribute them to
technicians in their work groups. The CD included a Computer-Based
Training module, but no hands-on training or direct engagement with technicians around the program was planned. This strategy was designed for mass
distribution of software or documentation, and cannot be uniformly effective
with sociotechnical systems like Eureka. In places where people knew about
the system and became champions, or where we engaged the local group, it
quickly gained traction and was widely used; in other places, it became one of
twelve programs that had to be somehow implemented over the next quarter
and adoption was correspondingly slow.
An alternative, "participatory deployment" strategy for an organization
with 10,000 technicians like the United States would have started with the
pilot champions, the technicians and managers who were most knowledgeable about Eureka, going to a few other locations in the U.S. service
community and talking about their experiences and ideas. Because these
people were peers of technicians at those locations, they would be trusted.
This would have created more local champions and knowledgeable users,
who could then have gone to still more locations with a similar sharing
method; a direct, hands-on engagement around Eureka would have spread
across the entire country.
Having reviewed the story of how Eureka was designed and deployed in
three different service communities in its first years of operation, focusing
on the adaptation to their different organizational and technical environments, we now recount our efforts at that time to answer three important
questions about its everyday use, through careful field studies: How did
users respond to their experiences with Eureka? How did they adapt it to
their work practice? What were the barriers to more effective use?
that would both hold the knowledge base and support the simple workflow
from authoring to validating to subscription management to downloading.
Simplification and stripping out the unnecessary were goals we hoped to
achieve.
Eureka was so successful that the system was now an official, "mainline"
program in the company. This meant that requirements from many places
poured in, and we were constantly trying to balance our belief that simpler
was better with corporate managers' beliefs that if Eureka was the answer
they wanted to be the one to generate the question.
Most notable of these was a service manager's intuition that the best way
to simplify the training for technicians was to embed the Eureka application
in the standard Internet browser (e.g., Internet Explorer). We tried to
explain that this would complicate the implementation significantly because
the software would then be dependent on exactly which version of the
browser, which operating system, and which service packs were loaded on
each machine. The response was that this was not as important as simplicity
of training. We explained that knowing how to browse only minimally
overlapped with knowing how to use the functionality of Eureka for searching, authoring, and so forth. But as manager of this major program with
years of experience working with the field, he insisted that the tradeoff
would be worth it. The program subsequently made that choice; unfortunately, as we carried out the Eureka II release, we found that there were far
more complications than imagined, and the delay was significant. The issue
is not a manager's mistake, but rather how decisions are made in the standard software development process, in contrast to the bricolage approach
with which we had started.
In addition, because the project now had the attention of top management, schedules were sometimes based on managers' desires to achieve
certain goals by certain times, rather than on the work that was necessary
to achieve the desired state as indicated by the implementers. We understood the pressures that these managers felt; nonetheless, this often led to
internal battles, and then to the appearance of slipped schedules as managers'
pronouncements could not be realized. It made us very aware of the difference between singing in the spotlight and singing in the shower.
In the years since Eureka II was deployed, however, that spotlight
eventually shifted back to the technician community, which is where it
belongs. PARC certainly played a leadership role in the conceptualization
and design of the Eureka system at the start and through its early years, but
as our story makes plain, the very idea of sharing new solutions to vexing
technical problems was invented by the CSE work community - it was a
basic feature of the everyday practice in their local groups. And it was
fundamentally different in character from using the service manual RAPs
(for a recent and detailed comparison between practices using the RAPs
versus the Eureka tips, see Vinkhuyzen and Whalen, 2007, and Yamauchi,
Whalen, and Bobrow, 2003). Having observed this practice in the field,
PARC collaborated with the community to develop technology that could
dramatically enhance this practice, making it possible to share ideas all
around the world. But from the beginning, the technician community
managed the actual tip sharing process, with peer validation as a key element. And it was not too long before community leaders took over the
technical maintenance and future development of the system from PARC,
which was always our goal. Accordingly, Eureka is community owned and
maintained in a way that few if any corporate knowledge systems are today.
We firmly believe this is what has made - and continues to make - Eureka so
remarkably and perhaps even uniquely successful.
Over the past eight or so years, under the leadership of CSEs, Eureka has
of course continued to evolve in design and to grow in size, participation, and value. By 2005, for example, the number of solves documented worldwide was nearly 800,000. Technicians across every part of the globe where
Xerox products are installed now use the system, and it is estimated that this
pervasive use saves the company close to $20 million each year in
service costs. With respect to size in particular, there are over 100,000 tips
registered in the database.
This has led to new concerns. For instance, how do you maintain a database of this size and, more important, how do you deal with the inevitable
redundancies and complexities for a system of this scale? Although the tips are
still organized by product family, and so the size of any single family's database
remains manageable, searching for and finding the "right" knowledge amongst
all this information - recall that e-docs and all other technical information were
integrated into the system in Eureka II - is a challenge. Nevertheless, CSEs
have found ways to deal with these challenges. For example, to handle apparent
redundancies, technicians have elected to keep most of the tips in question in
the database, finding that whatever the similarities, each tip makes its own
distinct contribution in some fashion. Moreover, each tip represents the efforts
of a community member, and they are reluctant to delete the product of that
work. There are some efforts being made at combining information, though,
preserving the recognition earned by all contributing parties. This is more
evidence of the complete intertwining of the technical with the social with
Eureka, the knowledge system and the community that creates and culturally
"owns" that knowledge.
they felt they had to design a uniform, worldwide solution. The power was
for the most part wrested from the users' hands.
Related to this, management was familiar with distributing corporate
programs in a hierarchical ("cascading") fashion. Eureka works much better
when peers mentor each other on the system's uses. The French and
Canadian experiences demonstrated the value of participatory deployment.
Unfortunately, as we have explained, this was not how the rollout in the
United States, by far the largest Xerox organization, was done. While the
U.S. deployment was eventually very successful, the problems that developed because technicians weren't closely involved in the process (especially
those technicians who had the most experience with Eureka and could have
been an invaluable resource for their coworkers in other parts of the
country) hampered the project's achievements over the first year and more.
This can be explained in large part by recognizing that WCS, and Xerox
Corporation more generally, have emphasized cost savings in field service
and have viewed the service organization as a cost center. As a result, significant investment in service is discouraged, unless it is matched at the time of
investment with equivalent cost reductions (instead of seeing investment as a
front-end expense that will have a back-end, long-term payoff). This had
consequences for Eureka deployment, and for how the program now operates. When the service organization was asked to change how it viewed the
importance of field knowledge, the field technicians who were charged with
contributing to the new knowledge base and maintaining its quality were not
given enough time to learn and engage in the change activities, because this
would have required a significant investment in time (and thus money). As a
result, there was not enough time to reflect on and practice the new way.
Although the president of Xerox said that Eureka was a key program, the
organization did not want (or was not able) to make the necessary investment to build the kind of process infrastructure - such as training
resources and time - to make it more successful. More important, with
respect to program operation, technicians were expected to author tips
and validators were asked to provide rapid turnaround and validation of
submitted tips - a critical role in the process - without giving any of them
relief from their current workloads.
Finally, we have to ask: did Xerox become a better Learning Organization
as a result of the Eureka project? One way to answer the question would be
to point to the impact on field-service performance, and to the degree to
which most technicians use the program on a regular basis in many different
ways, including especially learning new ideas and approaches to machine
repair. The service organization, taken as a whole, has also been transformed