
VOL 5 NO 4

Oct - Dec 2007


STRATEGIES FOR
ENTERPRISE IT
SETLabs Briefings
Advisory Board
Aveejeet Palit
Principal Solutions Manager,
System Integration Practice
Gaurav Rastogi
Associate Vice President,
Global Sales Effectiveness

George Eby Mathew
Principal Researcher,
Software Engineering &
Technology Labs
Kochikar V P PhD
Associate Vice President,
Education & Research Unit
Raj Joshi
Managing Director,
Infosys Consulting Inc.
Rajiv Narvekar PhD
Manager,
R&D Strategy
Software Engineering &
Technology Labs
Ranganath M
Vice President & Head,
Domain Competency Group
Srinivas Uppaluri
Vice President & Global Head,
Marketing
Subu Goparaju
Vice President & Head,
Software Engineering &
Technology Labs
Pro-active, Not Reactive IT
Modern-day business is beset with changing operating paradigms. The future is fraught with more risks and uncertainties than ever before. Myopic strategies have further complicated the way business risks are mitigated. When it comes to enterprise IT, all hell breaks loose. Why? Because IT is nobody's child in an enterprise. While everyone understands the need for IT, questions arise on the tangible value it creates. More often than not, IT managers are constrained to work under the burden of proving to their business peers IT's, and consequently their own, raison d'être. This leads them to react to business exigencies rather than pro-actively solicit support to bolster existing enterprise IT.

Rationing of IT investments to different enterprise projects, alignment of IT with business and measurement of the business value of IT demand a pro-active IT strategy rather than a reactive one. Remember, a stitch in time saves nine! This issue of SETLabs Briefings is a collection of past articles which we feel are pertinent in an enterprise's IT journey, even today.
Pro-active organizations have long understood the need for legacy modernization. Reactive ones have scrambled to quickly embrace it. Lethargic ones are still figuring out a way to do so. Literature on modernization is over a decade old; however, the need to understand the "how to" still remains. We present to you two papers centered around this question. While one of them directs you to two choices available for legacy integration, the other provides a compelling case for the adoption of SOA to minimize risks inherent in modernization initiatives.
Measuring the business value of IT has always been high on the agenda of IT naysayers. Strategic alignment of IT with business has led to the "anything that cannot be measured can be done away with" philosophy. Whether it is a KPI, a dashboard or some other evaluation metric does not matter, as long as the value of IT can be measured. Once again, we have two papers around this idea.
Proactive IT strategy calls for anticipating IT risks and managing them effectively. A Standish Group report published early this year reveals that only about 35 percent of software projects could be completed on time and within allocated budgets. Is this figure inspiring enough to commit any investments into IT? While there is no gainsaying the fact that there has been a marked improvement in IT project completion rates over the last decade or so, the rate at which enterprise IT initiatives are inching toward success is worrisome. In one of the papers, the authors take you through an Integrated Risk Management Process and validate its importance through a couple of interesting case studies. In yet another paper, we discuss the larger issue of aligning IT investments with business strategies through the adoption of a portfolio prioritization approach.
What better way to explain the need for proactive strategies than introducing a novel paradigm of experience co-creation? This issue's spotlight is on Prof. Venkat Ramaswamy's contribution, which convincingly posits the strategic need for using IT as an enabling platform to partner with customers in extracting business value across a portfolio of experience environments.

I hope that you enjoy the papers selected for this issue. Please do write back to us with your views on how useful you found this collection.
Praveen B. Malla PhD
praveen_malla@infosys.com
Editor

Research: Legacy Integration: Which Approach should your
Enterprise Adopt?
By Manish Srivastava
Integration of legacy systems has emerged as one of the top CIO pain points in recent years. The author examines the need for a decision framework that incorporates and probes two successful and mature legacy integration techniques: Enterprise Application Integration (EAI) and Operation Data Store (ODS).
Case Study: SOA Saving Grace for Legacy Applications
By Anubhuti Bharill, Biji Nair and Binooj Purayath
In this paper, the authors espouse the case for modernization through a case study. Service-oriented integration, they assert, provides a low-risk option in modernizing enterprise systems.
Viewpoint: SOA and BPM: Complementary Concepts for Agility
By Parameswaran Seshan
Effective utilization of the complementarities existing in BPM and SOA can help derive
substantial business value. The author digs through these complementarities to build a
case for an agile enterprise.
Practitioner's Solution: Towards Enterprise Agility Through Effective
Decision Making
By Sriram Anand and Jai Ganesh PhD
Facilitating the executive decision-making process through an innovative use of emerging technologies pays rich dividends. The authors propose an Enterprise Digital Dashboard architecture framework, which builds on the concepts of web services and shared data services.
Framework: Managing Risks in IT Initiatives: A CXO Guide
By Dayasindhu Nagarajan PhD, Sriram Padmanabhan and Jamuna Ravi
Ineffective risk management can scuttle the realization of the business value of IT. As external program risks have a large influence on the successful outcome of IT projects, the authors propose an integrated approach to risk management that can alleviate risk pressures on project deadlines and budgets.
Methodology: Measuring the Value of IT in an Organizational Value Chain
By Jai Ganesh PhD and Akash Saurav Das
The true value of an IT investment must reflect the value accrued to all stakeholders in today's business environment, typified by an inter-organizational setup of multiple firms, which in turn are parts of the value chains of different organizations.
Practitioner's Perspective: Can IT Investments be Optimized in the
Insurance Industry through Portfolio Prioritization Approach?
By Sanjay Mohan and Siva Nandiwada
Drawing from their experience, the authors suggest a portfolio approach to IT governance. This, they assert, would help organizations achieve reduced variability in costs and improved predictability in benefits.
Spotlight: Co-creating Experiences of Value with Customers
By Venkat Ramaswamy
The interactive space between a firm and its customers has the potential to create business value. The basis of value for the customer shifts from a physical product to the total co-creation experience. Prof. Venkat Ramaswamy of the University of Michigan builds a compelling case for building experience co-creation platforms.
Companies must learn to build platforms that enable
experience co-creation processes across the portfolio of
experience environments. In this view, IT platforms become
a strategic enabler of experience co-creation processes.
CXOs should empower project managers to operate like
entrepreneurs. Success in IT initiatives can be achieved
through them as they take a holistic perspective of risk and
its management at each phase of the program right from
conceptualization to execution.
Jamuna Ravi
Vice-President
Banking and Capital Markets
Infosys Technologies Limited
Venkat Ramaswamy PhD
Co-founder
Experience Co-Creation Partnership
Legacy Integration:
Which Approach should your
Enterprise Adopt?
By Manish Srivastava
A toss-up between enterprise application
integration and operation data store
Large organizations, despite generous doses
of adoption of newer technologies, still rely
on legacy systems to run their critical business
processes. These systems encapsulate complex
business processing logic that has been built over
many decades. However, in the last few years the time-to-market pressures for new products/packages and the need to offer these through multiple channels have drawn focus to solutions enabling legacy data access. IT managers have basically two choices: re-engineer existing systems to align with new business needs, or integrate to reuse the complex business logic and data stored in the legacy systems. The costs and risks involved in re-engineering the systems onto newer platforms often far outweigh the business benefits. Hence, IT managers are continuously seeking legacy integration solutions to cater to rising business demand for agile and integrated systems, within the constraints of shrinking IT budgets. Here, we analyze at length the need for legacy integration and the approaches to achieve it.
NEED FOR LEGACY INTEGRATION
The need for legacy integration is driven by the
changed business scenario. Enterprises are
continuously seeking differentiators in the
ways (or channels) they service their customers
and the products/packages they have to offer.
There is an interesting battle, with distinct attack and defend cycles, being played out in the market [Fig 1]. A company analyzes its internal
data and customer feedback to develop a new
product/package. It attacks its competitors
by putting out the product/package to attract
new customers and increase its market share.
In retaliation, its competitors race to catch up
with it by launching similar products to negate
the advantage and retain their customer base.
There is a significant cost associated with this battle for all the players in the market. Success depends primarily on two factors:

The value of the product/package, which in turn depends on the correctness of the trend data

The speed and cost efficiency of ideation, development, marketing and servicing of the product/package.
The above factors impose the following requirements on new-age systems:

The ability to collate customer needs and feedback (Data Collection)

The ability to provide integrated and reliable data for trend analysis, based on which new and innovative products/packages can be designed (Product Design)

Reduced time to market for new products and services (Product Launch)

The ease of integrating with new channels to enable cost-effective delivery of products and services to customers (Product Delivery and Access)

The ability to provide a single integrated view of customer data from diverse line-of-business systems to enable coherent service delivery and provide cross-sell opportunities (Product Support and Marketing).
While new players in the market build their applications to meet these new challenges, large organizations that run most of their businesses via batch-oriented or green-screen legacy systems have to address the additional challenge of integrating with their legacy systems. Given the plethora of integration solutions available in the market, short-term technical issues in integration may not be an overwhelming challenge. However, ensuring long-term cost efficiency while catering to ever-changing business needs and growing volumes requires a well-thought-out integration strategy.
LEGACY INTEGRATION CHALLENGES
As most organizations have several lines of business, which evolved at different points in time, their systems are aligned to LOB needs and are on different technologies and platforms. Further, these systems are not well integrated, leading to duplicated and distributed customer data. This can lead to inefficient service delivery, such as a customer not being recognized by a call center. In many real-world applications the problems are further exacerbated by incoherent, duplicated data that makes trend analysis very difficult, impacting the quality of new products and packages.

As the data is aligned to LOB needs, significant effort is required to extract the data and transform it into an integrated, customer-centric view for use in customer-focused applications.
Figure 1: Attack-Defend Cycle    Source: .NET Center of Excellence, SETLabs
(The figure shows the cycle: customer needs/trend analysis yields a new product idea, followed by quick system implementation and a product launch to attract customers - the attack; competitors respond with a counter product idea, system implementation and launch to retain customers - the defend. Agile systems sit at the center of the cycle.)
The batch nature of many legacy applications is another roadblock that an integration architect must address. While the online screens are used for data input and queries, most of the validation rules are embedded in the batch applications. Integrating with such systems is not an issue if real-time responses are not mandatory. However, this often limits reuse of the validation rules in a real-time integration mode, and using the same batch processes may impact service delivery quality and time.
Most line-of-business applications are typically developed within a departmental boundary. These systems are built on dissimilar products and platforms due to a lack of enterprise-level standardization and various other factors. Each application exposes integration interfaces via its choice of integration middleware. New-age applications such as CRM and data warehouses are often forced to work with these legacy systems via a multitude of integration technologies. To make matters worse, most of these interfaces were designed for different purposes and are not aligned to the immediate requirements.
SOLUTION OPTIONS
The majority of successful legacy integration techniques used by organizations can be classified into two categories:

1. Enterprise Application Integration
2. Operation Data Store

A third type, which is also quite prevalent, is a combination of the above two.
ENTERPRISE APPLICATION
INTEGRATION
Enterprise Application Integration (EAI) refers to technology that enables data transformation and routing, data and event integration, and provides customizable adapters for cross-application and cross-platform communication. The technology has been in existence for the last 12-13 years and has been continuously evolving. In the last 5-6 years, most EAI vendors have introduced elements of process modeling, process automation, legacy integration and HTTP/XML-based B2B connectivity to evolve into BPI (Business Process Integration) vendors. The third wave in the EAI space is called BPM (Business Process Management). BPM tools provide features like workflow, advanced process modeling, business analytics, pre-defined industry-specific templates, and support for Web Services-based integration. Major players in the EAI space are IBM, TIBCO, Vitria, SeeBeyond and Microsoft.

Figure 2: Line of Business Legacy Applications    Source: .NET Center of Excellence, SETLabs
(The figure depicts multiple LOB legacy applications, each with complex business logic and its own customer information store. Interfaces are exposed via a variety of mechanisms and are not aligned to front-tier application needs; some applications are batch oriented; applications sit on multiple platforms; customer information is distributed across various systems; duplicate data adds complexity; and data is aligned to line-of-business needs, which may or may not be in sync with the customer view.)
A pure EAI solution enables real-time or near-real-time access to legacy systems. As illustrated in Figure 3, the solution revolves around a message hub, or an information bus, that is connected to the various integrating applications by application adapters.

Applications communicate with each other by sending messages on the bus. Many commercial information buses provide additional services such as security, transactional support and message storage.
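The hub-and-adapter interaction just described can be sketched in miniature. The following is an illustrative Python sketch under our own naming; `InformationBus`, `LegacyAdapter` and the topic names are hypothetical and not drawn from any specific EAI product:

```python
# Minimal sketch of a message hub ("information bus") with adapters.
# Commercial buses add security, transactional support and persistent
# message storage on top of this basic subscribe/publish idea.

class InformationBus:
    def __init__(self):
        self.subscribers = {}  # topic -> list of adapter callbacks

    def subscribe(self, topic, adapter):
        self.subscribers.setdefault(topic, []).append(adapter)

    def publish(self, topic, message):
        # Route the message to every adapter registered for the topic
        for adapter in self.subscribers.get(topic, []):
            adapter(message)

class LegacyAdapter:
    """Wraps a legacy LOB system; transforms bus messages into its native format."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def __call__(self, message):
        # Transformation step: canonical bus format -> legacy record layout
        self.received.append({"system": self.name, "payload": message})

bus = InformationBus()
crm = LegacyAdapter("CRM")
lob = LegacyAdapter("LOB-A")
bus.subscribe("customer.update", crm)
bus.subscribe("customer.update", lob)
bus.publish("customer.update", {"cust_id": 42, "address": "new address"})
```

A single published update reaches every subscribed system, which is what keeps the interacting applications coherent in the EAI model.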
The benefits of this integration model include up-to-date and coherent information in the interacting applications. The drawbacks include the high cost of implementation and maintenance of the middleware infrastructure. To justify the high spend, the integration solution needs to encompass a significant number of business processes and applications. To reduce the complexity and cost of ownership of the integration layer, it is imperative to keep the design of the adapters simple and the messages reusable. Often, due to the inflexibility and criticality of the legacy systems, architects are forced to take a blanket non-invasive approach towards them. This leads to excessive transformation and complex code in the integration layer, which in due course surfaces as high maintenance costs, increased time to market for change requests and performance issues.
OPERATIONAL DATA STORE
Many organizations have chosen to deploy an Operational Data Store (ODS) to meet their integration needs. The solution involves using ETL (Extract, Transform and Load) programs to extract the relevant information from the legacy systems, transform it using custom scripts or transformation tools, and load it into the operational data store. The ODS is aligned to the needs of the end applications and provides an integrated view of the enterprise data.

Figure 3: EAI Solution to Legacy Integration    Source: .NET Center of Excellence, SETLabs
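The extract-transform-load flow just described can be illustrated with a toy sketch. All field names (`cust_no`, `prod_code`, `bal`) and the dict-based ODS are hypothetical stand-ins for real extract files and database tables:

```python
# Toy ETL sketch: LOB-aligned records are transformed into a
# customer-centric schema and loaded into an ODS keyed by customer.

def extract(lob_records):
    # In practice: read files/DB extracts produced by the legacy batch jobs
    return list(lob_records)

def transform(record):
    # Map each LOB's own field names onto one integrated schema
    return {"customer_id": record["cust_no"],
            "product": record["prod_code"],
            "balance": record["bal"]}

def load(ods, rows):
    # Merge rows into the store, grouping by customer
    for row in rows:
        ods.setdefault(row["customer_id"], []).append(row)

ods = {}
lob_a = [{"cust_no": "C1", "prod_code": "SAV", "bal": 100.0}]
lob_b = [{"cust_no": "C1", "prod_code": "LOAN", "bal": -250.0}]
load(ods, [transform(r) for r in extract(lob_a + lob_b)])
# ods["C1"] now holds the integrated, customer-centric view across both LOBs
```

The same customer's records from different LOB systems end up under one key, which is the integrated view the ODS is meant to provide.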
The benefits of the ODS type of integration include minimal changes to source applications. The model also provides an excellent platform on which to develop other applications like decision support systems and a data warehouse. However, the success of this model depends on various factors, including a good enterprise data model. As the replication of data between the operational data store and the legacy applications is batch based, the model cannot be used for business processes that need real-time updates. Increasing data volumes and shrinking batch windows, driven by 24x7 uptime requirements, are making batch-based updates more and more unviable. Duplicate data in the legacy systems adds to the complexity, as it requires updates made in the ODS to be replicated to multiple legacy systems.
Figure 4: An Illustrative ODS Solution Using ETL    Source: .NET Center of Excellence, SETLabs
Figure 5: A Mixed Approach Using EAI and ODS    Source: .NET Center of Excellence, SETLabs
MIXED APPROACH
To meet the need for real-time or near-real-time updates and address the issue of multiple data updates, some organizations have employed a mixed EAI/ODS solution. While there could be many variations of this solution, one example is illustrated in Figure 5.

Any update in a LOB application or its database immediately fires a message to the EAI bus. The message is picked up by the ODS adapter and the ODS is updated. Similarly, any change in the ODS sends a message to the bus that is picked up by the LOB application adapters, which make the required changes in the respective systems. The mixed EAI/ODS model is a best-of-both-worlds solution, enabling real-time updates plus a centralized, integrated data view of the enterprise.
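The two-way flow above can be sketched as follows. This is a minimal illustration with hypothetical topic and system names, not a representation of any particular product's API:

```python
# Sketch of the mixed EAI/ODS flow: an update in a LOB application fires
# a message onto the bus and the ODS adapter applies it to the ODS;
# a change made in the ODS is propagated back into every LOB system.

handlers = {"to_ods": [], "to_lob": []}
ods = {}
lob_systems = {"LOB-A": {}, "LOB-B": {}}

def publish(topic, msg):
    # The bus routes each message to the adapters registered for its topic
    for h in handlers[topic]:
        h(msg)

# ODS adapter: keeps the central, integrated store current
handlers["to_ods"].append(lambda m: ods.update({m["cust_id"]: m["data"]}))

# LOB adapters: replicate ODS-side changes into every legacy system,
# including any duplicated copies of the data
handlers["to_lob"].append(
    lambda m: [s.update({m["cust_id"]: m["data"]}) for s in lob_systems.values()])

publish("to_ods", {"cust_id": "C1", "data": {"address": "12 High St"}})
publish("to_lob", {"cust_id": "C2", "data": {"address": "9 Low Rd"}})
```

Note that an ODS-side change fans out to every LOB system, which is exactly the extra synchronization burden the mixed approach takes on in exchange for real-time coherence.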
The choice of integration architecture
depends on various factors including business
need, costs, type of legacy systems and scope.
In the long term, cost is a key success indicator for an integration solution. The recurring cost of ownership is directly linked to the reusability and flexibility of the services exposed by the integration layer. Some of the best practices that can help address these requirements are:

Clear definition and evangelization of the services exposed

Exposing services through platform-independent interfaces like Web Services and messaging

Use of workflow tools for better separation of frequently changing flow logic from more static application components

Avoiding the use of EAI infrastructure for point-to-point integration, and moving towards service-oriented integration

Re-aligning legacy interfaces to reduce the complexity of transformation and aggregation in the integration layer.
Table 1: Comparison of Different Approaches to Legacy Integration
Source: .NET Center of Excellence, SETLabs

1. EAI
   Benefits: Up-to-date and coherent information; real-time integration.
   Drawbacks: High cost of deployment and maintenance for the integration middleware; may require changes to legacy interfaces and systems to meet Quality of Service goals.

2. ODS
   Benefits: Non-invasive; platform for other applications like DSS and data warehouse.
   Drawbacks: No real-time updates; increasing pressure on the update window due to 24x7 needs and increasing data volumes.

3. Mixed
   Benefits: Up-to-date and coherent information; real-time integration; platform for other applications like DSS and data warehouse.
   Drawbacks: May require changes or additional programs to ensure data synchronization in legacy systems; high cost of deployment and maintenance.
SOA: Saving Grace for Legacy Applications
By Anubhuti Bharill, Biji Nair and Binooj Purayath
Piloting a service oriented legacy modernization effort first can shorten the path to success
As one of its core benefits, Service Orientation envisions enhancing the integration of existing, heterogeneous systems in a flexible, collaborative manner. A significant portion of the systems that should participate in Service Oriented (SO) Integration were developed in a computing era dominated by mainframes. These systems, termed legacy applications in today's parlance, encode critical business rules and, according to current estimates, comprise more than 250 billion lines of code, and growing [1]. Enterprises continue to maintain their significant investment in legacy systems and strive to reuse their critical functionalities. Hence, any credible service orientation strategy must address the essential reality of an enterprise IT landscape dotted with numerous legacy systems.

It is worthwhile to consider a detailed view of an enterprise initiative aimed at demonstrating the business value of service enablement. Even before embarking on the big program, the organization we look at makes a small investment to assess the risks and acquire sufficient learning that can be applied when the program is finally underway. Concurrently, the initiative targets bootstrapping a technical environment, say a sandbox, and experimenting with the risky aspects of modernization.
CHALLENGES OF LEGACY INTEGRATION
Integrating the functionalities of legacy systems often presents many cross-cutting concerns and challenges. From the task of defining an explicit business case for service orientation to the finer details of implementing system-level transaction demarcations for service calls, many complex situations present themselves that need careful planning and execution.

One of the challenges of legacy integration arises from a mismatch between the essential characteristics of legacy systems and the expected behavior of participating systems in a collaborative framework [2]. For example, a monolithic legacy implementation with limited modularization may not meet the more fine-grained, service-oriented behavior typical of participating systems in a service-oriented environment.
As another example, the flexibility of a Service Oriented Architecture comes from related but slightly varying functionalities offered by the service providers. While it is common to find such polymorphic behavior in modern object-oriented systems, legacy systems are often not designed for such features, so realizing similar functionalities can be costly in terms of both time and effort.

THE ORGANIZATION UNDER STUDY
The organization under study is a leading global bank that had envisioned embarking on a legacy modernization initiative to align with expected rapid business growth and strategic needs such as business agility and reduced time to market. Some of the key business drivers of the modernization were:

Agility and speed to market: introduce new products in a reduced timeframe

Higher customer satisfaction through improved operational efficiency: a result of straight-through processing and process automation through SO Integration

Availability of accurate and real-time data: eliminate the inconsistencies and redundancies between back-office systems and customer channels.
One of the modernization options was to harvest business services from the core enterprise applications and adopt SO Integration while continuously aligning with the Future State Architecture (FSA). The objective of the initial effort, discussed in detail here, was to validate this modernization option, more precisely to:

Evaluate the feasibility of SO Integration with a small investment, before deciding on making a bigger investment

Establish the business value of service oriented legacy modernization

Establish a scaled-down version of the technical environment to bootstrap the major activities

Establish the technical feasibility of the modernization.

SPECIFIC CHALLENGES IN CONTEXT
There were three specific challenges in this organization. One had to do with the mindset of the people involved, another with knowledge about the applications under consideration, and the third with managing the transition to the new environment.

Typically organizations, as in this case, embark on legacy modernization initiatives when faced with rapid business growth and to balance strategic needs such as business agility and reduced time to market
The legacy of the legacy: The majority of the IT staff were of the opinion that a replacement, which was costly, was the only option available for legacy modernization. The business stakeholders shied away from the costs and preferred to continue with the existing platform while acknowledging the system's limitations and their impact on day-to-day business. These ideas of system replacement and prohibitive cost had been urban legends for more than seven years. Due to this, the stakeholders had mixed opinions about the prospects of modernization.
Knowledge about the legacy: The core legacy applications were more than a decade old, adopted from a product vendor, and had grown organically. The firm wanted to harvest the business knowledge from the existing code rather than depend on the product vendor (with whom the firm wanted to sever its relationship). Additionally, the documentation available was not up-to-date.
Introducing change into a well-established ecosystem: Introducing the unfamiliar concepts of modernization and SOA to an enterprise posed considerable challenges. The task of establishing the integration environment (sandbox) exposed many of these issues, many driven by the size of the organization and the existing departmental structures. For example, collaborating with a wide array of internal technology organizations, external vendors and developers with differing philosophies presented unforeseen challenges to effective communication, planning and execution of the development activities.
ASCERTAINING THE COMPLEXITY AND
SUCCESS OF MODERNIZATION
This initial effort was envisioned to provide an understanding of a set of influential factors on which the success of the modernization depended. Some of these factors were:

Integration with a state-of-the-art Web Interface/Dashboard: Is it viable to integrate with an intuitive user interface that can enhance user productivity and reduce the time for transaction processing?
Code/Asset reusability and introduction: How much of the existing software assets and code elements can be reused without modification, and how much needs to be introduced to achieve the service oriented integration?

Collaboration in legacy modernization initiatives remains a major issue: development activities run into unforeseen hurdles originating from the large number of internal technology organizations and external vendors
Stateless nature of services: Considering the spaghetti implementation of user interfaces, workflow and business processing logic, how far can stateless principles be adhered to?

Separation of human workflow and processing flow from the core processing steps: How to achieve the intended separation of the above concerns so that they can be re-implemented on a flexible, standards-based Business Process Management platform?

Usage of standardized Web service technologies: Validate the existing support for Web services standards such as WSDL, SOAP, XML and so on in the legacy environment.
Loose coupling: The ability to layer the existing code into different layers of concern so that they are independent but interoperable. Such concerns are presentation/user interfaces, navigation of user interfaces, human workflow, business processing flow, core business processing steps and data store access.

Figure 1: The Component Layering in the To-Be Architecture
Source: Infosys Research

Layer: Presentation
  Components: Financial transaction screens for user 1 and user 2
  Realization: HTML/JSP modules in a J2EE Web application
Layer: Business Process
  Components: Financial transaction process steps 1 and 2, carried out by users 1 and 2 respectively
  Realization: BPEL scripts defining the business process application
Layer: Components
  Components: Modified COBOL programs for the selected financial transaction; summation or aggregation logic
  Realization: CICS/COBOL programs with SOAP interfaces
Layer: Operational Systems
  Components: Financial system implemented using CICS/COBOL by a product vendor
  Realization: CICS/COBOL programs interacting with the information store
Service granularity and aggregation:
How to decide the right level of
service granularity to be exposed in
the legacy system; how and where to
deploy the aggregation (ner to coarse
granularly) logic? Options were i) use
an orchestration/choreography layer
outside legacy system, and ii) use legacy
constructs to create the coarse grained
wiring.
Impact on the presentation layer or user experience due to separation of concerns: The intended separation of user interfaces and other concerns was expected to introduce some limitations or differences [over the existing system] in the user interaction, such as deciding on persistence points in the human workflow. What are some of the limitations or differences that a separation of concerns may bring along?
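As an illustration of the kind of standards validation mentioned under "Usage of standardized Web service technologies", the sketch below builds a minimal SOAP 1.1 envelope with Python's standard library. The `getAccountBalance` operation and its namespace are hypothetical, not taken from the bank's actual service contracts:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/legacy/financial"  # hypothetical service namespace

def build_request(account_id):
    """Build a minimal SOAP 1.1 request envelope for a hypothetical
    getAccountBalance operation exposed by a legacy program."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}getAccountBalance")
    ET.SubElement(op, f"{{{SVC_NS}}}accountId").text = account_id
    return ET.tostring(envelope, encoding="unicode")

xml = build_request("ACC-001")
# A conforming endpoint can parse the envelope back and locate the
# operation by its namespace-qualified name:
root = ET.fromstring(xml)
op = root.find(f"{{{SOAP_NS}}}Body/{{{SVC_NS}}}getAccountBalance")
```

Because the envelope structure is standardized, the same message shape works whether the endpoint is a modern application server or, as in this case study, a CICS/COBOL program exposed through a SOAP gateway.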
A TWO-PHASED PILOT APPROACH
This exercise was conducted in two phases.

Phase 1 (Planning) included the following steps:

Defined a set of architecture aspects that needed to be proved, based on the factors deciding the value of modernization

Defined and designed i) a set of Web-based UIs as a prototype of a Web Dashboard, ii) a set of service interfaces/contracts, and iii) the logical and physical architecture [Figures 1 and 2]

Identified a candidate set of code assets and performed an analysis on the above to understand the impact of future adherence to SOA standards [3].
Phase 2 Execution included the following
steps:
Established a sandbox - a scaled down
version of physical architectures,
established the system interfaces, and
deployed the service oriented components
in the runtime environment
Analyzed and validated the inuential
factors in assessing the complexity of
FSA and SO Integration
[Figure 2: The Physical Infrastructure in the To-Be Architecture, showing the user workstation (Web browser client), the Web server (HTTP server with WebSphere plugin and Web container), WBI Server Foundation (BPE container, embedded HTTP server, JMS provider and WebSphere MQ server), the SunOne Directory Server, the Oracle database server, and the z/OS host running modernized CICS/COBOL programs exposed through SOAP for CICS. Source: Infosys Research]
Documented the learning, both technical and operational, so that it can help in planning future initiatives
Defined a legacy modernization costing framework based on the effort, numbers, influential factors and benchmarks, so that future financial decisions around modernization can be driven in a systematic way.
TECHNOLOGY SOLUTION
The solution essentially included three different elements: modernized CICS/COBOL programs as the service provider, a business process application as the service broker and composer, and a Web-based UI application as the service consumer. The technologies used for each of these elements are listed below.
The Service Provider: IBM mainframe, z/OS, CICS, DB2, VSAM, COBOL, SOAP for CICS and the Web Services API from Google.
The Service Broker and Composer: WebSphere Business Integration Server Foundation and MQ, J2EE, BPEL4WS and Web Services.
The Service Consumer: The next generation web dashboard for the back office operator, using J2EE, Struts and Web Services.
The Development Tools: Components were developed using NetManage RUMBA, WebSphere Studio (Integration and Enterprise Editions) and WebSphere BI Modeler.
THE OUTCOME
Large modernization initiatives often aspire to, but struggle to, define an effective cost-benefit analysis framework that provides visibility into the payoff on the intended IT investment. Historically, such upfront analysis activities have often helped avoid IT disappointments [4]. Such payoff analysis can help in making prioritized and value-driven decisions, and it need not be a pure spreadsheet-based exercise. Additionally, by demonstrating prototypes and explaining the concepts and benefits, the business community gets a better touch and feel, which helps them become more comfortable with, and convinced about, the potential IT investment.
In this situation, the newly constructed, intuitive and powerful user interfaces acted as the primary vehicle that could provide a real feel of the benefit of the modernization. When real business information appeared on the UI after a data fetch from the backend legacy system, it provided a discussion framework for generating interesting ideas and opportunities that could make business operations more efficient and cost effective.
An elaborate cost-benefit analysis is typically a chimera for most modernization initiatives, notwithstanding the fact that such analyses help convince stakeholders.

As an example, to demonstrate the ability to seamlessly integrate with third-party systems, this implementation included integration with Google search through its Web Services APIs [5]. Through this integration, the application was able to display Google search results for a specific customer name,
next to the customer business data retrieved from the data store. In reality, the above search results might not have had any significance for the actual business operations. Still, this demonstration opened doors to contextually significant discussions on potential integration scenarios with many internal and external systems. For many stakeholders, especially non-technologists, these discussions provided a quick validation of the potential benefits of the architecture in consideration.
On the technical front, many of the influential factors identified in the planning stage were analyzed, such as code/asset reusability and introduction, the stateless nature of services, workflow separation, the level of loose coupling, service oriented integration and aggregation, and the usage of standardized web service technologies on the legacy system. Out of this analysis, learning documentation was created with the intention of fine tuning the modernization strategy.
This exercise also exposed technical limitations in many of the elements of the target technology platform. As an example, the Business Process Management capabilities of IBM WBI Server Foundation (Version 5) had many limitations, such as in creating sub-processes and in integrating with and invoking web services on the legacy platform. In that context, this exercise also acted as a technical proof of concept, identifying the platform limitations, validating the product roadmaps from the vendor and, more importantly, identifying risk elements of the future state architecture.
The experiences from the execution of this initiative provided significant learning on the program management front too. As an example, to make the overall program successful, many more internal and external organizations participated than initially anticipated. Based on this learning, the team fine tuned the program management practices to be applied to the overall modernization program.
Overall, this initiative demonstrated the technical viability and the business benefits, such as agility and the ability to integrate. After this, various stakeholders, including the business team, became confident in the modernization strategy. It gave them the confidence to move on to the next set of activities: modernizing the rest of the applications, which constitute 90% of the overall legacy system.
CONCLUSION
Service oriented integration offers a viable alternative for organizations looking for a workaround when faced with multiple uncertainties while modernizing.

Historically, many organizations avoided or delayed modernization decisions because of uncertainties around these initiatives. SO Integration provides a low risk option for modernizing systems relative to other options such as rewriting to a newer platform. It helps leverage existing system investments while concurrently deploying newer technologies. SO Integration thus helps organizations reconsider their abandoned or delayed modernization plans.
REFERENCES
1. COBOL for the 21st Century by Nancy Stern, Robert A. Stern and James P. Ley, John Wiley & Sons, August 2005
2. Legacy Systems: Transformation Strategies by William M. Ulrich, Prentice Hall PTR, June 2002
3. Options Analysis for Reengineering (OAR): A Method for Mining Legacy Assets by John Bergey, Liam O'Brien and Dennis Smith, Software Engineering Institute, Carnegie Mellon University, http://www.sei.cmu.edu/publications/documents/01.reports/01tn013.html
4. Making Technology Investments Profitable: ROI Roadmap to Better Business Cases by Jack M. Keen and Bonnie Digrius, John Wiley & Sons Inc, November 2002
5. Google SOAP Search API (beta), http://www.google.com/apis/
SOA and BPM:
Complementary Concepts for Agility
By Parameswaran Seshan
Enterprises can realize better success in aligning
IT systems with business by combining the power
of SOA and BPM
SOA (Service Oriented Architecture) and BPM (Business Process Management) are two technologies that have been popular in recent times. Enterprises are looking to make their business systems more flexible by making use of these technologies. This article looks at what SOA and BPM bring to the enterprise architecture and gives a practical perspective on the synergy possible between them. It puts forward how these two are really complementary technologies and not conflicting ones. Enterprises can derive maximum benefit in the alignment of business and IT by leveraging their complementary aspects and by coordinating the efforts in both areas.
WHAT THEY ARE
There is some confusion in the business and technology world about the relationship between SOA and BPM. The respective groups backing these technologies appear to be putting forward arguments and counter-arguments that do not seem to be helping the cause of companies that want to make use of these technologies. Both BPM Systems (BPMS) and SOA have been rising in popularity in enterprise architectures, with SOA being pushed more by IT and BPM being pushed more by business. Though both promise flexible and agile IT systems, the haziness over their relationship is impacting their adoption in enterprises.
SOA is an architectural pattern that promotes the design of software elements as services that can be re-used and then combined to create applications. Each service has a well-published interface and provides a well-defined functionality. The service interface is abstracted out and separated from the design and implementation of its functionality, and the invokers of the service are not exposed to the service implementation.
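A minimal sketch of this interface/implementation separation, with all names invented for illustration: the consumer depends only on the published interface, so the provider can be swapped without touching consumer code.

```java
// Consumers see only this well-published interface.
interface FundsTransferService {
    boolean transfer(String from, String to, long amountCents);
}

// Implementation detail hidden from invokers; it could be replaced by a
// different provider without changing any consumer code.
class MainframeFundsTransfer implements FundsTransferService {
    public boolean transfer(String from, String to, long amountCents) {
        return amountCents > 0 && !from.equals(to);
    }
}

public class InterfaceSeparationSketch {
    // The consumer programs against the interface alone.
    static boolean payInvoice(FundsTransferService svc) {
        return svc.transfer("ACC-1", "ACC-2", 5_000);
    }

    public static void main(String[] args) {
        System.out.println(payInvoice(new MainframeFundsTransfer())); // prints true
    }
}
```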
An SOA-based architecture makes IT systems agile by allowing applications to couple loosely to each other through service interfaces.
Composite applications can be built quickly, to serve business needs, by chaining together such services and orchestrating their invocation at run time. Such assembly of an application from available services, rather than writing code, is a value proposition that SOA offers. These services become available to the entire enterprise, and some of them even to the external world: to customers, partners or suppliers. Services need to be managed and supported to ensure their adherence to SLAs (Service Level Agreements) and their evolution, including version management, without impacting clients.
BPM is the set of strategies, tools and techniques to design, deploy and simulate processes to enable rapid process-based change in the organization, aligned to meeting organizational objectives. The key aspect of BPM is to see business processes as core assets of companies and to manage those processes over their life cycle. BPMS is the technology part of BPM: an architecture concept that takes a process-centric view of applications to provide flexibility and business alignment. BPMS is a convergence of workflow, EAI, process modeling and business rules.
Each process is an ordered sequence of activities performed by systems or humans to achieve a business objective. Processes are defined by a business analyst in a graphical modeler provided by the BPMS. The BPMS automates the process by executing the activities in the order specified, provides process performance reporting, and allows the business analyst to monitor the process in real time, analyze it and improve it. Since the process definition drives the execution of the business logic in the BPMS, processes become flexible: the formal process definition can be changed to effect a change in system behavior with no or minimal changes in the underlying applications.
WHY THE DEBATE
Historically, business groups in organizations put in significant effort to re-engineer business processes in order to improve and respond to market demands. However, their efforts did not succeed, since the IT systems of the time were not flexible enough to support those changes. Business then took to the promise of the then-new ERP (Enterprise Resource Planning) packages that offered standardized business processes and canned business functions. But they learnt while implementing them that tailoring the packages to suit specific business needs involved huge effort, as the systems were still rigid. The perception that IT systems are inflexible and that business cannot control them kept growing, and business became more skeptical about IT.

BPM views business processes as core assets of organizations - this enables their effective design, deployment and simulation.

Recently, BPM technologies
have emerged to provide the opportunity for business groups to take more control of their processes, and business groups have started believing that they finally have a technology addressed to them, using which they themselves can effect changes in the behavior of IT systems by changing process models. At the same time, IT has been trying to make the IT architecture more flexible. IT believes that SOA is a good opportunity to deliver the promise of business-adaptable IT systems.
The business community tends to back BPM and the IT community tends to push SOA. There is a viewpoint difference, with the BPM community viewing enterprise architecture top-down (from the business process perspective) versus the SOA community viewing it bottom-up (from services). So a gap appears to have formed, with their concerns assuming different meanings for terms like services and processes. This is apparent from the way companies like IBM have chosen to address both views in the same WebSphere product by providing Business Process Modeler and Integration Developer components.

[Figure 1: Platform View, showing business processes layered over business services, integration services and operational resources (mainframes, servers and data), with task definitions mapped to task implementations, and with interfaces defined by enterprise context, semantics and requirements. 2006 AZORA Technologies, Inc. Source: Adapted from BPM and SOA - Where Does One End and the Other Begin?, Mike Rosen, http://bptrends.com]
SERVICES TO PROCESSES
BPM provides a high-level abstraction for building IT systems, namely the process layer. This is the layer at which the business can effectively take part in the development and evolution of IT systems. Each activity in a business process is expected to be a componentized, reasonably high-granularity
service which performs a logically complete business operation. If the activity is a system activity, then the BPMS expects SOA to provide this service. Let us take, for example, a sample process in a bank, namely account opening. Here, deposit money into account is a system activity, and this activity is realized by a service which updates the amount in the system. Thus the activity needs to be linked to this service in the process model, and the service is expected to perform the complete business function expected of the activity: in this case, performing the required validations (such as account status and the applicability of differential interest rates) and making the account reflect the money deposited. Such business services need to be exposed to the process so that they can be invoked, with the process in the BPMS becoming a client that consumes the services and orchestrates their invocation faithfully in the order defined by the business analyst [Fig 1]. SOA is the underlying model that provides services to the BPMS. On the other hand, for a manual activity in the process, the BPMS expects the service to be performed by a user belonging to the appropriate role, and it routes the work to that user.
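A minimal sketch of how such a logically complete deposit money into account service might look, with the validation and the update behind one coarse-grained operation that a BPMS activity can bind to directly. All names and the validation rule are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: one coarse-grained business service that performs
// the complete deposit operation, validations included.
class DepositService {
    private final Map<String, Long> balances = new HashMap<>();
    private final Map<String, Boolean> active = new HashMap<>();

    void openAccount(String id) {
        balances.put(id, 0L);
        active.put(id, true);
    }

    // Complete business operation: validate account status and amount,
    // then make the account reflect the deposited money.
    boolean deposit(String id, long amountCents) {
        if (!Boolean.TRUE.equals(active.get(id)) || amountCents <= 0) {
            return false;   // validation failed; nothing is updated
        }
        balances.merge(id, amountCents, Long::sum);
        return true;
    }

    long balanceOf(String id) {
        return balances.getOrDefault(id, 0L);
    }
}

public class DepositSketch {
    public static void main(String[] args) {
        DepositService svc = new DepositService();
        svc.openAccount("A-100");
        svc.deposit("A-100", 2_500);
        System.out.println(svc.balanceOf("A-100")); // prints 2500
    }
}
```

Because the whole business function lives behind one operation, the process model needs only one activity-to-service binding for this step.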
At the process level, re-use of high-granularity services is possible. For example, the deposit money into account service can be reused in another process, say the manage customer account or funds transfer process. To realize such services that are truly re-usable, BPM requires the underpinnings of SOA. SOA can provide BPM this re-usability framework so that activities in a process can get reused easily in different processes, instead of being implemented as functions bound to the specific context and requirements of one process alone. SOA gives the power to expose the process itself as a service so that it becomes usable by other processes. In a higher-level process, it becomes an activity that denotes a sub-process. In a B2B or B2C integration scenario, where this process needs to integrate with the process of a partner, supplier or customer, the process can be provided as an external service which the external process will invoke.
BPM revisits the Enterprise Application Integration (EAI) problem with a process-based integration approach and makes the process layer drive the integration, rather than the traditional point-to-point or hub-and-spoke hard-coupled integration approaches. SOA enables this loose coupling, with each application exposing its core functions as services and the business process then determining the highly flexible order in which the services need to be chained. And this is without the application being impacted by any change in the invocation sequence.
Reusability in BPM needs to be built on a foundation of
SOA - as SOA is best positioned to provide a reusability
framework
[Figure 2: Services Hierarchy, showing a business process composed of business services, which are in turn composed of domain services, external services, integration services and utility services. 2006 AZORA Technologies, Inc. Source: Adapted from BPM and SOA - What Kind of Service Does a Business Process Need?, Mike Rosen, http://bptrends.com]
WHAT SOA ALSO NEEDS
In an enterprise, it is neither feasible nor cost-effective to visualize and build all the services that will ever be required by applications. One of the reasons is that business requirements evolve, and the functionality that services need to support is usually determined by what the business thinks the business process should do. It is more feasible to let the business first design or re-design its business processes according to business need, and then determine the services required for each activity in the process. The services so identified will be re-usable across various processes, as business can better visualize the various process contexts where the same high-granularity business functionality is required.
With SOA, we can have a service hierarchy with various layers of services, each layer re-using lower-layer services (Figure 2). A lower-level service will typically be a wrapper service that service-enables a legacy function, a re-usable infrastructure service (i.e., one that provides an architecture-level function such as logging), an external service, or a data access service which gets specific information from the database. One example is Get Customer Details, given a customer id as input. Business services need to be composed from low-granularity services, and they are the ones mapped to activities in the process. Hence their identification and interface contract design will be influenced by business processes.
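The composition described above can be sketched as follows. The service names are hypothetical stand-ins: a business service (mapped to a process activity) built from a data-access service and a wrapper over a legacy function.

```java
// Illustrative sketch of the services hierarchy: a business service
// composed from lower-level services. All names are assumptions.
interface CustomerDataService {          // data-access service
    String getCustomerDetails(String customerId);
}

interface LegacyCreditCheck {            // wrapper over a legacy function
    boolean isCreditworthy(String customerId);
}

class AccountOpeningBusinessService {    // business service, mapped to a process activity
    private final CustomerDataService data;
    private final LegacyCreditCheck legacy;

    AccountOpeningBusinessService(CustomerDataService data, LegacyCreditCheck legacy) {
        this.data = data;
        this.legacy = legacy;
    }

    // Composes the two lower-level services into one business operation.
    String openAccount(String customerId) {
        if (!legacy.isCreditworthy(customerId)) {
            return "REJECTED:" + customerId;
        }
        return "OPENED for " + data.getCustomerDetails(customerId);
    }
}

public class HierarchySketch {
    public static void main(String[] args) {
        AccountOpeningBusinessService svc = new AccountOpeningBusinessService(
                id -> "customer " + id,      // stub data-access service
                id -> true);                 // stub legacy wrapper
        System.out.println(svc.openAccount("C-7")); // prints OPENED for customer C-7
    }
}
```

Either lower-level service can be replaced (for example, pointing the data-access service at a different data source) without the business service or the process model changing.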
Though SOA's core focus is the overall life cycle management of services, it needs a macro context that knows how these services are to be chained together to deliver value. This context, SOA assumes, will be specified by business analysts or managers, who are expected to do this with the formal approach and tools provided to them directly by BPM; at run time, processes executed in the BPMS provide this context. The process maintains the state for interactions spanning multiple services. Services primarily exist to serve a business need, and their true value is realized only when some client orchestrates their invocations to create value. This client would ideally be the process in the BPMS; without the context provided by processes, the existence of services would not have much meaning.
SOA needs the help of BPM to give business more clarity, visibility, flexibility and control of the process flow and control flow in a composite application. The choreography in a composite application may involve some human actions, where a human role performs a function (say, approval), alongside system actions. This choreography, being buried in the execution piece of the application, remains invisible to and outside the control of the business analyst, unless it is modeled explicitly at the business process level of abstraction by the business analyst in the BPMS.
THE IMPERATIVE
Both SOA and BPMS are emerging technologies, and standards in both areas are still firming up. Clearly, SOA and BPM enable each other and mandate each other's existence for value realization. This calls for enterprises to approach their implementations in a coordinated fashion rather than with independent efforts.
Firstly, the approach should follow top-down modeling of the process architecture of the enterprise (AS-IS). Process analysts can look at the value chains in the organization and model the core and support processes for them in the BPMS, starting from level 0 and moving iteratively to lower layers (level 1, level 2 etc.) of the process hierarchy. After analysis, for BPMS and SOA implementation, the analyst can pick one or two lower-layer processes that are of high impact, for example account opening. This way, with cost savings in the initial investment, immediate benefits can be gained.
This can be extended incrementally to other processes, applying the learning from the pilot experiences. Now, before identifying the services to be enabled, instead of just implementing the existing process AS-IS, the analyst needs to re-design the process for optimization (model the TO-BE); otherwise, the SOA effort will not be effective. SOA cannot make any positive difference when implemented on a flawed as-is process, and this is not a problem of SOA or BPM per se. In the TO-BE process, identify the service required for the business functionality of each activity in the process, keeping in mind the potential for re-use of the service across processes.
SOA and BPM are truly complementary - they go hand in hand to impart businesses with more clarity, visibility and flexibility coupled with greater control of the process flow.

Business services thus identified and implemented will be small in number compared with what we would end up with if we identified the services bottom-up. IT then needs to design and implement these business services on SOA. Hence there is a cost advantage here, as opposed to building a full set of services bottom-up and then ending up not using a good number of them.
IT should design each business service in such a way that it is composed from lower-level services, including wrapper services that service-enable legacy functionality, to leverage existing applications wherever possible. To enable maximum re-use across the enterprise, a single point of truth for business information and uniform semantics across the enterprise must be maintained for data-access services. For example, customer profile information must be available through a single service that accesses it from one data source, and it should mean the same customer, as viewed in the enterprise, throughout.
SLAs for the services must also be defined, and the services must be managed by a core services group to handle service evolution, version management and SLA assurance. A repertoire of re-usable business services can be built by adding such services incrementally, based on the implementation of new or re-designed business processes. So the upfront investment and risk for SOA implementation is low.
Coming to execution, the process visually modeled in the BPMS is made executable by converting it into an executable process definition based on a standard like WS-BPEL. The WS-BPEL definition, which chains together each service involved in the process flow in the order mandated by the visual model, is executed by the BPMS at run time. At runtime, the binding of concrete service end points to the abstract service interfaces specified in the process definition is taken care of by the BPMS, using its capabilities to discover, negotiate and select the right service provider, or using the infrastructure of an ESB (Enterprise Service Bus).
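As an illustration of what such a generated executable definition might look like, the fragment below sketches a WS-BPEL process that receives a request and invokes two services in the order mandated by a visual model. It is abbreviated and assumption-laden: the partner link, variable and port type declarations that a real definition requires are omitted, and all names are invented.

```xml
<!-- Abbreviated, illustrative WS-BPEL sketch; declarations omitted -->
<process name="AccountOpening"
         targetNamespace="http://example.com/bpel/accountOpening"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <sequence>
    <!-- Instantiate the process on a client request -->
    <receive partnerLink="client" operation="openAccount"
             variable="request" createInstance="yes"/>
    <!-- Invoke the services in the order defined by the visual model -->
    <invoke partnerLink="creditCheck" operation="checkCredit"
            inputVariable="request" outputVariable="creditResult"/>
    <invoke partnerLink="account" operation="depositMoney"
            inputVariable="request" outputVariable="depositResult"/>
    <!-- Reply to the client with the final result -->
    <reply partnerLink="client" operation="openAccount"
           variable="depositResult"/>
  </sequence>
</process>
```

Reordering the two invoke activities changes the run-time behavior without any change to the services themselves, which is the flexibility the text describes.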
PROCESS IMPROVEMENT
Process execution performance is monitored by the process manager through the BAM (Business Activity Monitoring) console provided by the BPMS, which supplies the metrics related to the process. The process analyst can analyze and optimize the process by changing the process model. Among other things, the analyst looks at the process cycle time, service-level SLAs etc. As part of optimization, the analyst may change the order of invocation of services, drop some services and introduce new activities in the process. The underlying executable WS-BPEL process definition automatically reflects this at run time. In the BPMS process modeler, a repository of existing business services can be made available while modeling the process, so that they can be picked and chosen. SOA thus helps in reconfiguring processes, and with BPM it can improve business-IT alignment.
CONCLUSION
SOA and BPM are two technologies showing the promise of making business systems flexible and agile. There seems to be some disconnect between the views of the groups advocating them. Holistically, to provide value, the power of process logic separation provided by BPM should be combined with the re-usable service foundation that SOA delivers (which BPM can leverage to deliver effective and valuable business processes). The key is to make SOA deliver services that are aligned to the needs of business and flexible enough to support process changes in BPM resulting from evolving business needs. Re-design of the process should precede design of the business services, to ensure the SOA effort does not get wasted in supporting wrong processes.
REFERENCES
1. BPM and SOA - What Kind of Service Does a Business Process Need?, Mike Rosen, http://bptrends.com, July 2006
2. BPMS Watch: BPM and SOA: One Technology, Two Communities, http://www.bpminstitute.org, September 2005
3. BPM and SOA - Where Does One End and the Other Begin?, Mike Rosen, http://bptrends.com, January 2006
4. The OMG and Service Oriented Architecture, http://www.omg.org/attachments/pdf/OMG-and-the-SOA.pdf, accessed August 2006
5. The Assimilation of BPM, http://www.information-age.com/home, April 2005
Towards Enterprise Agility
Through Effective Decision Making
Innovative use of Web services and shared data
services in an architecture framework makes for
effective decision making
By Sriram Anand and Jai Ganesh
Organizations operating in fast-paced business environments need to respond to fast-moving windows of opportunity, challenges and growth possibilities. There are short lead times for extracting and presenting Key Performance Indicators (KPIs) and shorter lead times for decision making. These business needs are not in sync with technological challenges such as the presence of disparate enterprise systems (for example, ERP, SCM, CRM and so on). The situation gets complicated by the increasing number of mergers and acquisitions, which result in different business units within an enterprise having their own data warehouses. Adding to this is the increasing number of users inside and outside the enterprise who need real-time access to information.
In such a scenario, an Enterprise Digital Dashboard (EDD) can improve the lead time in, and the quality of, decision making by extracting and generating KPIs from enterprise software systems. The EDD is an effective tool for executives to get a top-level view of their enterprise as well as of their closely linked partners. Decision makers require easy access to data such as total sales per month, inventory status and a number of other KPIs. The EDD offers an enterprise decision maker a single view of the metrics being monitored, in a user-friendly manner.
The EDD is, in many ways, similar to an automotive dashboard, which provides the driver with a single view of the state of the automobile. Development of an EDD involves extracting and generating metrics, indicators, sales figures, production figures for certain product lines, lead times, inventory levels and other key organizational data from the enterprise software systems. This information resides within applications, databases and processes and must be extracted and analyzed to determine its impact on the business.
There are two primary areas that represent issues in the rollout of successful EDDs: retrieval of pertinent data from a multitude of data sources, and interaction with business systems that may be developed using heterogeneous technologies. How enterprises can enhance their agility through effective decision making is a question that currently demands attention.
EDD: VALUE AND CHALLENGES
The EDD, by providing a single view of the metrics being monitored, considerably improves the efficiency and effectiveness of decision making.
An EDD-enabled enterprise is typically characterized by faster and improved decision-making capabilities, owing to real-time access to information, to the KPIs of the organization's business units and to the KPIs of its key partners. In addition, access to notifications and alerts about critical events facilitates timely and relevant action.
An EDD also helps improve agility by enabling enterprises to leverage the available data to make informed decisions and to deliver the right information to the right people at the right time.
Though benefits abound, implementing an EDD is strewn with challenges. Some of them are:
Disparate data types
Disparate technologies
Disparate locations
Disparate ownerships
Disparate data
These are put upfront to convey the key challenges to the reader, who can keep them in mind while going through the architectural challenges.
EDDs are difficult to implement owing to the several complexities involved in combining and calculating data from disparate enterprise systems such as SAP and i2. In addition, one has to overcome associated problems such as duplication of data, which necessitates data synchronization. Multiple systems in different lines of business access data in different ways, based on their requirements and specific technologies. There are no consistent practices or techniques in place for the access and update of data. Apart from this, the same data element may be stored in, and accessed from, multiple formats prevalent in different databases.
An enterprise dashboard is fundamentally a representation of core business data that is available in multiple formats. The data elements may be rolled up to different levels to suit the requirements of the audience in question. Therefore, one of the fundamental issues involved in the development of a dashboard is to obtain quality data from a variety of sources without difficulty.
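The roll-up idea above can be sketched minimally as follows, assuming a hypothetical uniform KPI interface placed over the disparate sources; the source names and the KPI are invented for illustration.

```java
import java.util.Arrays;
import java.util.List;

// Sketch: one KPI (total monthly sales) rolled up from several disparate
// sources through a uniform, hypothetical data-access interface.
interface KpiSource {
    String name();
    long monthlySalesCents();   // each source exposes the KPI uniformly
}

public class DashboardSketch {
    // The dashboard consumes the common interface, not the underlying
    // ERP/SCM/CRM technologies, and rolls the figures up into one view.
    static long totalSales(List<KpiSource> sources) {
        return sources.stream().mapToLong(KpiSource::monthlySalesCents).sum();
    }

    public static void main(String[] args) {
        List<KpiSource> sources = Arrays.asList(
                new KpiSource() {  // stand-in for, say, an ERP system in Asia
                    public String name() { return "ERP-Asia"; }
                    public long monthlySalesCents() { return 120_000; }
                },
                new KpiSource() {  // stand-in for, say, an SCM system in Europe
                    public String name() { return "SCM-Europe"; }
                    public long monthlySalesCents() { return 80_000; }
                });
        System.out.println(totalSales(sources)); // prints 200000
    }
}
```

The hard part in practice is behind the interface: each adapter must resolve the format, latency and ownership issues described in the sections that follow.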
Large enterprise IT groups typically have a diversified portfolio of systems that has grown over time and caters to a wide collection of consumer applications and end users. Unstructured growth of applications leads to multiple monolithic applications that efficiently deliver core functionality but fail to exist in harmony with other applications. Enterprise dashboards therefore must communicate with multiple business applications to display the correct information, and must apply the correct rules in order to display certain data.
ARCHITECTURAL CHALLENGES
Critical data scattered across geographies is a barrier to effective decision making [Fig 1]. The diversity of the data sources involved makes the management of data sources, and the centralization of common data across lines of business and geographies, very important for fulfilling the business requirements. Following are some of the specific issues and challenges in this regard:
1. View of data: The scatter of critical data across multiple databases in various businesses and geographies is a serious issue obstructing a clear view of the data. For example, scattered customer data impedes the identification of key customer attributes. Independent, unmanaged changes to systems and databases lead to confusion, which in turn results in higher effort and cost to obtain consolidated customer information [1].
2. Systems logic: A wide spread of systems and databases may affect the performance of business applications owing to complex business logic. Applications are burdened with embedded logic associated with multiple data elements residing in different databases. Chaos may be the rule in the absence of a clear mapping between business processes and the data elements that are needed to fulfill those processes.
3. Integration costs: Technology and
toolkits available for data access have
matured signicantly whether they are
in the J2EE space or the Microsoft space.
Depending on the manner in which
specific technologies have evolved in a
given Line of Business (LOB), the data
access techniques may vary between
applications and can result in higher costs
incurred for integration across LOBs.

[Figure 1: Current State of Systems and Databases. A dashboard reaches business applications (USA, South America), SCM (Europe) and ERP (Asia) through a data access tier fronting legacy mainframes (USA), legacy systems (Europe), SCM/ERP/CRM (Asia), Oracle (South America) and an RDBMS (USA), with data synchronization across data common to business lines/geographies. Source: Infosys Research]
4. Channels of update: The core business
data in a specific database may be
updated by various means: direct,
indirect or by other channels. Data can
either be updated directly through the
business applications dedicated for that
LOB or indirectly updated through
synchronization with databases of other
LOBs. It can also be updated through
other channels such as IVR or by a
customer service representative over
telephone or manually.
Multiple channels of data
updates can have a bearing on user
experience. For example, a user may request
a change in personal information over
the phone or IVR and attempt to look up
the same data online. Latencies in data
update may result in incorrect data being
displayed.
5. Resource contention: Heavy loads on
databases are common during a bulk
update. For example, there may be
certain periods of time when customer
enrollment through the submission of
paper forms for some specific financial
offerings may increase. Such a bulk update
might involve measures to lock the
database and consequently users
who are making changes to the online
application are likely to experience
inconsistencies.
6. Latencies in data retrieval: The spread
of critical data across multiple databases
and the associated redundancy may
cause additional latencies for certain applications.
This can happen due to the bulk update
scenario illustrated above. Such a scenario
may also arise due to overheads associated
with data synchronization. Alternatively, this
may happen when common, redundant data
is retrieved from a database associated with a
certain line of business that also retains line of
business data.
Considering the above challenges
to effective implementation of EDD, Web
services are an effective technology to power
EDDs. Web services, by being loosely coupled
and interoperable, enable easy extraction of
appropriate data from enterprise systems
and facilitate executive decision-making.
Web services enable the EDD to process enterprise-
wide data to arrive at appropriate metrics
that can then be displayed on a portal
screen. A high degree of personalization can
also be made possible depending upon
the type of metrics chosen by the decision
maker. The result is a shorter lead-time for
decision-making as timely information is made
available with access to data in multiple
sources.
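The aggregation pattern described above can be sketched in miniature: one loosely coupled fetcher per data source, merged by the dashboard layer. This is an illustrative Python sketch, not the article's implementation; the regions follow the scenario, but the figures and function names are invented, and real fetchers would be Web service calls rather than local stubs.

```python
# Illustrative sketch: a dashboard aggregating one metric (revenue)
# from several loosely coupled data services. In a real deployment each
# fetcher would be a Web service call; here they are stubbed as plain
# functions. All names and figures are hypothetical.

def fetch_revenue_usa():
    return {"region": "USA", "revenue": 120.0}

def fetch_revenue_europe():
    return {"region": "Europe", "revenue": 95.5}

def fetch_revenue_asia():
    return {"region": "Asia", "revenue": 210.25}

def aggregate_revenue(fetchers):
    """Call each data service independently and merge the results.

    Because the services share only a contract (the shape of the
    returned dict), a new region can be added without touching the
    existing fetchers."""
    rows = [fetch() for fetch in fetchers]
    total = sum(row["revenue"] for row in rows)
    return {"rows": rows, "total": total}

report = aggregate_revenue([fetch_revenue_usa,
                            fetch_revenue_europe,
                            fetch_revenue_asia])
print(report["total"])  # 425.75
```

The loose coupling shows up in `aggregate_revenue` depending only on the returned dict shape, never on how each source produces it.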
ILLUSTRATIVE BUSINESS SCENARIO
The automotive industry is beset with
adversities: global over-capacity, decreasing prices
and margins, consolidation and so on. Access
to critical information and effective decision-
making therefore are of prime importance.
Large automotive companies operate across
geographies.
Consider the case of a multi-billion dollar
automotive company with business units spread
all over the world. Massive consolidation in the
automotive industry and within the company has
resulted in a wide variety of disparate IT assets
residing in different geographies and supporting
different business models.
Seamless sharing of information across
all operations of the automotive company,
however, is necessary to leverage the benefits of
consolidation such as cost reduction and effective
information access. Appropriate alignment
between geographically dispersed business
units and functional groups is required to create
a unified view of sales, dealers, consumers,
products, and services. Currently, each business
unit, functional group, and brand of the company
operates through independent systems, programs
and so on. As a result, there is limited synergy
across the organization, leading to inefficiencies
and lack of coordination.
The automotive company is structured
across geographies as strategic business units
(SBUs). The SBUs are a mix of both legacy as well
as modern systems (for example, ERP, SCM and
CRM). The systems of the company across SBUs
are as follows:
1. North America [Legacy systems
(Mainframes)]
2. Europe [Legacy + SCM]
3. Asia [ERP + SCM + CRM]
4. South America [Java based + Oracle
RDBMS]
The company wants to offer its CXOs a top-
level view of performance of its SBUs. The KPIs
required are the following:
Revenues by SBU
1. Drill down (break-up by product
categories)
2. Drill down (break-up by brands)
Revenues target vs actual by SBU
1. Drill down (break-up by product
categories)
2. Drill down (break-up by cars)
Revenue forecast by SBU
1. Drill down (break-up by product
categories)
2. Drill down (break-up by cars)
Sales volumes by SBU
1. Drill down (break-up by product
categories)
2. Drill down (break-up by cars)
Production volumes by SBU
1. Drill down (break-up by
manufacturing plants)
2. Drill down (break-up by cars)
Top dealers by SBU
1. Drill down (break-up by product
categories)
2. Drill down (break-up by cars)
Profit margins by SBU
1. Drill down (break-up by product
categories)
The above scenario illustrates the key
decision-making metrics required by the CEO
and the disparate nature of the systems in place
within the organization.
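The KPI list above is essentially a drill-down hierarchy: a top-level figure per SBU that can be broken up along one or two dimensions. The sketch below is illustrative only; the SBU names follow the scenario, but the figures, brand names and the `drill_down` helper are invented.

```python
# Illustrative sketch of the KPI drill-down structure listed above:
# each SBU carries a headline figure plus break-ups along dimensions
# such as product category and brand. All figures are invented.

kpis = {
    "Asia": {
        "revenue": 300.0,
        "by_product_category": {"sedans": 180.0, "suvs": 120.0},
        "by_brand": {"BrandA": 200.0, "BrandB": 100.0},
    },
    "Europe": {
        "revenue": 250.0,
        "by_product_category": {"sedans": 150.0, "suvs": 100.0},
        "by_brand": {"BrandA": 90.0, "BrandB": 160.0},
    },
}

def drill_down(sbu, dimension):
    """Return the break-up of an SBU's revenue along one dimension."""
    return kpis[sbu]["by_" + dimension]

print(drill_down("Asia", "brand"))
```

A dashboard portlet would render the headline `revenue` first and call `drill_down` only when the executive expands a row.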
A Web services based EDD architecture
is proposed to provide a clear top-level view to
the executive decision makers of the company.
Proposed Architectural Solution
The key architectural requirements for the
proposed framework include: ability to integrate
data sources in a loosely coupled manner,
ability to provide a unied view of information
persistent across varied data sources, use of design
patterns wherever appropriate, and buy instead of
build: use proven commercial/open source
products/frameworks instead of building from
scratch [Fig 2].
The architecture consists of two main tiers:
1. The Data Access tier
2. The Enterprise Dashboard tier
(the EDD provides an aggregated view of
the enterprise information collected from
varied data sources that are geographically
dispersed).
Data Access Tier: The core Enterprise
Information Integration (EII) functionality is
implemented by the data access tier. The data
access tier will focus on the various issues
and challenges discussed in detail above.
While most conventional data integration
solutions result in a number of touch points
between the business logic and the integration
logic, Web services provide a loosely coupled
and extensible solution wherein different types
of data sources can be integrated with the EDD
without requiring many changes to the existing
functionalities. The solution to the problem of
the automotive company lies in providing a
consolidated view of data across the enterprise
by eliminating multiple data update channels
and by providing a single suite of applications
to access/update data [2]. Implementing this
suite of applications as services would eliminate
any lock-in on protocols and integrate easily in
a heterogeneous environment. Specific aspects
of the new architecture to address these issues
would be as follows:
Provide a composite view of data
tailored to business processes and usage
patterns [3]
Develop shared data services that can
retrieve information for a set of related
applications. Each client application
uses a subset of the classes managed by
the data service, but the data service
manages the relationships between the
classes and ensures that each application
is aware of all data changes, regardless of
the source of change
Design service contracts based on the needs
of individual LOBs/client applications
Provide information on demand (in
response to service requests) by optimizing
performance and caching heavily accessed
data
Enforce data security by authenticating
invocation clients and protecting against
unauthorized data access
Eliminate back door updates by
providing the service layer that guarantees
data consistency across all systems.
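The shared data service described in the bullets above can be pictured as a single facade through which every read and write passes. The sketch below is a minimal, hypothetical illustration assuming an in-memory store and cache; a real implementation would front the regional databases and expose these operations as Web services.

```python
# Minimal sketch of a shared data service layer: all reads and writes
# for common data go through one class. Back-door updates become
# impossible because the store is reachable only through the service,
# and heavily accessed data is served from a cache that writes
# invalidate. Store, cache and keys are hypothetical stand-ins.

class SharedDataService:
    def __init__(self):
        self._store = {}   # stands in for the centralized common database
        self._cache = {}   # heavily accessed data is served from here

    def update(self, key, value):
        # The single channel of update: write the store, drop stale cache.
        self._store[key] = value
        self._cache.pop(key, None)

    def read(self, key):
        # Serve from cache when possible; otherwise load and cache.
        if key not in self._cache:
            self._cache[key] = self._store[key]
        return self._cache[key]

svc = SharedDataService()
svc.update("customer:42:address", "12 Main St")
print(svc.read("customer:42:address"))   # 12 Main St
svc.update("customer:42:address", "99 Oak Ave")
print(svc.read("customer:42:address"))   # 99 Oak Ave
```

The second read returns the new address because `update` invalidates the cache entry, which is the consistency guarantee the service layer is meant to provide.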
Enterprise Dashboard Tier: The Enterprise
Dashboard provides a user interface and takes
care of other non-functional tasks such as security,
internationalization, scalability, availability and
caching. This allows incremental adoption of the
Web services strategy, which is a big draw for
organizations that have a huge IT infrastructure
to migrate.
The specific characteristics of the overall
architecture are as follows:
Web services created for the purpose
of providing functionality in a loosely
coupled, implementation independent
manner. These implementations will
leverage the core business logic in the
existing applications
A centralized database that maintains
data common to multiple lines of business
and geographies. This will require
rationalization and consensus building
of the data models that satisfy current
and anticipated needs. This database
will become the system of record and the
owner for the common data
A set of services that manage data access/
update for all databases. These services
will manage all data access/update and
will become the de facto data access
layer for all applications. Transaction
management for complex transactions
that span multiple databases will not be
performed at the service layer. The services
will provide access to multiple data
sources; it will be the responsibility of the
data consumers to manage heterogeneous
transactions [4].
LOB data continues to reside in existing
databases. Depending on the specic line
of business and the problems (or lack
thereof) associated with the business
data, this data will continue to remain in
the existing databases. Physically, the data
may be migrated to one common platform
if it makes sense from a strategic vendor
management or licensing perspective.
However, the important point to note
is that data will be segregated on the basis
of usage by lines of business with the shared
data services layer simply providing a
uniform mechanism of accessing the data.
Infrastructure functionality consolidated
and implemented as a Web service to
allow additional consumers to leverage the
same rules and components. An important
benet is in the area of security, specically
authentication and access control.
Scalability and availability of various
applications handled in a streamlined
manner by using Web services.
Implementing strict service contracts
with specic levels of service decouples
the service consumers from actual service
implementations. This enables the service
implementer to have the flexibility of
deploying service implementations in the
most optimal manner. Apart from this,
Web service management tools may be
used for service lifecycle management.

[Figure 2: Web Services based EDD Architecture. The dashboard (WSRP-compliant portlets) reaches business applications through a data Web services tier (functionality-based, ERP and SCM Web services) fronting the regional systems: business apps in the USA and South America, SCM in Europe, ERP/SCM/CRM in Asia, legacy mainframe and RDBMS in the USA, Oracle in South America, plus a centralized common data store. Source: Infosys Research]
Benefits of Web Services
It is important to analyze the actual benefit
gained from the migration to Web services
that helps realize an EDD. This analysis will help
in identification of the factors that may be used
for the calculation of Return on Investment
(ROI) [5]. The various factors associated with
the technical and business viewpoints are
illustrated below:
Cost reduction: In the existing setup, as
individual LOBs manage various aspects,
including infrastructure and common data, each
of them incurs these costs in addition to costs
incurred to address customer experience issues
and troubleshoot problems.
With the centralization of common
functionality and data, and the elimination
of multiple redundant data sources and
channels of update, a signicant portion of this
cost may be eliminated. There may be higher
upfront costs involved in implementing the
Web services, but the long-term costs related
to recurring yearly expenses should be lower at
least as far as the common data, processes, and
functionality are concerned.
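The cost argument above can be made concrete with a simple break-even calculation: a higher upfront Web services cost set against lower recurring yearly costs. All figures below are invented for illustration.

```python
# Simple break-even sketch for the cost argument above: higher upfront
# migration cost versus lower recurring yearly cost once common data
# and functionality are centralized. All figures are invented.

upfront_webservices = 500_000   # one-time migration cost
yearly_webservices = 100_000    # shared services, shared data
yearly_current = 250_000        # each LOB duplicating data and fixes

def breakeven_years(upfront, yearly_new, yearly_old):
    """Whole years until cumulative savings cover the upfront cost."""
    saving_per_year = yearly_old - yearly_new
    if saving_per_year <= 0:
        return None  # migration never pays back on recurring costs alone
    years, cumulative = 0, 0
    while cumulative < upfront:
        years += 1
        cumulative += saving_per_year
    return years

print(breakeven_years(upfront_webservices,
                      yearly_webservices,
                      yearly_current))  # 4
```

With these invented numbers the yearly saving is 150,000, so the 500,000 upfront cost is recovered during the fourth year, which is the kind of ROI factor the paragraph above refers to.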
Flexible business applications: Web services with
a robust service contract and SLAs will allow new
applications to be built faster and cheaper. This will
pave the way for modifications to business processes
as well as the ability to meet the IT challenges of
new business requirements. Essentially, this will
result in one less problem to solve for the CIO.
The streamlined availability of data through the
shared data services will ease the implementation
of new business applications. Integration
across LOBs will be significantly easier as
common data will be shared and accessed in a
standard fashion.
Transparency for LOBs: In the existing model,
applications had to be designed to handle
issues with respect to heterogeneous systems,
data redundancy and data synchronization.
Business processes also had to be designed
to handle anomalies in synchronization.
The implementation of Web services will
completely insulate LOB applications from
redundancies in data storage and issues with data
synchronization.
Unified Patterns: As discussed earlier, the
dependency on proprietary techniques and
frameworks used for component development in
various LOBs can be eliminated by implementing
Web services. For example, in the case of data
services, this will provide unified data usage
patterns across applications/LOBs thereby
enabling developers to talk a common language.
The redesign and implementation of shared data
services results in the data being available at a
granularity that is based on real business needs.
Consumer applications will not have to retrieve
data in raw format and assemble pieces to create
front-end views.
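The granularity point can be illustrated: instead of consumers fetching raw rows and assembling them, the data service returns a view already shaped to the business need. The names and data below are hypothetical.

```python
# Hypothetical sketch of a coarse-grained data service: the service
# assembles raw pieces (a customer record plus its account rows) into
# the business-level view a consumer needs, so front ends never
# reassemble raw data themselves.

RAW_CUSTOMERS = {42: {"name": "Acme Corp", "segment": "enterprise"}}
RAW_ACCOUNTS = [
    {"customer_id": 42, "balance": 1200.0},
    {"customer_id": 42, "balance": 800.0},
]

def customer_summary(customer_id):
    """One coarse-grained call returns one ready-to-display view."""
    customer = RAW_CUSTOMERS[customer_id]
    balances = [a["balance"] for a in RAW_ACCOUNTS
                if a["customer_id"] == customer_id]
    return {
        "name": customer["name"],
        "segment": customer["segment"],
        "accounts": len(balances),
        "total_balance": sum(balances),
    }

print(customer_summary(42))
```

Every consumer of `customer_summary` sees the same usage pattern and the same assembled view, which is the "common language" benefit described above.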
CONCLUSION
Gaining competitive advantage in a global
business environment means that managers
should have easy access to information
that facilitates informed decision-making.
The technology adoption trend is shifting to a
scenario where decision makers should be
able to access the KPIs of the enterprise, and of
business units distributed across the globe,
using various devices.
A Web-services-based EDD solution has
tremendous potential to solve this business
problem by enabling information aggregation
from multiple disparate systems spread across
the enterprise. Web services-based EDD
costs less as it leverages the existing legacy
investments. The reference architecture leverages
Web services and handles all the functionalities
as desired for an EDD. The proposed architecture
is based on live experience in designing a Web
services based SOA. The architecture described
above provides the core service orientation
to the IT architecture of an automotive firm
and can be considered as a baseline architecture
for all EDD enterprise enablement.
The architecture enables truly flexible
aggregation of information from the
internal systems of the organization using
Web services, and offers a single point of
contact for decision-making information.
Further, this enables existing legacy
applications to take part in the overall business
architecture.
Integration solutions from EAI vendors
involve large initial investments. Moreover, EAI
products are not very flexible, do not fully support
incremental investments and are proprietary.
It is also not easy to work with EAI
systems while integrating with the IT systems of
different partners running heterogeneous systems.
The above problems need to be
addressed while implementing a Web services
based EDD solution. This is because the solution
should be able to interact with various systems,
should be flexible, should support incremental
investments and should be able to interact with
the systems of the firm's partners. Web services
address most of these needs, allowing data
from various disparate systems to be consolidated
into a single view. The following are
some of the benefits that may be obtained in a
retail scenario by implementing an EDD using
Web services:
Web services based EDD allows the
decision makers to have a single unified
view of data thereby enhancing the quality
of decision making
Web services enable aggregation of data
from the disparate systems of the firm
thereby facilitating easy data extraction
from various databases, applications
and processes. This would empower the
decision maker to make decisions based on
the sales trends and customer preferences
in various channels and forecast future
sales across various channels and also have
better inventory planning.
REFERENCES
1. Data Services for Next-Generation SOAs,
Christopher Keene, Web Services Journal,
December 2004
2. How Do You SOA Enable Your Data
Assets? Jim Green, DM Direct Newsletter,
October 15, 2004
3. 12 Steps to Implementing a Service
Oriented Architecture, David S.
Linthicum, White Paper, Grand Central
Communications, October 2004
4. Architecting Data Assets, Jeff Schulman,
Gartner Research, August 2004
5. Teaming Up Portals and Web
Services, A. Parikh, R. Pradhan and
N. Shah, Java Pro Magazine, May 2004
6. Single Sign-on Simplicity with SAML,
Jon Byous. Available at http://java.sun.com/features/2002/05/single-signon.html
Managing Risks in IT
Initiatives: A CXO Guide
Unforeseen risks can turn out to be a significant
impediment to realizing the business value of IT
By Dayasindhu Nagarajan, Sriram Padmanabhan and Jamuna Ravi
John Calvin, the CIO of one of the world's
biggest investment banks, was unable to
biggest investment banks, was unable to
concentrate on his golf and this had nothing
to do with the New England autumn. His
golfing partner and colleague, Thomas Hobbes,
VP Customer Service (North America), was
rambling on about how the new billing system
was a white elephant that his department did
not want to use. John was intrigued. The new
million dollar billing system used state-of-the-
art technology, was three times quicker than the
old system and had been developed with inputs
from users in the customer service division. In fact,
this was the most successful IT project that the
bank had executed in terms of cost, quality and
on time completion. The IT services vendor was
the best the bank had ever worked with and had
been given the mandate for rolling out the new
billing system in its European operations.
The game was over sooner than usual.
Even as John found himself trying to relax, the
billing system still played in his head. Each
passing moment seemed to bring the same
questions: What had gone wrong with the
perfect billing system project?
CURRENT APPROACH TO RISK
MANAGEMENT
The quandary that John Calvin finds himself in
is not isolated. According to a Standish Group
report, in 1995 only 16.2 per cent of software
projects on average were completed on time
and on budget [1]. A later study by the Standish
Group in 2006 shows that the situation is getting
better and that on average 35 per cent of
software projects were completed on time and on
budget [2].
The question is whether the 35 per cent
average is good enough for critical projects and
whether we can improve this average. In our
experience, a large number of these failures are
a result of inadequately managing risks that are
external to the IT project but have an impact on
the project and outcome.
Risks can be thought of as negative
outcomes during or on completion of an IT
project. The size of risk is the loss that is incurred
if that negative outcome occurs. Therefore, risk
management can be conceptualized as both a
strategy and a process to mitigate risks in the
lifecycle of an IT project.
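The notion above, the size of a risk being the loss incurred if the negative outcome occurs, is commonly combined with the outcome's probability into a risk exposure figure (probability times loss), which lets risks be ranked for mitigation effort. The sketch below is a generic illustration; the risks, probabilities and losses are invented.

```python
# Risk exposure = probability of the negative outcome x loss if it
# occurs. Ranking risks by exposure shows where mitigation effort pays
# off most. All entries below are invented for illustration.

risks = [
    ("requirements churn",  0.40, 200_000),
    ("schedule overrun",    0.25, 300_000),
    ("key staff attrition", 0.10, 500_000),
]

def rank_by_exposure(risks):
    """Return (name, exposure) pairs sorted from largest exposure down."""
    exposures = [(name, prob * loss) for name, prob, loss in risks]
    return sorted(exposures, key=lambda pair: pair[1], reverse=True)

for name, exposure in rank_by_exposure(risks):
    print(name, exposure)
```

Note that the biggest single loss (staff attrition) is not the biggest exposure; the ranking is what keeps mitigation effort proportionate.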
IT project risks have been identified for
about three decades now and most enterprises
have been practicing risk management
techniques to mitigate them successfully. The
responsibility of managing risk lies with the
project manager, who usually follows a
structured approach involving the identification
of the risks that impede project implementation,
assessing the impact of each risk and developing
corresponding risk mitigation plans. A list
of the important risks in this category would
include resource shortfalls, unrealistic schedules
and budgets, a continuing stream of requirement
changes, shortfalls in externally furnished
components, real-time performance shortfalls
and strained computer science capabilities [3].
Many of the mitigation plans for project
risks are usually short-term in nature. A typical
example of an IT project risk is that of new
developers not producing software in concordance
with standards. Risk management involves
enforcing a standardized variable definition that
is maintained in a central repository. This makes
it easier for testing, debugging and for a new
programmer to get a quick grasp of the system.
Enterprises tend to treat risk management
as a mechanism to set back on course IT projects
that have gone awry, resulting in many IT projects
falling short of promised quality, cost and
schedules. Large program initiatives typically
include, in addition to the project, a set of resource
suppliers and a set of business benefit recipients
after deployment. The success of such programs is
measured in terms of those business benets that
have accrued to the enterprise that are attributable
to this project. An understanding of such external
program risks is crucial to understanding risk
management. Risk management as practiced today
seldom addresses such external program risks.
Our hypothesis is that external program risks
should be addressed in a risk management
framework to enhance the success rate of critical
projects.
We believe that the proposed rethink on
managing risks, especially program risks, will
considerably help in improving the success and
acceptance of IT projects.
AN INTEGRATED APPROACH TO RISK
MANAGEMENT
A new paradigm on risk management is required
to overcome the limitations of a risk management
approach that focuses only on project
implementation risks. The limitations have
been exposed by unforeseen risks derailing IT
projects. Project managers usually do not have the
visibility or control over these external program
risks. A Delphi exercise based on existing research
in this domain with five project managers with
seven or more years of experience in managing
IT initiatives revealed some important program
risks as seen in Figure 1 [4].
Even though these external program risks
are usually beyond the direct responsibility of
the Project Manager, it is imperative that the
Project Manager is aware of them at the beginning
of the project. Mitigation plans for such risks are
typically much more complex as they cannot be
managed in isolation by the Project Manager.
These risks need to be managed with the active
support of the other stakeholders in the project
including the sponsor and the end user. For
example, an investment bank may be
implementing an IT project to consolidate certain
accounting reports. Before the completion of this
project, the project manager becomes aware of a
statutory requirement to comply with the
Sarbanes-Oxley Act, which is far more stringent
in consolidation requirements than the scope of
the project. This statutory change in the business
environment needs to be foreseen even before the
accounting consolidation project commences.
Wherever the project scope is well defined
and there are few external dependencies, even
isolated risk management techniques can suffice.
Isolated risk management techniques that work
at the project level have been in existence for a
few decades and they are well understood and
practiced, especially by the IT services vendors
engaging in outsourcing and operating at high
process maturity levels.
The challenge today lies in successfully
managing programs that involve a high number
of external dependencies. These typically involve
strategic initiatives for achieving key business
objectives and require a large number of interfaces
and stakeholders. It is in such programs that
isolated risk management techniques fail. The
relationship between risk management and the
external dependencies is depicted in Figure 2.
Making the set of program risks visible
at project management level implies that the
CXOs need to take an active role in ensuring that
the project manager is empowered to be a part
of discussions on aligning IT projects with the
business strategy at the program level. This is
likely to drive both effectiveness and efciency of
the project manager to deliver an IT solution that is
acceptable to all stakeholders. A risk management
strategy that follows from the business needs
would ensure a smoother transition of the new
IT system to users. As part of the risk mitigation
Figure 1: Elements of External Program Risks (Source: Infosys Research)

Business environment
1. Unstable corporate environment
2. Change in ownership or senior management
3. Discontinuity in the industry or sudden changes in the statutory environment
4. Lack of top management commitment to the project
5. Artificial deadlines driven by statutory or external events

Business-IT alignment
1. Project not based on a sound business plan
2. Unforeseen financial constraints
3. Fluidity of program

User acceptance
1. Failure to identify all stakeholders
2. Failure to get buy-in from all stakeholders
3. Failure to gain user commitment
4. Conflict between user departments
5. New and/or unfamiliar subject matter for users

Organizational politics
1. Pre-emption of project by a higher priority project
2. All-or-nothing projects

Change management
1. Not managing change in organization structure properly
2. Not managing change in organization processes properly
3. Inadequate program management structure

Forecasting and estimation
1. Underestimating resource estimates
2. Lack of visibility of growth plans or non-functional parameters

Outsourcing
1. Friction in transferring knowledge about the program
2. Mismanaging the transition from in-house to outsourced

Absence of a project risk management strategy
Figure 2: Different Approaches to Risk Management (Source: Infosys Research)

External Dependencies  | Isolated Risk Mgmt | Integrated Risk Mgmt
Low (Project)          | Optimal            | Supra-optimal
High (Program)         | Sub-optimal        | Optimal
strategy, interface points between different IT
systems and appropriate governance models need
to be identied.
The Integrated Risk Management process
identifies the IT program risks first. This step
describes the risks and gives the context in
which they are likely to occur. An analysis of the
program risks then segregates them into those
pertaining to the project and those
external to the project. The external risks can be
managed with inputs from CXOs who are more
aware of the larger organizational dynamics
while the Project Manager, as usual, manages
project risks. A key feature of this process is that
the impact of program risks needs to be assessed
at the project level since they have a direct impact
on project success. A typical process that is part
of the risk management strategy used to mitigate
both program and project risks is depicted
in Figure 3.
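The process described above can be sketched as a simple triage function: external risks are escalated for CXO input, while project risks are either mitigated or consciously accepted based on exposure. The threshold, labels and risk data below are invented for illustration.

```python
# Sketch of the integrated triage step: program risks are split into
# project risks (handled by the PM) and external risks (needing CXO
# input); each is then mitigated, escalated or consciously accepted
# based on impact and probability. Threshold and data are invented.

def triage(risk):
    """risk: dict with 'scope' ('project' or 'external'),
    'impact' (loss if the risk occurs) and 'probability' (0..1)."""
    exposure = risk["impact"] * risk["probability"]
    if risk["scope"] == "external":
        return "escalate to CXO"          # PM lacks visibility and control
    if exposure >= 50_000:
        return "mitigate"                 # PM plans a mitigation
    return "ignore (accepted risk)"       # below the materiality bar

print(triage({"scope": "external", "impact": 1_000_000, "probability": 0.2}))
print(triage({"scope": "project", "impact": 200_000, "probability": 0.5}))
print(triage({"scope": "project", "impact": 40_000, "probability": 0.1}))
```

The key design point mirrors the process: the scope check comes first, because an external risk is escalated regardless of its computed exposure.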
The subsequent sections in the paper
show how the proposed risk management
strategy and process have worked in real life
situations and the lessons learned from these
experiences.
CASE STUDIES THAT VALIDATE THE NEW
APPROACH
Case 1: Financial Risks
The investment banking arm of a large Wall
Street bank embarked on an ambitious program
to convert its entire suite of legacy applications
to a new Web-based architecture. After a lengthy
vendor selection process, the company undertook
a detailed requirement analysis exercise and
followed it up with a design and prototyping
stage. Midway through this stage, after investing
close to US $500,000, the company put a sudden
end to the program as budgets had dried up,
and the new economic climate did not allow for
the increased budgets. In this case, the Project
Manager had not taken into account a key
financial risk, leading to wasteful expenditure
for the bank.

Case 2: Internal Silo And Political Risks
A leading bank with a worldwide presence
discovered that its software inventory included
three applications that performed the same
business function (trade settlement), each
with some degree of localization to cater
respectively to the American, European and
Asia Pacific stock markets. Each location had its
own IT establishment, linked closely with the
corresponding business unit. The firm launched a
major code convergence program, but recognized
at an early stage the urgent necessity of aligning
the IT teams as a pre-requisite to program success.

[Figure 3: An Integrated Risk Management Process. Program risks are planned for and split into external risks (handled with CXO inputs), project risks and certainties; the PM analyzes impact and probability and decides whether to mitigate, escalate or ignore each risk. Source: Infosys Research]
Accordingly, the company brought
the maintenance and support of all settlement
applications worldwide under the ownership
of a single vendor, who then assisted them in
drawing up and implementing a road map to
convergence.
The key risk in this program was that
of disparate groups with individual political
agendas ultimately leading to confusion and
failure. Identifying this early and tackling it
resulted in the success of the program.
Case 3: Stakeholder Risks
A Fortune 500 company outsourced the design
and development of a Web portal that was to
service both internal and external users. The
company identified the stakeholders: business
users from the operations division, report
recipients from the marketing department, owners
of various applications that were required to feed
data into the portal, software analysts, product
vendors, software development consultants,
database administrators and the quality assurance
department.
After the detailed requirements elicitation
process, the consultants submitted a design that
involved the intranet and extranet applications
residing on the same physical machines. All
identified stakeholders found the design elegant
and approved it. A few weeks before the
actual deployment of the application, the
network security administrators were shown
the deployment view and they raised a valid
concern from the point of view of the company's
data security policies. This concern scuttled the
proposed architecture, forcing a major re-design
exercise that caused the project to miss schedules
and greatly overshoot budgets.
By not undertaking a risk evaluation
exercise at the commencement of the project,
the Project Manager had failed to identify a key
stakeholder, resulting in losses, in spite of having
sought and secured buy-in from the rest of the
stakeholders.
Case 4: User Acceptance Risks
A large financial conglomerate with multiple
product lines had unsuccessfully attempted a
customer integration hub project twice in the
past. When the company decided to take it up
for the third time, the project manager assigned
decided to begin the project by studying the key
reasons for the previous failures, and thereby
enumerating the risks for the current attempt.
The primary risk that emerged from this exercise,
surprisingly, was that of user acceptance: however
rich in functionality the application was, business
users preferred to revert to the legacy applications
that they had grown familiar with. Armed with
this insight, the project manager invested energy
and money on hiring user interface modelers and
planned the prototyping phase in great detail,
insisting on involving certain influential users
in the approval process for the user interface
design. At one stroke he had converted his biggest
prospective critics into the most vociferous
champions for the application. Six months later,
the customer integration hub went live on budget
and schedule. The users gave rave reviews to the
customer integration hub project.
Once again, an integrated risk evaluation
exercise was instrumental in uncovering a
hidden risk that might have led to the failure
of the program. Instead of focusing merely on
technical project risks, the project manager took an
integrated approach linking the user perspective
challenges, which made the difference between
success and failure.
Case 5: Global And Local Risks
In the same program as discussed in Case 4, one
risk that was recognised extremely early was that
of the high degree of external dependency. The
program manager was an American, while the
application was to be rolled out on a worldwide
scale. It was a logistical nightmare to be able
to engage business users from other regions to
define functional requirements tailored to
local situations, and IT departments that could
detail the interfacing requirements with their
respective local applications.
When the program manager found that
he was unable to bring all stakeholders onto a
common risk management agenda, he quickly
scaled down the scope of work to US operations
as a pilot exercise. Once the application was
rolled out successfully in the US region to
appreciation from the business user community,
it became much simpler for the Project Manager
to propagate global cognizance of the necessity to
manage the program in an integrated manner.
The risk evaluation exercise allowed the
project manager to discover that, due to various
reasons, he was unable to assess and mitigate
risks on a worldwide scale, mainly on account
of reasons of organizational structure that were
beyond his control. Consequently, the only
recourse available to him was to reduce the scope
of the program to a manageable level, which
would enable maximizing chances of success of
the project despite the isolated risk mitigation
strategy.
Case 6: System Risks
Continuing with the same program discussed
in Cases 4 and 5, the customer hub application
needed to interface with twenty different upstream
applications. One of the pre-requisites to the
System Test phase in the development lifecycle
was the need for sample feeds in the correct
format, from each of the interfacing applications.
Non-availability of these test feeds on time was
recognized as a risk to completing the project on
schedule. When the project manager convinced the
IT teams that owned the interfacing applications
to integrate their project schedules and risks, he
discovered that two of the applications could
not meet the deadlines imposed by his schedule,
for valid reasons. He therefore revisited his own
schedule and reprioritized the testing of these
two feeds, thus ensuring that the overall schedule
was still met.
When the project manager took an
integrated view of the program risks, what
he had initially identified as a project risk
turned out to be a certainty rather than
a risk. He then merely had to plan for it.
The three cases 4, 5 and 6 pertaining to
this program illustrate the value of integrating
one's risk management strategy across a variety
of parameters, viz.:
Across business/IT stakeholders
Across geographies/business divisions
Across application development teams.
Case 7: Vendor Risks
A telecommunications and entertainment
major was unhappy with the amount it was
spending on the maintenance and support of
its vast inventory of applications, many of
which had been outsourced to a large local
vendor. The company took the decision to
change vendors and chose an offshore vendor
to reduce costs while maintaining quality.
Their first attempt to do this involved transferring
control of a few applications in a piecemeal
manner. The offshore vendor ran into problems
straightaway, owing to the existing vendor's
unwillingness to provide the knowledge, data
and documentation essential to understanding
the applications. The venture
proved a failure because each engagement
had been conceived as an individual project.
The risks in each of these projects were related,
and they had to be tackled at a relationship level,
taking the entire outsourcing deal as a strategic
program.
The CIO recognized this, and shortly
afterwards, launched a Strategic Outsourcing
Program, driving it from the top this time,
in an integrated manner, identifying goals
and targets for each track of the program
that could be rolled up to the program level,
in consonance with the chosen offshore vendor.
INFERENCES FROM CASE STUDIES
In cases 1 and 3, isolated risk management is
the main reason for project failure. The other
case studies illustrate how integrated risk
management contributed effectively to the success
of the project. Cases 2, 4, 5 and 6 illustrate how
the project manager has effectively employed
an integrated risk management strategy across
different stakeholders, geographies/business
divisions and application development teams.
These case studies lead us to conclude that
an effective risk management process would be
one where risks are evaluated at a program level,
rather than the project level. Also, external risks
need to be analyzed with inputs from the CXO
and an integrated risk management strategy
needs to be put in place that mitigates external and
internal project risks and plans for certainties.
It is imperative that CXOs empower the
project managers to work like entrepreneurs
and take a holistic perspective of risk and its
management at each phase of the program, from
conceptualization to execution. This ensures that
the project managers are able to take appropriate
risk mitigation steps that result in successful IT
initiatives.
Our experience leads us to believe
that investing in project managers with the
entrepreneurial spirit is a true challenge for
CXOs. We also believe that this is not easily
achieved by training interventions. Successful
CXOs have created this entrepreneurial
mindset by employing other means, such as
mentorship programs, job rotations across
divisions, and most importantly, by shaping
organizational culture. These enterprises align
rewards and recognition to demonstration of
entrepreneurial traits, as well as provide an
environment that is conducive for project managers
to demonstrate them.
CONCLUSION
Risk management is one of the most challenging
areas of IT project management. This is mainly
because of the emerging new operating models
and changing business environment that have a
significant impact on risk management strategies.
CXOs should share their knowledge of program
risks with the project managers. This ensures that
the project managers leverage the Integrated Risk
Management process better and take appropriate
risk mitigation steps that result in successful IT
initiatives.
REFERENCES
1. Chaos, The Standish Group Report, 1995.
Also available at http://www4.in.tum.de/lehre/vorlesungen/vse WS2004/1995_Standish_Chaos.pdf
2. Standish Group Report: There's Less
Development Chaos Today, March 1, 2007.
Also available at http://www.sdtimes.com/article/story-20070301-01.html
3. Managing Risk: Methods for Software
Development, E.M. Hall, Addison-
Wesley (SEI Series), 1997
4. Strategies for Heading Off Project
Failure, P. Cule, R. Schmidt, K. Lyytinen
and M. Keil, Information Systems
Management, Spring 2000, pp. 65-73.
Measuring the Value of IT in an
Organizational Value Chain
The business value of IT transcends
organizational boundaries in a network economy
and therefore needs to be measured
from a fresh perspective
By Jai Ganesh and Akash Saurav Das
The business value of Information Technology
(IT) is conventionally measured from
three perspectives: productivity, business
profitability, and consumer surplus. According
to the productivity theory, if IT investments are
productive, then more output is realized for a
given quantity of input, leading to increased
value. The business profitability or competitive
advantage theory propounds that if a firm is
able to use IT to create and retain value, then
IT investments can lead to increased business
profitability. The consumer surplus approach
measures the business value of IT based on the
benefits that the consumers accrue.
Research on measuring the business value
of IT is mostly focused at the firm and process levels
[1, 2]. Present-day IT implementations, however,
involve a setup of multiple firms that are in turn
parts of value chains of different organizations.
This brings to the fore various inter-organizational
issues that have not been addressed so far and
that need to be considered while analyzing
as well as measuring the business value of IT.
The question, How does one measure the
business value of IT in an organizational value
chain or network? needs to be answered to
measure the business value that an IT investment
brings to all stakeholders in an inter-organizational
network.
BUSINESS VALUE OF IT IN A VALUE CHAIN/
NETWORK
The widespread use of the internet to transfer
information has resulted in the electronic
communication effect, electronic brokerage
effect, and the electronic integration effect [3].
The electronic communication effect essentially
means that IT may allow more information to
be communicated in the same amount of time
or the same amount of information in less time
and thus decrease the cost of communication
dramatically. The electronic brokerage effect
means that electronic markets, by electronically
connecting many different buyers and suppliers
through a central database, can fulfill this same
function.
The electronic integration effect comes into
play when both the supplier and the buyer use
IT to create joint, inter-organizational processes
at the interface between value-added stages.
This effect occurs when IT is used not just to
increase the speed of communication, but also
to facilitate tighter coupling of the processes that
create and use the information. The results of the
electronic integration effect are savings in time
and a reduction in data entry errors.
Electronic interconnections provide
substantial benefits. The recipients of these benefits,
buyers or suppliers (or both), should be willing to
pay, either directly or indirectly, for these benefits.
While devising methodologies for
measuring the business value of IT in a network,
electronic communication and electronic
integration effects deserve deeper study. The
electronic integration effect gives rise to two
key components: Network externality and
Inter-Organizational Information Systems
(IOIS). Network externality is based on the
network theory and is present when the
value created for customers increases with the
size of the customer base [4, 5]. Positive
network externalities are usually considered
positive consumption externalities where the
utility that a user derives from consumption
of the goods increases with the number of
other agents consuming the goods [4]. IOIS
facilitates technology-based cooperation across
organizations [6, 7]. Organizations are migrating
towards increasingly collaborative economic
relationships [8, 9]. This can be seen in the
form of supply chain integration supported
by IOIS and private marketplaces. IOIS also
supports collaborative applications such as
interactive product improvement, forecasting,
and inventory control, in addition to offering
improved integration, collaboration, and access
to business intelligence.
PROPOSED METHODOLOGY TO MEASURE
BUSINESS VALUE OF IT
The existing methodologies to measure the
business value of IT are based on productivity,
competitive advantage, or business profitability,
and finally on consumer surplus theories that offer
empirically sound answers when the business
value of IT is measured from the perspective of
an individual firm. They may not, however, be the
best approaches to measure the business value of
IT in an inter-organizational network. This could
be due to the following reasons:
The costs and benefits of IT investments
may be shared by multiple firms
The value of IT derived from each
partner's contribution is relative,
not absolute
There is a lack of network-level methods
to measure the business value of IT
Organizational boundaries are
increasingly blurring
The success of IOIS may be intangible,
hence difficult to measure
IT assets may be co-developed by
multiple organizations
Degrees of IT asset utilization may vary
from one organization to the other, thus
directly affecting the value created.
The need therefore is for a network-level
analysis, or a network theory to measure the
business value of IT that addresses, among other
things, the complexities discussed above.
Strategic organizational networks
may assume the forms of strategic alliances,
joint ventures, and long-term buyer-supplier
partnerships [10].
Clearly, a network-level analysis holds
the key to understanding the business value of
IT in today's context because of the increasing
importance given to networks of firms and
their partners. The following model enables
organizations to capture the business value of IT
in a network:
BVITN = F (CBenefit, CoBenefit, IBenefit, CCost,
CoCost, ICost)
where BVITN = Business Value of IT in a value
chain/network
CBenefit = Coordination benefits
CoBenefit = Communication benefits
IBenefit = Integration benefits
CCost = Coordination costs
CoCost = Communication costs
ICost = Integration costs
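The functional form F is left unspecified by the model. As a purely illustrative sketch, one might instantiate it additively, netting each benefit against its corresponding cost; the linear form, the weights and the sample figures below are assumptions for illustration, not part of the proposed methodology.

```python
# Illustrative (hypothetical) additive instantiation of the BVITN model.
# The model defines only BVITN = F(CBenefit, CoBenefit, IBenefit,
# CCost, CoCost, ICost); the linear form and weights are assumptions.
from dataclasses import dataclass

@dataclass
class NetworkITValue:
    c_benefit: float   # coordination benefits (e.g., enhanced price discovery)
    co_benefit: float  # communication benefits (e.g., efficient partner search)
    i_benefit: float   # integration benefits (e.g., integrated processes)
    c_cost: float      # coordination costs
    co_cost: float     # communication costs
    i_cost: float      # integration costs

    def bvitn(self, weights=(1.0, 1.0, 1.0)) -> float:
        """Net business value of IT in the value chain/network,
        weighting each benefit/cost pair (weights are assumptions)."""
        wc, wco, wi = weights
        return (wc * (self.c_benefit - self.c_cost)
                + wco * (self.co_benefit - self.co_cost)
                + wi * (self.i_benefit - self.i_cost))

# Example: a network where integration dominates the value created
v = NetworkITValue(c_benefit=5.0, co_benefit=3.0, i_benefit=8.0,
                   c_cost=2.0, co_cost=1.0, i_cost=4.0)
print(v.bvitn())  # 9.0
```

The weights allow the relative strength of each variable to differ across scenarios, in line with the observation later in the article that the variables would possess varying strengths in impacting BVITN.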
ILLUSTRATIVE SCENARIO
A closer examination of a business-to-business
(B2B) e-marketplace scenario best
illustrates the network theory. The e-marketplace's
origin can be traced back to Electronic Data
Interchange (EDI), which operates on the premise
of low costs of inter-company trade.
Researchers have offered empirical support for
a positive association between integration of
EDI with internal systems and performance
outcomes [11]. The above argument can
be extended to Internet-based transactions
as well.
E-marketplaces facilitate efficient searches
and transactions through services such as buyer/
supplier and product/services repositories and
transactions such as procurement and asset disposal.
In addition to their market-making function, e-marketplaces
also offer integration services such as
supply chain and ERP integration. The advantages
of e-marketplaces include reduced buyer and
seller search costs, increased personalization and
customization, and price discovery, among various
others. The most important characteristics of
e-marketplaces are facilitating inter-organizational
processes and leveraging network externalities.
Integration of e-marketplace products
and services with the customers involves
modeling, automating, and integrating business
processes and trading relationships. Integration
of information refers to sharing of information
among different stakeholders of the e-marketplace
such as buyers and suppliers. This includes data on
inventory, demand, capacity plans, production
schedules, promotion plans, shipment schedules,
joint design and execution plans for product
Table 1: E-Marketplace Model (Source: Infosys Research)

Proposed Model | E-Marketplace Model
Business Value of IT in a value chain/network (BVITN) | E-marketplace profitability, critical mass of transactions
Coordination benefits (CBenefit) | Improved coordination through enhanced price discovery
Communication benefits (CoBenefit) | Improved communication through efficient search and partner identification
Integration benefits (IBenefit) | Benefits derived from integration of e-marketplace products and services with those of its partners, such as integrated business processes and trading relationships
Coordination costs (CCost) | Price discovery costs arising from participation in the e-marketplace
Communication costs (CoCost) | Costs incurred to conduct search and partner identification
Integration costs (ICost) | Costs involved in partner integration
introduction, demand forecasting, replenishment,
and workflow coordination that refers to
streamlined and automated workflow activities
between various supply chain partners.
The proposed methodology has been
applied to e-marketplaces to illustrate its
applicability in a networked scenario (Table 1).
CONCLUSION
From a managerial perspective, it is important
to understand how investment in IT affects
the bottom line. While different measurement
approaches may result in different measured
value of IT, the network theory is an attempt to
incorporate the complexities faced by today's
organizations. The variables CBenefit, CoBenefit,
IBenefit, CCost, CoCost and ICost would
possess varying strengths in impacting
BVITN. In certain all-encompassing scenarios,
all the variables act as significant determinants.
This may not be the case with EDI investments
where certain variables (CoBenefit, IBenefit,
CoCost and ICost) may play a more important
role in influencing BVITN than the others. The
methodology proposed by the network theory,
therefore, is designed to be flexible to lend itself
to different degrees of the organizational value
chain and to capture the complexities involved in
IT investments in organizational value chains.
REFERENCES
1. Information Technology and Business
Value: An Analytic and Empirical
Investigation, A. Barua, C. Kriebel
and T. Mukhopadhyay, Information
Systems Research, 1995, 6:1, pp. 3-23
2. Information Technology and Internal
Firm Organization: An Exploratory
Analysis, L. Hitt and E. Brynjolfsson,
Journal of Management Information
Systems, 1997, 14(2), pp. 81-101
3. Electronic Markets and Electronic
Hierarchies, T. W. Malone, J. Yates,
R. I. Benjamin, Communications of the
ACM, 1987, 30:6, pp. 484-497
4. Systems Competition and Network
Effects, M. L. Katz and C. Shapiro,
The Journal of Economic Perspectives,
1994, 2, pp. 93-115
5. Information Rules: A Strategic Guide
to the Network Economy, C. Shapiro and
H. R. Varian, Harvard Business School
Press: Boston, 1999
6. Information Links and Electronic
Marketplaces: The Role of
Interorganizational Systems in Vertical
Markets, J. Y. Bakos, Journal of Management
Information Systems, 1991, 8:2, pp. 31-52
7. Strategic Control in the Extended
Enterprise, B. R. Konsynski, IBM
Systems Journal, 1993, 32:1, pp. 111-142
8. Competition and Cooperation in
Information Systems Innovation,
E. Clemons and M. Knez, Information and
Management, 1988, 14:1, pp. 25-35
9. Information Technology and Industrial
Cooperation: The Changing Economics
of Coordination and Ownership, E. Clemons
and M. Row, Journal of Management
Information Systems, 1992, 9:2, pp. 9-28
10. Strategic networks, R. Gulati, N. Nohria,
A. Zaheer, Strategic Management Journal,
2000, 21:3, pp. 203-215
11. Integration in Electronic Exchange
Environments, G. E. Truman, Journal
of Management Information Systems,
2000, 17:1, pp. 209-245
12. Uses and Consequences of Electronic
Markets: An Empirical Investigation
in the Aircraft Parts Industry, V.
Choudhury, K. S. Hartzel, B. R. Konsynski,
MIS Quarterly, 1998, 22:4, pp. 471-508.
Can IT Investments
be Optimized in the Insurance
Industry through Portfolio
Prioritization Approach?
By Sanjay Mohan and Siva Nandiwada
Insurers are continuously looking to optimize
IT investments to deliver business value and
stay ahead of competition
Optimizing IT investments has been a topic
of discussion for over a decade across all
industries. It is becoming increasingly
important in the insurance industry because
legacy systems are still the industry's
technology foundation and because package
applications in insurance lack maturity.
Insurance systems are often aging and
inflexible; they defy architectural guidelines and
are inefficient in meeting changing business needs.
As per Tower Group's analysis, over
90% of the IT budget is spent on infrastructure,
major/minor enhancements and application
support, leaving very little outlay for planning
and executing new application development
projects [1]. As newly developed applications
move into maintenance mode, the cost of
managing existing applications (enhancements
and support) increases over time.
This creates a strong need for
innovative options for controlling the costs of
maintenance and support and for building long-lasting
applications that are agile and inexpensive to
maintain. In the absence of such strong
mechanisms, a meager year-on-year growth of 3.1%
in IT spend would not help the cause
of directing IT investments towards growth,
innovation, better customer/channel service
and regulatory compliance.
Figure 1: Growing Operational Costs and Impact on
Discretionary Spend (growing operational costs put
pressure on discretionary spend, with IT budgets
growing at only 3.1% over time)
Source: Infosys Research
SETLabs Briefings
VOL 5 NO 4
Oct - Dec 2007
48
As depicted in Fig. 1, operational costs
eat up the discretionary spend over a period of
time and do not give insurers enough levers for
growth and innovation.
In this context, the following are the three
key challenges CXOs face today around
corporate portfolio prioritization.
Managing Costs
Several key projects fall short of
target on scope, schedule, quality and cost,
causing significant conflict for resources.
Large fluctuations in budget due to a changing
environment and priorities affect the
overall IT portfolio of investments.
Managing Benets
Many times there exists a significant gap between
benefits claimed and benefits realized. There is a
lack of governance over the IT investment portfolio,
coupled with the lack of a measurement mindset.
This often results in either the absence
of business cases or the overstatement of business
benefits and understatement of costs.
Business-IT Alignment
A lack of understanding of the long-term business
and IT vision and associated needs, compounded
by technology changes, package vendor
dependence and regulatory compliance, has not
helped build stronger business-IT alignment
in the insurance industry. This has resulted in
multiple systems for different products with
no regard for an integrated view of customer data.
It has also resulted in longer gestation periods
(over 12 to 18 months) to introduce new products
and inflexibility in meeting distribution needs in
a timely manner.
These challenges raise several
questions on the overall process of identification,
conceptualization, prioritization/funding and
delivery of IT initiatives, and the tracking and
reporting of business benefits against these initiatives.
WHAT ARE INSURERS DOING TODAY?
A majority of insurers today prioritize projects
based on a bottom-up approach driven by cost-benefit
methodologies, with little alignment to
overall business objectives. These processes
ignore key factors such as organizational risk,
future vision, flexibility and an overall focus on IT
asset management.
Many times these investments are not
stopped at the right stage in spite of projects
missing intermediate milestones. This could
be because of a lack of well-defined gating criteria
and of governance for careful evaluation of projects
on an ongoing basis.
Often it is felt that implementation of a
robust tool would be the silver bullet for
these challenges, when the real issue to be
addressed is the lack of awareness and
knowledge among senior management of
the need for robust portfolio management
processes.
HOW DO WE KNOW IF WE HAVE
A PROBLEM?
If insurers don't track the following metrics, or if
the metrics show signs of yellow or red, then it is
time to re-examine the current portfolio management
processes to see if they are optimized:
Lack of visibility on the status of the key
projects at the CIO level
Benefits (PBP, ROI) realized vs. projected
% of IT spend on discretionary projects
% of projects delivered on time and
budget
% of projects executed in line with the
organizational strategy/goals.
% of projects having technology solution
in line with the enterprise architecture
% of un-utilized IT resources
% of projects submitted vs approved
Number of IT projects and Average spend
per project.
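As an illustration of how such metrics could be tracked, the sketch below classifies each one against "yellow" and "red" bands. The metric values and thresholds are invented for illustration; each insurer would calibrate its own bands.

```python
# Sketch of a "traffic-light" check over portfolio health metrics like those
# listed above. Thresholds and values are hypothetical illustrations.
def status(value: float, yellow: float, red: float,
           higher_is_better: bool = True) -> str:
    """Classify a metric as green/yellow/red against hypothetical bands."""
    if not higher_is_better:
        # Flip the scale so "lower is better" metrics reuse the same logic
        value, yellow, red = -value, -yellow, -red
    if value <= red:
        return "red"
    if value <= yellow:
        return "yellow"
    return "green"

# Example metrics: (value, yellow band, red band, higher_is_better)
portfolio = {
    "% projects on time and budget": (62, 75, 50, True),
    "% projects aligned to strategy": (88, 80, 60, True),
    "% unutilized IT resources":      (18, 10, 20, False),
    "ROI realized vs projected (%)":  (71, 85, 60, True),
}

for name, (value, yellow, red, hib) in portfolio.items():
    print(f"{name}: {status(value, yellow, red, hib)}")
```

A dashboard of this kind directly supports the visibility at the CIO level that the first metric in the list calls for.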
A careful evaluation of the root causes
behind these metrics can attribute them, directly or
indirectly, to a lack of competencies in people
and of well-defined processes and systems around
portfolio prioritization.
In the recent past, organizations
have increasingly sensed the need for a
more comprehensive, top-down and
integrated approach to IT initiatives. While
there could be several alternative ways to
address the problem, we suggest a portfolio
approach to IT investments that best
addresses the issue.
PORTFOLIO APPROACH TO IT
GOVERNANCE AND BEST PRACTICES
We suggest a portfolio approach to IT governance
consisting of clearly defined principles of IT
governance, a well-defined stage-gate process
and a prioritization process that is linked to
returns on investment. This approach needs
to be tightly linked with existing project
management processes under a strong change
management process to ensure effective
implementation.
There are broadly five areas of focus
for improving the prioritization process
[Fig. 2].
Principles of Portfolio Management
One of the key success factors for a
successful portfolio management process is to
define the governing principles. These are the
overarching principles, derived from the overall
organizational objectives, that provide overall
guidance for the portfolio management process.
They are crucial considering the
amount of subjectivity involved in the decision-making
process while choosing the right
investment priorities. Guiding principles also
help in laying down the foundational logic for
governance.
Some examples of such principles include:
All projects would need to adhere to the
overall corporate prioritization process,
and any exceptions would be approved
only by the CEO
All funding decisions would be made
based on objective criteria supported by
data
Financial hurdles would be consistently
applied across all projects to ensure
optimal gating
The business department head would be the
owner/sponsor for each project envisaged.
Portfolio Governance Process
Portfolio governance process needs to be derived
from the overall organizational objectives.
Broadly, the objectives of a majority of
insurers would be around:
Figure 2: Portfolio Approach to IT Governance
(focus areas: Principles of Portfolio Management;
Portfolio Governance Process; Stage-Gate Process;
Corporate Portfolio Prioritization Process; Linkage to
Project Management Processes; Phased Approach to
Prioritization)
Source: Infosys Research
1. Operational cost reduction through
productivity improvements and
innovation.
2. Growing the business through new
products, services, cross-sell, new markets
and innovation.
3. Excellence in customer and distribution
service.
4. Regulatory compliance and managing risks.
Based on these identified objectives,
organizations would need to identify the
portfolio of investments for each of the above
categories. Alternative models of investment
portfolios are shown in Figure 3.
Insurers must note that it is important
to define each of the portfolios with clear
examples of projects from the organization's
perspective. Insurers need to identify the
funding guidelines for each of the portfolios
and ensure that the governance mechanism for the
prioritization is clearly articulated. There are
four key considerations for the governance of
such a process.
1. One of the executive officers should own
the governance mechanism.
2. The investment-mix for portfolios should be
in line with the overall corporate objectives.
3. The investment-mix should be moved
over time towards the ideal goals.
As an example, if the costs of running the
business are around 80% of the IT budget,
when and how can that be reduced to 60
or 70%?
4. A well-defined process should exist for the
review, monitoring and control of projects
through the process.
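Consideration 3 above amounts to comparing the current investment-mix against a target mix and steering the gap down over time. A minimal sketch of such a review follows; the category names and percentages are illustrative assumptions.

```python
# Sketch of the investment-mix review in consideration 3: compare the
# current portfolio mix against a target mix and report the gaps.
# Category names and percentages are illustrative assumptions.
current = {"run": 80, "grow": 12, "transform": 5, "comply": 3}
target  = {"run": 65, "grow": 20, "transform": 10, "comply": 5}

def mix_gaps(current: dict, target: dict) -> dict:
    """Percentage-point shift needed per category (positive = increase)."""
    return {k: target[k] - current[k] for k in target}

for category, gap in mix_gaps(current, target).items():
    direction = "increase" if gap > 0 else "reduce"
    print(f"{category}: {direction} by {abs(gap)} percentage points")
```

Reviewing these gaps periodically gives the governance body an objective basis for the funding decisions called for in the guiding principles.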
Stage Gate Process
One of the key pre-requisites for strong
governance is a well-defined stage-gate process.
This ensures that key projects that need to
move through the entire life cycle, from ideation
to implementation, progress as planned based
on the key criteria identified.
Similarly, ideas that need to be stopped during
the cycle due to non-viability would be blocked
or controlled from further investment.
Insurers would need to choose the right
number and type of stage gates. Examples of
stage gates include:
Idea Definition / Discovery Completion
Idea Elaboration / Business Case
Justification Completion
Solution Definition Completion
Solution Implementation
Post-implementation Support
Completion.
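One illustrative way to model such a stage-gate pipeline is as an ordered list of gates, each with an objective pass/fail criterion. The gate names follow the list above; the criteria, field names and thresholds below are hypothetical.

```python
# Minimal sketch of the stage gates listed above as a gated pipeline.
# Gate names come from the article; the criteria are hypothetical.
GATES = [
    "Idea Definition / Discovery Completion",
    "Idea Elaboration / Business Case Justification Completion",
    "Solution Definition Completion",
    "Solution Implementation",
    "Post-implementation Support Completion",
]

def evaluate(project: dict, criteria: dict) -> str:
    """Advance a project through the gates until a criterion fails.
    Returns the first gate that blocks the project, or 'approved'."""
    for gate in GATES:
        check = criteria.get(gate)
        if check is not None and not check(project):
            return f"blocked at: {gate}"
    return "approved"

# Hypothetical gating criteria: an objective financial hurdle, per the
# guiding principle that hurdles be applied consistently, plus an
# enterprise-architecture fit check at solution definition.
criteria = {
    "Idea Elaboration / Business Case Justification Completion":
        lambda p: p["projected_roi_pct"] >= 15,
    "Solution Definition Completion":
        lambda p: p["fits_enterprise_architecture"],
}

print(evaluate({"projected_roi_pct": 22,
                "fits_enterprise_architecture": True}, criteria))  # approved
print(evaluate({"projected_roi_pct": 9,
                "fits_enterprise_architecture": True}, criteria))
```

Encoding the gates this way makes it straightforward to attach the owner, inputs, outputs and metrics that each gate requires, and to block non-viable ideas from further investment.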
Each of the stage gates would need to have
well-defined objectives, an owner, inputs, outputs,
a process, tools/systems and measurement
criteria/metrics. Detailed definition of
each of the identified stages would help in
Figure 3: Investment Models. Alternative models for
investment portfolios of the available IT budget:
Model 1 (Run, Grow, Innovate); Model 2 (Run, Grow,
Transform, Comply); Model 3 (Operate, Productivity,
Grow, Transform, Comply)
Source: Infosys Research
implementation of the stage-gate process as well
as periodic review and control as a part of the
ongoing governance.
Linkage to Project Management Process
There is a need to link the overall portfolio
management processes with the project
management processes such as project planning,
review, tracking and control. Many of the inputs/
outputs from the project planning processes
would need to feed into the overall portfolio
management processes.
As an example, the outputs of the
solution definition phase (deliverables, key
metrics, etc.) under the project management
process would act as the key inputs for the
stage gate in the portfolio management process.
Similarly, any systems used for capturing the
data for the project management processes
would need to be well integrated with the overall
systems/processes for the portfolio
management.
It is critical that the overall
principles, stage-gate process, governance
mechanisms and linkages with existing project
management processes be
socialized with the key stakeholders in the
organization. Considering the impact this
would have on the enterprise-wide project
management processes and the overall culture of the
organization, the right change management practices
are to be followed to ensure the success of this
initiative.
Phased Approach to Prioritization
Since this could be a large change management
exercise, depending on the maturity of the
organization, it would be prudent to follow a
phased approach to the prioritization process.
The prioritization process could be applied
initially for the ideation phase, followed by
denition and implementation. Alternatively,
the prioritization could be applied in a few
business units before implementing across the
organization.
As is evident, identification of the right
tools/systems is only a part of the solution.
There is a need to define the overall
framework, and strong governance and
change management processes are to be put in
place, to develop and execute a robust portfolio
prioritization process. Organizations need to
develop processes and people competencies
as a pre-requisite to designing a system/tool
for implementation.
CONCLUSION
In summary, the portfolio approach suggests the
need to define key principles, create stage-gates,
ensure strong governance through executive
sponsorship, link to the existing project
management processes and adopt a phased
approach to implementation. Overall change
needs to be well managed with buy-in from all
key stakeholders.
Since the implementation of this change
could, in itself, be a significantly large project,
it could serve as the pilot for the implementation of
some of the processes that would be laid out in
the overall portfolio management processes.
Key benefits from the implementation of this approach would include reduced variability and improved predictability in the costs and benefits of projects at the corporate level, in line with the overall objectives of the organization.
REFERENCES
1. US Insurance IT Spending: Technology to Enable Business Success, Tower Group Research, March 2005
2. U.S. IT Spending and Staffing Survey, Barbara Gomolski, Gartner Research, November 2005
3. IT Spending Lags Behind Revenue Growth in Most Industries, Barbara Gomolski and Jed Rubin, Gartner Research, August 2006
4. US L/H Insurance IT Spending, 2005-2010, Celent, February 2005
5. Business Needs Challenge Insurance Legacy Systems, Gartner Research, April 2003
6. Drivers and Trends for IT in Insurance, Forrester Research, June 2005.
Co-Creating Experiences of
Value with Customers
Building Experience Co-Creation Platforms
enhances value creation and fosters innovation
By Venkat Ramaswamy
In an era of ubiquitous connectivity, active, informed, networked, and empowered customers are challenging the traditional firm-centric process of value creation that has served companies so well over the past hundred years.
Prahalad and Ramaswamy (2004) argue that we are on the cusp of a profound shift in the way value is created, from a firm-centric process to a co-creation process carried out jointly by the customer and the company. In a co-creation process, customer value emerges from the space of interactions between the customer and the company, and from the quality of the customer experiences associated with those interactions and their outcomes: outcomes that simultaneously generate business value for the companies that facilitate them effectively.
The traditional view of value creation
and its dominant logic are shown in Figure 1
overleaf.
This dominant logic of the modern
corporation that has served us so well over the
past hundred years is now being challenged
as never before. For starters, individuals today
are far more informed through the Internet and
connected with each other than ever before.
The mere fact that there are over 2 billion cellphones, not to mention countless ways of voicing and self-expression, and of communicating with other consumers (e.g., e-mail, instant messaging, SMS, blogs with RSS, etc.), enables individuals to learn about others' experiences with products and services much more rapidly. Firms were once assumed to have more information and knowledge than individuals.
Individuals now have a newfound freedom that liberates them from being targets to be had by institutions.
have the motivation and the means to take
more control of their experiences, companies
can no longer act autonomously, designing
products, developing production processes,
crafting marketing messages, and controlling
sales channels with little or no interference from
consumers. Customers now seek to exercise their influence in every part of the business system. For instance, there are currently over 60 million blogs, over one-third of which were created in the first quarter of 2005. Further, over half of
the bloggers are adults. Blogs fundamentally challenge the nature of the relationship between the institution (company) and the individual (customer). Armed with new tools and dissatisfied with available choices, individuals (customers) want to interact with firms, as well as with (customer) communities, to co-create value. The use of interaction as a basis for value creation is at the crux of the emerging opportunity space.
THE NEW OPPORTUNITY SPACE:
ENABLING EXPERIENCE ENVIRONMENTS
Consider the case of Apple. Through its iTunes service, customers have downloaded over three billion songs of their choice, after listening to a sample of the song, thereby escaping the tyranny of the CD (although individuals can still download an entire CD if they so wish). Individuals, including musicians, can also publish and share their own playlists using the iMix tool that Apple provides. The community of music lovers is engaged in helping individuals discover new music. The downloaded music can be transferred to the Apple iPod; over 5 million iPods were sold in the first quarter of 2005 alone. The success of the iPod lies in the fact
that Apple understood and focused on the user
experience that results from user interaction
with the iPod. Working backwards from the
user experience, Apple designed the interface
right down to the scroll wheel to facilitate a
compelling experience environment of accessing
and listening to music. Besides the physically cool design, this meant paying a lot of attention to the underlying software; according to Steve Jobs, the software is the user experience. In other words, the software enables individuals to scroll through their library quickly, construct playlists on the go, or simply play user-defined playlists. Moreover, at the Apple stores (which as of early 2005 had over 50 million customer visits, over $1 billion in retail sales, and over $40 million in profit), individuals can learn how to actually get music into the iPod (from their own music collections), download songs from iTunes, and get answers to simple questions such as how do I get music into my iPod. The Apple store is a learning environment, besides being a sales environment. Individuals' interactions with knowledgeable people at the Genius Bar (including consumers from Apple user groups) support the consumer's learning experience.
Figure 1: Traditional View of Value Creation. Value = Fn (Products & Services): firm-centric products and services, flowing from suppliers through the firm and its marketing channels (supported by CRM, ERP and SCM systems) to passive customers, have been the basis for value creation. The dominant logic of value creation: (1) value creation is associated with products and services; (2) firms have more information and knowledge than customers; (3) the firm unilaterally defines and creates value for the customer through its activities; (4) value is exchanged with customers, who represent passive demand for the firm's offerings. Source: Center for Experience Co-Creation, Ross School of Business, University of Michigan

Thus, as shown in Figure 2, value to me as an individual consumer lies in the human experience associated with the outcomes of processes of interaction with Apple's iPod, the process of downloading songs from iTunes, and/or
interacting with Apple employees as well as
the user community. As every experience is
contextual, the challenge for companies like Apple
is to build the capabilities that allow individual
customers to interact with customer communities,
as well as an extended network of companies,
to co-create their own experiences of value.
At the heart of the new value creation system,
are experience environments that go beyond
product and service offerings.
An experience environment (which can be anywhere in the business system) is centered on individual-centric interactions with the company's products, processes, and people, as well as with customer communities. These interactions
are where the customer experience is, and
what the company must deeply understand to
design the technology platforms and information
infrastructure to support compelling experience
environments. This is a profound transformation.
It challenges the very heart of managerial capitalism as we have known it over the past 100 years: that the role of the firm is to organize its activities around creating goods and services for customers. No longer can any company unilaterally create value for the customer through its marketed offerings. Rather, companies must learn to engage customers in a process of co-creating value, based on the customer's view of value and not the company's.
Consider a patient with a cardiac pacemaker that monitors and manages his heart rhythm and performance, a valuable benefit. However, this patient's comfort level would increase substantially if someone or something monitored his heart remotely, and alerted him and his doctor simultaneously to any deviations from a predetermined bandwidth of performance relevant to his condition. Doctor and patient together could decide on the remedial response. This scenario gets more complicated when the patient travels far from home. A mere alert will not suffice. The patient
needs directions to the best nearby hospital, and the attending physician needs access to the patient's medical history. How do the two doctors, his primary care provider back home and the physician on call at the out-of-town hospital, coordinate their diagnosis and treatment? Should he call his spouse? How can he recognize and assess the risks and develop an approach to compliance and cooperation with these medical professionals? The doctors, the facilities and services, and the pacemaker are all part of a network centered on the patient and his well-being, as shown in Figure 3.

Figure 2: New Value Creation System. Value = co-created experience of interactions and outcomes: in an experience environment, customers interact with the nodal firm's products, processes, and employees; consumer communities give customers access to competence, and a network of firms gives the nodal firm access to competence. Source: Center for Experience Co-Creation, Ross School of Business, University of Michigan

Companies are already installing
elements of these network capabilities. Consider Medtronic, Inc., a world leader in cardiac rhythm management that seeks to offer lifelong solutions for patients with chronic heart disease. It has developed a system of virtual office visits that enables physicians to check patients' implanted cardiac devices via the Internet. With the Medtronic CareLink Monitor, the patient can collect data by holding a small antenna over his implanted device. The data is captured by the antenna, downloaded by the monitor, and transmitted over a standard telephone line to the Medtronic CareLink Network. On a dedicated secure Web site, physicians can review patient data, and patients can check on their own conditions (but no one else's) and grant access to family members or other caregivers.
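The flow just described, from antenna capture through the monitor and telephone line to access-controlled review on the Web site, can be sketched in miniature. This is a hypothetical illustration of the described behavior only, not Medtronic's actual design; every class, method, and identifier here is invented for the sketch.

```python
# Hypothetical sketch of a CareLink-style flow: readings uploaded from a home
# monitor, with patients seeing only their own data unless access is granted.
# All names are invented for illustration; this is not Medtronic's design.
from dataclasses import dataclass, field


@dataclass
class Reading:
    patient_id: str
    metric: str      # e.g., "heart_rate"
    value: float


@dataclass
class CareNetwork:
    readings: dict = field(default_factory=dict)  # patient_id -> [Reading]
    grants: dict = field(default_factory=dict)    # patient_id -> {user, ...}

    def upload(self, reading: Reading) -> None:
        """The monitor transmits a captured reading to the network."""
        self.readings.setdefault(reading.patient_id, []).append(reading)

    def grant_access(self, patient_id: str, user: str) -> None:
        """A patient grants a caregiver or family member access to his data."""
        self.grants.setdefault(patient_id, set()).add(user)

    def review(self, user: str, patient_id: str) -> list:
        """Patients see only their own data; anyone else needs a grant."""
        if user == patient_id or user in self.grants.get(patient_id, set()):
            return self.readings.get(patient_id, [])
        raise PermissionError(f"{user} may not view {patient_id}'s data")


net = CareNetwork()
net.upload(Reading("pat-1", "heart_rate", 72.0))
net.grant_access("pat-1", "dr-smith")
assert len(net.review("pat-1", "pat-1")) == 1     # patient's own data
assert len(net.review("dr-smith", "pat-1")) == 1  # granted physician
```

The point of the sketch is the access rule in review: visibility follows the patient's explicit grants rather than the institution's convenience, mirroring the "but no one else's" constraint described above.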
Medtronic's CareLink system goes beyond the cardiac device itself and unleashes opportunities for an expanding range of value creation activities. For example, each person's heart responds to stimulation slightly differently, and the response can change over time. In the future, doctors will be able to respond to such changes by adjusting the patient's pacemaker remotely. Furthermore, Medtronic's technology platform can support a wide range of devices and remote monitoring/diagnostic systems, potentially used for monitoring blood sugar readings, brain activity, blood pressure, and other important physiological measures.
I believe that this pacemaker story is a prototype of the emerging process of value creation. The patient's interactions with the doctor, the family, and the ER at the out-of-town hospital affect the quality of the patient's overall experience. The meaning of value itself changes from the physical product, the pacemaker, to the co-creation experience of a specific patient. This co-creation experience originates in the patient's interactions. And the quality of this experience determines the value that is created. The goal then is to build the underlying technology platforms and the infrastructure to support a portfolio of experience environments that arise from different experience scenarios, as shown in Figure 4.
The value of patient outcomes differs
according to each scenario (not to mention by
patient as well). Consider the context of a crisis
out of town. It is important for Medtronic to
understand deeply the nature of the interaction
events at a granular level. Each experience
scenario in effect can activate multiple experience
environments. The challenge is to identify a core set of experience environments, and to build technology platforms and event-centric information infrastructures to support a portfolio of experience environments.

Figure 3: Doctor-Patient Network. The patient sits at the center of a network comprising the primary doctor, medical specialists, a scan and diagnostics clinic, the doctor on call at the out-of-town hospital, emergency services, the pacemaker manufacturer, and patients with a similar condition. Source: Center for Experience Co-Creation, Ross School of Business, University of Michigan
Companies must learn to build platforms that enable experience co-creation processes across the portfolio of experience environments (i.e., experience co-creation platforms). In this view, IT platforms become a strategic enabler of experience co-creation processes. However, this view forces companies not to focus on product specifications and features that hardwire the process of customer experience. (Product quality is still critically important; it is a necessary but not a sufficient condition for co-creating experiences.) Rather, experience co-creation platforms are about creating the preconditions to accommodate a wide variation in individualized experiences that are contextual and granular. A much deeper understanding of the experience scenarios of consumers is called for. This depth of understanding is impossible without active customer involvement and dialogue. Second, the experience environments
are not just about hardware; the flexibility for a variety of experiences derives from sophisticated use of software. A deep and imaginative understanding of the enabling technologies is a must.

Figure 4: Patient Experience Scenarios. Four scenarios are mapped onto the patient's network (primary doctor, medical specialists, scan and diagnostics clinic, doctor on call at the out-of-town hospital, emergency services, patients with a similar condition, and the pacemaker): a crisis out of town; "I'm going on a trip, but I'd like to know whom to turn to once I'm there"; "I'm a little uneasy and would like to be checked out remotely by my primary doctor"; and "What do you do when you have a similar problem?" Source: Center for Experience Co-Creation, Ross School of Business, University of Michigan

Third, individuals engage in an experience
co-creation process when they interface with an experience environment, interact in the context of an event, and assimilate experience outcomes. These call for new levels of transparency and access from the customer's perspective, as well as the company's perspective.
THE EXPERIENCE CO-CREATION PROCESS
Consider the environment of tracking the building of my custom houseboat with Sumerset, one of the largest houseboat manufacturers in the world, in Kentucky, USA. This is how a typical customer interaction might work. I, as a customer, can access the manufacturing plant and track the progress of my boat. Let us assume that I get a new idea, such as moving the dining table by 4 ft. The engineers can discuss with me the pros and cons of doing that. We can make joint decisions about the changes through the principle of informed choice, recognizing the cost, quality, and safety implications of the choices made. Of course, Sumerset can say that, beyond a certain level of risk, it will not participate in the process. Sumerset can also introduce me to the community of houseboat users, and I can engage in a dialogue with them. I can evaluate my choices as I go along, as well as learn about possible uses of a houseboat that I could not have imagined.
The co-creation process gives the customer a greater level of knowledge and expertise about houseboats, and with it a greater degree of self-esteem. Dialoguing with Sumerset's employees and tracking the progress of the boat along the factory floor create a sense of emotional bonding with the product and the company (an outcome of value to the customer). Sumerset's transparency and willingness to dialogue enhance the customer's readiness to trust the company and believe in the quality of its product. Access to the community of Sumerset customers increases the consumer's enjoyment of the houseboat. Thus, the basis of value for the customer shifts from a physical product (with or without ancillary services) to the total co-creation experience, which includes co-designing as well as all the other interactions among the consumer, the company, and the larger community of houseboaters. Thus, the co-created experience outcomes depend on the nature and level of access to the company's employees and extended community, as well as the level of transparency of all parties.
As the Sumerset example illustrates, individuals must be able to co-construct unique value for themselves through customer-network interactions that facilitate contextualized experience outcomes through dialogue, access, risk management, and transparency (DART for short), the basic building blocks of co-creation:
Dialogue encourages not just knowledge sharing but, even more importantly, shared understanding between companies and customers. It also gives individuals more opportunity to interject their view of outcomes of value into the value creation process.

Access to knowledge, tools, and expertise helps individuals construct their own experience outcomes. Access also challenges the notion that ownership is the only way for the consumer to experience value. By focusing on access to experiences at multiple points of interaction, as opposed to simply ownership of products, companies can broaden their view of business opportunities.

Risk management assumes that if consumers become co-creators of value with companies, they will demand more information about the potential risks of goods and services; but they may also have to bear more responsibility for dealing with those risks.

Transparency of information in interaction processes is necessary for individuals to participate effectively in co-creation mode and to engender trust between institutions and individuals.

Figure 5: Patient Core Experience Environment, a portfolio of core experience environments. The patient interacts with a core set of experience environments (primary doctor, remote doctor, diagnostic clinic, emergency services), each backed by Medtronic medical support or the Medtronic emergency call center, all resting on technology platforms and an event-centric information infrastructure. Source: Center for Experience Co-Creation, Ross School of Business, University of Michigan
DART must be enabled by technical and social infrastructures and platforms that allow consumers to co-construct experiences they value and Sumerset to generate business value. Sumerset and its suppliers can learn more about consumers and get new ideas for design, engineering, and manufacturing. Sumerset's employees, from design engineers to carpenters, can more deeply understand consumer aspirations, desires, motivations, behaviors, and agreeable trade-offs regarding features and functions. Through continuous dialogue, employees can relate their efforts to individual consumers. The company can reduce uncertainty in capital commitments and even spot and eliminate sources of environmental risk.
Experience environments are characterized by the ability to accommodate a wide range of context-specific experiences of heterogeneous individuals. These experiences, by definition, cannot be identified in detail a priori. The focus in innovating experience environments, therefore, is different from the traditional focus on innovating products and services. We can develop a broad specification based on our discussion so far. At a minimum, experience environments must accommodate a heterogeneous group of consumers, from the very sophisticated and active to the very unsophisticated and passive; must engage the customer emotionally and intellectually; must accommodate the involvement of customer communities; must explicitly recognize the social and the technological aspects of experience; and must enable an integrated experience for consumers through the creative integration of products, employees, communities, and the various customer interfaces in interactions, including multiple channels and modalities that facilitate access.
This specification for experience environments may appear daunting, especially when it is juxtaposed with the current preoccupations of managers focused on improving the innovative capacity and efficiency of their organizations through reduced cycle times for product development, process improvements, and increased product variety. In contrast, innovating experience environments means not only focusing on experiences but also allowing customers to engage in a co-creation process of co-constructing those experiences through interactions in the experience environment.
EXPERIENCE CO-CREATION INSIDE THE
COMPANY AND ITS NETWORK
The same DART building blocks also facilitate
the interaction from the companys perspective.
One of the major challenges that companies will face is that, in some experience scenarios, the resources of one or more entities in the network that the company does not own may have a large role in co-shaping the customer experience. For example, in the out-of-town experience scenario, the patient's interactions with the emergency room (ER) at the out-of-town hospital affect the quality of the patient's overall experience. What this means is that some companies, like Medtronic, may have to become nodal companies in the network that provide the intellectual leadership, and the protocols and disciplines, for governing partners and people in the network. In the world of experience co-creation, every employee who has the ability to directly influence the consumer experience, and facilitate the co-creation of value, must be regarded as a line manager.
build information infrastructures that allow
line managers to experience the business as
consumers do, thereby achieving a new level
of personal effectiveness for human enablers
of experiences in the network. Without event-
centric information infrastructures in the
network, it is practically impossible to govern
the quality of experiences.
Now imagine yourself as a manager in
the ER. Managing the co-creation experience
demands a new capability: the ability for
managers to relate to consumer (patient)
interactions with the company (ER unit in the
hospital). This requires that managers have
the capacity to experience and understand the
business as customers do, and not merely as an abstraction of numbers and charts. Consider a typical emergency room in a busy downtown hospital. It's not uncommon for about 40 percent of a hospital's patients to enter the system through the emergency room. From a managerial perspective, it's critical to be very efficient in moving patients through the ER as fast as possible without compromising the level of care. But in an emergency room, demand
of care. But in an emergency room, demand
is random and unpredictable. There are no
predetermined sequences of procedures that can
be followed. The need depends on the nature of
the problem; a gunshot wound, a heart attack,
and a broken arm are all different. But they all
require the services of multiple doctors, medical
staff, and technicians, as well as the physical
movement of the patient through various stages
of examination, testing, and treatment.
Real-life emergency rooms look as crazy
and chaotic as they do in TV medical dramas.
The progress of the patient, the status of various
tests, and the availability of rooms are tracked on
bits of paper, in PCs, and on whiteboards posted
in the clinic. Often the patient is too sick to talk,
and friends or relatives anxiously wait outside,
demanding information and answers from harried
hospital employees.
As a manager, how do you know what is happening at any given point in time? Historical analysis of two-day-old data will not help you solve problems as they are happening, much less improve the quality of the current patient experience. Improving patient care on a real-time basis requires managers to answer specific questions: Are patients waiting too long at a given station? Do the doctors, nurses, and technicians have access to the resources they need? Are ER capacities being fully utilized? Is there a big queue for X-ray?
Further, how do managers at different levels, from the chief nurse and the head of the emergency services unit to a VP of the hospital, experience the business as patients, patients' families, technicians, doctors, and interns experience it? Is the patient waiting for more than 45 minutes for service? How do we begin to understand the patient's co-creation experience with the hospital's experience network?
And how do you as a line manager respond in real time? For instance, as a head nurse, how do you respond when two night-shift nurses call in sick just as a massive fire across town fills the ER with victims of smoke inhalation and burns? And suppose you are the head of the emergency unit: can you provide the required level of staffing without compromising the quality of care for other existing patients in the hospital?
Meanwhile, a VP in the hospital, a couple of steps removed from the ER, has some questions of his own. What is the average waiting time for a patient to receive care? Where are the bottlenecks? On arrival and registration? Waiting for a doctor, for a room, for tests? And who is paying for the care? Are insurance reimbursement rates rising or falling? Is the ER a profitable center for the hospital, or is it incurring chronic losses? Why?
The goal of the information system is not to substitute for the nurse, technician, or doctor, but to provide them with a system that aids the amplification of weak signals based on the real-time experiences of consumers (patients). This means the construction of information in the context of a specific co-creation experience. A manager's personal effectiveness in such a situation depends on his capacity to generate hypotheses around the problems encountered by a patient and to develop actions in real time.
But how can managers of large and mid-
sized companies achieve this kind of visceral
understanding of the business as experienced
by customers? How can line managers feel
and share the consumers concerns, desires,
and aspirations? How can they make their experience of the business, on a real-time, continuous basis, approximate the experience of consumers?
Building Event-centric Information Infrastructure
An event-centric information system must start with events (a patient arrival, a registration process, a visit to an X-ray room) as well as their metrics (time spent in the waiting room, time required to make a diagnosis, and overall length of stay). This event-centric information must be available to the line manager in real time and within its context of space and time. For example: patients are waiting forty-five minutes for X-rays on Wednesday mornings, or bed assignments are taking longer than thirty minutes.
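An event record of this shape, plus one minimal real-time rule over it, might look like the following sketch. The event kinds, station names, and the forty-five-minute threshold are illustrative assumptions drawn from the examples above, not a reference design.

```python
# Illustrative sketch of event-centric records and a simple waiting-time rule.
# Event kinds, stations, and thresholds are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Event:
    patient_id: str
    kind: str         # e.g., "arrival", "registration", "xray_wait"
    station: str      # where the event occurred, e.g., "xray_room_1"
    start_min: float  # minutes since midnight, for simplicity
    end_min: float

    @property
    def duration_min(self) -> float:
        return self.end_min - self.start_min


def waiting_alerts(events, kind, threshold_min):
    """Flag events of one kind whose duration exceeds a threshold, in the
    spirit of 'patients are waiting forty-five minutes for X-rays'."""
    return [e for e in events
            if e.kind == kind and e.duration_min > threshold_min]


log = [
    Event("pat-1", "xray_wait", "xray_room_1", 600, 655),    # 55 minutes
    Event("pat-2", "xray_wait", "xray_room_1", 610, 640),    # 30 minutes
    Event("pat-3", "registration", "front_desk", 605, 625),  # 20 minutes
]
late = waiting_alerts(log, "xray_wait", 45)
assert [e.patient_id for e in late] == ["pat-1"]
```

Because each record keeps its context (patient, kind, station, and time), the same log can be sliced from the perspective of a station, a patient, or a manager's question, which is what an event-centric system described above must support.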
As a manager, I should be able to construct the experience from different perspectives, such as that of an overloaded doctor, an anxious patient, or a beleaguered technician.
I should also be able to connect with others
involved in providing services within the
system and create appropriate practices by
collaborating in real time to solve new problems.
Just as it is critical for customers to be able to
interact with communities of other customers,
managers should also be able to communicate
with practice communities to co-construct
new knowledge and co-create better customer
experiences.
As a manager, I should be able to experience customer-company interactions as they are happening, and to understand and evaluate their implications. Real-time information without context has little value for me as a manager. Learning from experiences requires that the event-centric information system enable me to reconstruct events. Real-time intervention is different from the analysis of specific experiences after the fact to search for patterns, formulate hypotheses,
and codify the learning. For managers to become co-creators, they should have the capability for real-time intervention as well as after-the-fact analysis. Such capabilities must be available to every manager in the ER. That includes all the employees involved in creating a good customer (patient) experience, including doctors, nurses, lab technicians, the clerk at the registration counter, and the ward assistants who transport patients and deliver supplies.
In today's ER, do line managers have access to all of the capabilities we've suggested? Usually they do not; not for technological reasons, but because our management systems have not yet caught up with the changing locus of value creation in the various points of customer-company interaction. The question is: How do we get the managerial equivalent of embedded intelligence, as when a smart product, like the pacemaker, interacts with the consumer? Thus, companies must facilitate managerial learning and action, through co-evolution with customer experiences, at all points of company-consumer interaction.
Consider the digital hospital at the Hackensack University Medical Center in Hackensack, New Jersey.
Doctors can tap an internal website to examine X-rays from a computer anywhere inside or outside the hospital, and even control a life-size robot (a digital doc with the typical white lab coat and stethoscope) from their laptops at home. They can direct this digital doc into hospital rooms and use two-way video to discuss patients' conditions. Nurses can use a PC that rolls around on an IV-like stand and log into patient electronic records through a wireless connection, to review vital signs (temperature, heart rate, etc.) entered into the computer by a nurse's aide earlier. Nurses can pull up medication orders, check the accuracy of medications and dosage levels, and even interact with other sources of medical expertise to learn about patient therapies. They can now spend more time on care-giving and enhance the quality of the patient experience. Previously, nurses were always on the phone with the pharmacies, trying to get medication refills, or with doctors, trying to decipher their handwriting. Now the system ensures that the pharmacy gets drugs to nurses on schedule and that nurses can access almost every piece of information they need, from what insurance plan a patient is on to what tests they have had in the hospital. Patients can use TVs in their rooms to access information about their medical conditions. While Hackensack is reporting increased productivity, lower costs, and simultaneously better patient outcomes, a recent high-profile study published in the Journal of the American Medical Association in early March 2005 showed that a tech system used by the University of Pennsylvania's hospital to prescribe drugs created new ways to make errors. In one example, it scattered patient data and drug-ordering forms over so many different computer windows that it increased the likelihood of doctors ordering the wrong medications. The point here is that technology must enable better physician, nurse, and patient experiences. In other words, experience environments must be designed for the effective co-creation of experience outcomes for the various participants and stakeholders in the network.
THE NEW ROLE OF THE CIO
CIOs have a new role in building information infrastructure capabilities for the experience co-creation organization. In doing so, however, they must confront the reality of IT and the challenges in the line manager-IT interface in enabling experience environments, as shown in Figure 6.
IT infrastructure matters because it communicates
the reality of consumers and their experiences to
line managers.

[Figure 6: Line Manager-IT Interface. The figure depicts the IT interaction interface in experience environments: new demands from experience co-creation with customers (heterogeneous consumer experiences; contextual insights; network collaboration for co-creation; rapid resource reconfiguration) meet the reality of IT (building and maintaining internal systems and processes; a conventional innovation and efficiency mindset). Bridging the two, with a focus on line managers, requires managerial information infrastructure capabilities: managerially experiencing the consumer experience, managerial heterogeneity, and the ability to access, interpret, and experiment within the event-centric system.
Source: Center for Experience Co-Creation, Ross School of Business, University of Michigan]

The relationship
is like that between a topographical map and
the actual terrain. The more accurately the map
reflects the terrain, the easier it is for travelers to find
their way. The more complex the terrain, the
more detailed and accurate the map. Given the
demands of the real-time enterprise, the map must
correspond to the terrain as it is changing. Most
information systems force managers to use old
maps, since they provide only historical analysis
of data, or provide inappropriate maps that work
only with a fixed set of predetermined analytical
tools. Such systems complicate exploration
and navigation. Managers must demand
event-centric information infrastructures
that can support experience environments
and foster experience innovation. The goal of
the line manager (e.g., nurse, physician, etc.)
is to managerially experience the customer
experience in real time with zero latency.
As a caregiver, I want to experience the business
as the patient does, and act when the patient
is having a problem. Thus, for companies
to migrate towards experience co-creation,
their technology platforms and event-centric
information infrastructures must support critical
experience co-creation processes underlying the
portfolio of core experience environments as
shown in Figure 7.
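The event-centric infrastructure the article calls for can be illustrated with a minimal publish/subscribe sketch (all names here, such as EventBus, VitalSignEvent, and heart_rate_rule, are hypothetical, not from any particular product): vital-sign events flow to subscribers the moment they occur, so a caregiver-facing rule can react with near-zero latency instead of waiting for a historical batch report.

```python
from dataclasses import dataclass
from typing import Callable
import time

@dataclass
class VitalSignEvent:
    patient_id: str
    metric: str      # e.g., "heart_rate", "temperature"
    value: float
    timestamp: float

class EventBus:
    """Minimal in-process publish/subscribe bus."""
    def __init__(self):
        self._subscribers: list[Callable[[VitalSignEvent], None]] = []

    def subscribe(self, handler: Callable[[VitalSignEvent], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, event: VitalSignEvent) -> None:
        # Push each event to every subscriber as it arrives,
        # rather than accumulating data for later analysis.
        for handler in self._subscribers:
            handler(event)

alerts: list[str] = []

def heart_rate_rule(event: VitalSignEvent) -> None:
    # A caregiver-facing rule: flag an elevated heart rate immediately.
    if event.metric == "heart_rate" and event.value > 120:
        alerts.append(f"ALERT {event.patient_id}: heart rate {event.value}")

bus = EventBus()
bus.subscribe(heart_rate_rule)
bus.publish(VitalSignEvent("patient-42", "heart_rate", 135.0, time.time()))
bus.publish(VitalSignEvent("patient-42", "temperature", 37.1, time.time()))

print(alerts)  # one alert, for the elevated heart rate only
```

The design choice the sketch highlights is the one the article argues for: the rule fires inside the event flow itself, so the "map" is updated at the same moment the "terrain" changes.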
In this emerging future of IT, the
new role of the CIO is elevated to supporting
experience co-creation processes that facilitate
the company's role of co-creating unique value
with its customers. To compete successfully in
the future as it is becoming, companies must
build new strategic capital: the capacity to create
value through experience co-creation, rather
than the dominant logic of firm- and product-
centric value creation that has indeed served us
so well over the past hundred years. The latter
is becoming inadequate (witness the declining
margins, commoditization, erosion of traditional
competitive advantage in industries across
the globe) in an era of technology and market
convergence, deregulation and privatization,
globalization, and connected, informed, and
empowered customers with the motivation and
tools to actively engage and insert themselves in
the value creation process.

[Figure 7: IT-Enabled Experience Co-creation. The strategic role of IT is enabling experience co-creation processes: the global network of firms and global communication of individuals serve as a source of competence, while technology platforms and the event-centric information infrastructure support the experience co-creation processes that underlie the portfolio of core experience environments.
Source: Center for Experience Co-Creation, Ross School of Business, University of Michigan]
To be future-proof, companies must build
experience co-creation platforms and event-centric
information infrastructures that enable experience
environments, and in turn, compelling customer
experiences that generate the new revenue
enhancing opportunities. For companies to
sustain profitable growth, they must continuously
generate new knowledge of customer experiences
and identify and enable new value creation
opportunities with customers. Welcome to the
new paradigm of experience co-creation!
ABOUT THE AUTHOR
Prof. Venkat Ramaswamy is one of the two founders
of the Experience Co-Creation Partnership.
He is also a Professor of Marketing and Computer and
Information Systems at the Ross School of Business,
University of Michigan. He is the co-author of the
acclaimed book The Future of Competition: Co-
creating Unique Value with Customers (co-authored
with C. K. Prahalad).
Venkat is a globally recognized thought leader, idea
practitioner, and an eclectic scholar with wide-
ranging interests in innovation, strategy, marketing,
branding, IT, operations and the human side of the
organization. He is a prolific author of numerous
articles and a frequent speaker at conferences
around the world on topics such as innovation,
customer experience, and value creation, as well as
industry-focused topics.
Index
Apple 54-55
Business Process Management, also BPM 5,12,
15, 17-24
Business Process Management System, also
BPMS 17-24
Co-creation 53-64
Experience 53-58, 60, 62-64
Platform 53, 57, 64
Process 53, 57-59, 63
Customer Relationship Management, also CRM
5-7, 25, 27, 29, 31, 54
Delphi 36
Enterprise Digital Dashboard, also EDD 25-26,
28-33
Enterprise Resource Planning, also ERP 18, 25,
27, 29, 31, 45, 54
Google 14-16
IBM 6, 19
Integration 3, 5-16, 20, 27, 30, 43-45
Architecture 8
Business Process, also BPI 5
Costs 27, 45
Data 30
Electronic 43-44
Enterprise Application, also EAI 5, 20
Enterprise Information, also EII 30
Layer 6, 8
Legacy 3, 5-6, 8-9
Middleware 5, 8
Real Time 5, 8
Service 19, 21, 45
Service-oriented, also SOI 8, 10, 12-16
Supply Chain 44
Technologies 5
iTunes 54
Key Performance Indicator, also KPI 25-26,
29, 33
Line of Business, also LOB 4-5, 28, 30-32
Medtronic Inc 56, 58, 60
Microsoft 6, 27
Operational Data Store, also ODS 3, 5, 7-8
Portfolio 47-51
Approach 49, 51
Governance 49
Investments 48, 50-51
Management 48-49, 51
Prioritization 47-49, 51
Product 4
Delivery and Access 4
Design 4
Launch 4
Support and Marketing 4
Management 35-38, 40-42, 49, 51, 58-59
Portfolio, see Portfolio Management
Program 37
Project 37, 41, 49, 51
Risk 35-38, 40-42, 58-59
Risks 38-41
Financial 38
Global 40
Internal Silo 38
Local 40
Political 38
Stakeholder 39
System 40
User Acceptance 39
Vendor 41
SAP 26
SOAP 12-14, 16
Sarbanes-Oxley Act 36
SeeBeyond 6
Service Level Agreements, also SLA 18, 23, 32
Service Oriented Architecture, also SOA 9-11,
13, 17-24, 33-34
Supply Chain Management, also SCM 25, 27,
29, 31, 54
TIBCO 6
Vitria 6
WSDL 12
XML 5, 12
SETLabs Briefings
BUSINESS INNOVATION through TECHNOLOGY
Editor
Praveen B Malla PhD
Associate Editor
Srinivas Padmanabhuni PhD
Graphics/Web Editor
Ramesh Ramachandran
Marketing Manager
Vijayaraghavan T S
Production Manager
Sudarshan Kumar V S
Distribution Manager
Suresh Kumar V H
How to Reach Us:
Email:
SETLabsBriefings@infosys.com
Phone:
+91-20-39167531
Fax:
+91-20-22932832
Post:
SETLabs Briefings,
B-19, Infosys Technologies Ltd.
Electronics City, Hosur Road,
Bangalore 560100, India
Subscription:
vijaytsr@infosys.com
Rights, Permission, Licensing
and Reprints:
praveen_malla@infosys.com
SETLabs Briefings is a quarterly published by Infosys Software Engineering
& Technology Labs (SETLabs) with the objective of offering fresh
perspectives on boardroom business technology. The publication aims at
becoming the most sought after source for thought leading, strategic and
experiential insights on business technology management.
SETLabs is an important part of Infosys' commitment to leadership
in innovation using technology. SETLabs anticipates and assesses the
evolution of technology and its impact on businesses and enables Infosys
to constantly synthesize what it learns and catalyze technology enabled
business transformation and thus assume leadership in providing best of
breed solutions to clients across the globe. This is achieved through research
supported by state-of-the-art labs and collaboration with industry leaders.
Infosys Technologies Ltd (NASDAQ: INFY) defines, designs and delivers
IT-enabled business solutions that help Global 2000 companies win in a
flat world. These solutions focus on providing strategic differentiation
and operational superiority to clients. Infosys creates these solutions
for its clients by leveraging its domain and business expertise along
with a complete range of services. With Infosys, clients are assured of a
transparent business partner, world-class processes, speed of execution
and the power to stretch their IT budget by leveraging the Global Delivery
Model that Infosys pioneered. To find out how Infosys can help businesses
achieve competitive advantage, visit www.infosys.com or send an email to
infosys@infosys.com
© 2007, Infosys Technologies Limited
Infosys acknowledges the proprietary rights of the trademarks and product names of the other companies
mentioned in this issue. The information provided in this document is intended for the sole use of the recipient
and for educational purposes only. Infosys makes no express or implied warranties relating to the information
contained herein or to any derived results obtained by the recipient from the use of the information in this
document. Infosys further does not guarantee the sequence, timeliness, accuracy or completeness of the
information and will not be liable in any way to the recipient for any delays, inaccuracies, errors in, or omissions
of, any of the information or in the transmission thereof, or for any damages arising there from. Opinions and
forecasts constitute our judgment at the time of release and are subject to change without notice. This document
does not contain information provided to us in confidence by our clients.
Editorial Office: SETLabs Briefings, B-19, Infosys Technologies Ltd.
Electronics City, Hosur Road, Bangalore 560100, India
Email: SetlabsBriefings@infosys.com http://www.infosys.com/technology/SETLabs-briefings.asp
Authors featured in this issue
AKASH SAURAV DAS
Akash Saurav Das is a Software Engineer with SETLabs. His research focuses on Web Services, UDDI and web services
security. He can be reached at akash_das@infosys.com.
ANUBHUTI BHARILL
Anubhuti Bharill is a Senior Analyst with the Europe Middle East and Africa business unit, Infosys. She can be reached at
anubhuti_bharill@infosys.com.
BIJI NAIR
Biji Nair is a Project Manager with the Europe Middle East and Africa business unit, Infosys. She can be reached at biji_
nair@infosys.com.
BINOOJ PURAYATH
Binooj Purayath is a Principal Architect with the Technology Consulting Group, Infosys where he focuses on SOA and Legacy
Modernization. He can be reached at binooj_purayath@infosys.com.
DAYASINDHU NAGARAJAN
Dayasindhu Nagarajan, PhD is a Principal Researcher at the Software Engineering Technology Labs, Infosys where he leads
the IT Management Group. His research and consulting interests are in business-IT alignment, IT portfolio management
and IT Governance. He can be reached at dayasindhun@infosys.com.
JAI GANESH
Jai Ganesh, PhD is a Senior Research Associate at the Web Services Centre of Excellence in SETLabs, Infosys Technologies.
His research focuses on Web services, SOA, Web 2.0, IT standards, IT strategy, Adaptive enterprises and Hypercompetitive
businesses. He can be reached at Jai_Ganesh01@infosys.com.
JAMUNA RAVI
Jamuna Ravi is a Vice-President at Infosys Technologies Limited. She has over 20 years' experience in the IT industry and is
currently managing service delivery for North America based Banking and Capital Markets customers with a team of over
7000 geographically dispersed people. She can be reached at jamuna_ravi@infosys.com.
MANISH SRIVASTAVA
Manish Srivastava is a Principal Architect with Infosys Technologies. He is leading the solution development efforts for the
Infosys-Microsoft joint technology-led transformation program called Catalytic IT. He can be reached at manishsv@infosys.com.
PARAMESWARAN SESHAN
Parameswaran Seshan is a Senior Technical Architect with Software Engineering Technology Labs, Infosys. His areas of
interest include BPM, Web services, enterprise Java and knowledge engineering. He can be contacted at parameswaran_
seshan@infosys.com.
SANJAY MOHAN
Sanjay Mohan is a Client Partner with Infosys Technologies with over 14 years of visible achievements providing technology-led
business solutions and improvement of IT organizations / IT management practices. He can be reached at sanjay_
mohan@infosys.com.
SIVA NANDIWADA
Siva Nandiwada is Head of Strategic Planning for the Insurance, Healthcare & Life Sciences business unit, Infosys. He has
several years of experience in managing large-scale technology-led business transformation programs, consulting, client
relationships and strategic planning. He can be contacted at siva_nandiwada@infosys.com.
SRIRAM ANAND
Sriram Anand PhD was a Principal Researcher with the Web Services Centre of Excellence at the Software Engineering &
Technology Labs, Infosys.
SRIRAM PADMANABHAN
Sriram Padmanabhan is a Senior Engagement Manager at Infosys with the responsibility to manage one of Infosys largest
engagements in the capital markets industry. He has managed several software projects for large banks and financial
institutions across geographies. He can be reached at psriram@infosys.com.
Subu Goparaju
Vice President
and Head of SETLabs
At SETLabs, we constantly look for opportunities to leverage tech-
nology while creating and implementing innovative business solu-
tions for our clients. As part of this quest, we develop engineering
methodologies that help Infosys implement these solutions right first
time and every time.
For information on obtaining additional copies, reprinting or translating articles, and all other correspondence,
please contact:
Telephone : 91-80-41173878
Email: SetlabsBriefings@infosys.com