
Imagine being able to spin the clock forward to peek into the world of the future – to wonder at the advances of society, business and technology five to ten years from now.

That’s exactly what IBM Research has strived to do since 1982 with the Global Technology Outlook (GTO). IBM Research and its global community of some of the world’s top scientists consider the cultural and business applications in which technologies will be used – and the impact they will have on IBM and the world.

The GTO provides a comprehensive analysis of that vision. In past years the GTO has predicted such trends as autonomic and cloud computing, the rise of nanotechnology, and the ability to create smarter cities and smarter systems through the application of analytics.

An intensive year of work goes into each GTO – generating the ideas, gathering data, debating the issues with colleagues and peers, and then presenting a final report to IBM Chairman and CEO Samuel Palmisano. Throughout the process, the analysis of the GTO topics focuses on both the technology at hand and the business and societal implications of each trend being analyzed.

Inside IBM, the GTO is used to define areas of focus and investment. Externally, it is shared broadly with a range of IT influencers – including clients, academics, and partners – through education and client briefings.

This outlook is not designed to singularly benefit IBM. In fact, some of the trends explored may be well beyond IBM’s portfolio of offerings. And that’s what makes the GTO such a success each year – providing clients and partners with an impartial take on the world and on the evolution of IT across business, economic and natural systems.

This year’s GTO is no different.

When IBM set its sights on helping to create a smarter planet two years ago, we predicted the evolution of an increasingly instrumented, intelligent and interconnected world. Since that time, the application of technology has helped create real-world monitoring and data prediction for water systems, cities and municipalities, utilities and power grids, financial and logistics providers, and traffic and transportation entities.

The next phase of smarter planet looks to build upon that progress and momentum. How can these systems be optimized, adapted and reused to create increasingly intelligent modeling and data-based prediction? Can technology help transform healthcare to provide better patient outcomes while reducing costs? Will advances in software help businesses leap past the performance limitations of hardware? Can new computing models like cloud and hybrid computing help manage legacy hardware, processes and applications? What can be learned by looking across multiple systems and platforms that can help optimize workloads?

The 2010 Global Technology Outlook that follows takes a far-reaching view into these challenges at a global level and offers insights into the future. This report is designed so that your organization can benefit from these insights as much as we have here at IBM.

Introduction
The increasing complexity of healthcare is a major contributor to rising costs. At the same time, healthcare is becoming an IT-based industry, based on a continuum of machine-generated digital information.

Today, emerging early diagnostics can lead to earlier intervention, resulting in improved outcomes and lower costs at the front end. But what emerges at the back end are reams of data that, when added to the data already out there, potentially complicate decision making at the point of care.

To improve healthcare we must be able to make data more usable by synthesizing evidence from
it, making the data more readily available at the
point of need.

Improvements in the delivery of healthcare increasingly will rely on having usable evidence
readily available at the point of care. Systems
that are designed to provide clinical decision
intelligence and evidence-based assistance can
improve the ability of providers to anticipate
need and tailor delivery.

Evidence generation
The process of transforming data into evidence
is called the generation phase. It is in this phase
that comparative effectiveness and practice-
based evidence studies, using the best available
evidence gained from the scientific method, are
used to make medical decisions.

To accomplish this, cloud computing and data trust infrastructures are used for data collection,
aggregation, integration, information
management and curation. The resulting
federated data structures then are available for
evidence generation using smart analytics tools.
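
To make the flow from federated data to consumable evidence concrete, here is a minimal sketch, not taken from the GTO, that pools outcome records from several hypothetical sources and derives a simple comparative-effectiveness summary; the record fields, treatment names and recovery-rate metric are all illustrative assumptions.

import java.util.*;
import java.util.stream.*;

/** Illustrative sketch: synthesize simple "evidence" (recovery rate per treatment)
 *  from records federated across several hypothetical data sources. */
public class EvidenceSketch {

    // A hypothetical, simplified outcome record.
    record Outcome(String treatment, boolean recovered) {}

    public static void main(String[] args) {
        // Stand-ins for federated sources (registries, providers, payers).
        List<List<Outcome>> federatedSources = List.of(
            List.of(new Outcome("drugA", true),  new Outcome("drugB", false)),
            List.of(new Outcome("drugA", true),  new Outcome("drugA", false)),
            List.of(new Outcome("drugB", true),  new Outcome("drugB", true)));

        // Aggregate across sources, then compute the recovery rate per treatment.
        Map<String, Double> recoveryRate = federatedSources.stream()
            .flatMap(List::stream)
            .collect(Collectors.groupingBy(
                Outcome::treatment,
                Collectors.averagingDouble(o -> o.recovered() ? 1.0 : 0.0)));

        // The "evidence" made available at the point of care.
        recoveryRate.forEach((treatment, rate) ->
            System.out.printf("%s: %.0f%% recovery in pooled data%n", treatment, rate * 100));
    }
}

A real evidence registry would apply far richer analytics, but the shape of the pipeline is the same: collect, aggregate, analyze, then publish for consumption at the point of need.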

The ensuing evidence, whether in evidence registries or other forms, then can be easily
synthesized, extracted, disseminated and
consumed by patients, providers, regulators,
researchers, and wellness management services.

Service quality
Through business process transformation, and by increasing the use of IT to generate and process data and to produce data-based evidence, care providers and related entities can improve workflow efficiency, increase safety and reduce errors. This will enable new collaborations
among the various entities in the healthcare
ecosystem, such as providers, insurers, payers,
patients, pharmacies and regulators. It also will
enable new delivery models, such as patient-
centered collaborative care.

Novel incentives
Insurers and services firms may offer financial
incentives to healthcare providers and
practitioners to adopt evidence-based systems,
allowing them to reduce costs while improving
patient care.

Evidence-informed, outcome-based payment modeling and contract optimization services
will help create and sustain a virtuous cycle
where outcome-based payment incentives lead
to improved outcomes and demand for
evidence at the point of need. This drives large-scale evidence generation and comparative effectiveness studies, which in turn enable the creation
and continued evolution of an evidence-centric
healthcare ecosystem.

Conclusion
As healthcare is evolving to become an
increasingly IT-based industry, evidence rapidly
is becoming its currency. It accelerates the
transport and availability of data; enables new
modeling capabilities, analysis and
consumption; and does so while simplifying a
complex industry.

The vision of a smarter planet – one that is highly instrumented, interconnected and intelligent – is one that already is in motion. Projects underway in smarter city programs in locations as diverse as the Isle of Malta; Dublin, Ireland; and Dubuque, Iowa, are laying the groundwork.

IBM CEO Sam Palmisano challenged IBMers to create 300 smarter solutions in partnership with clients in 2009. More than 1,200 examples were brought forward in every major industry, both in the developed and developing world.

As a result, hundreds of projects are underway with clients and partners ranging in size from local fire and police departments to country-level governments. Transportation systems, financial systems, and energy and transmission systems are being monitored; environmental systems, including tides and currents, are being measured; and millions of transactions are being captured and analyzed using technology and solutions designed to help them be better understood, managed and made smarter.

As these systems become more pervasive, we will have unprecedented real-time visibility into
power grids, traffic and transportation systems,
water management, oil & gas supplies, and
financial services. And at some point, we will
need them to share their information more
effectively and ‘talk’ to each other as part of a
larger framework.

Modeling the smarter planet


A smarter planet solution is a closed-loop
‘system of systems’. The starting point of such a
solution is always the real world itself – whether
it is a smarter grid, building, supply chain or
water system. The instrumentation integrated
into these systems provides the mechanism to
capture real world observations through digital
models. Since the real worlds are
interconnected and interdependent, the
modeled worlds will be too.

Connecting these systems will require digital representations to help assess the complexity,
maneuver through environmental variables and
achieve more predictive outcomes.
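
As a rough illustration of such a digital representation, the sketch below is purely hypothetical rather than anything specified in the GTO: it holds a small set of timestamped sensor readings, fills a gap by linear interpolation and extrapolates one step ahead. The water-level scenario and the linear model are assumptions.

import java.util.*;

/** Illustrative digital-model sketch: interpolate a missing observation and
 *  extrapolate one step ahead from timestamped sensor readings. */
public class DigitalModelSketch {

    /** Linear interpolation between two known (time, value) observations. */
    static double interpolate(double t, double t0, double v0, double t1, double v1) {
        return v0 + (v1 - v0) * (t - t0) / (t1 - t0);
    }

    public static void main(String[] args) {
        // Hypothetical water-level readings (hour -> metres); hour 2 is missing.
        NavigableMap<Double, Double> readings = new TreeMap<>();
        readings.put(0.0, 1.10);
        readings.put(1.0, 1.25);
        readings.put(3.0, 1.70);

        // Fill the gap at hour 2 from its neighbours.
        double filled = interpolate(2.0, 1.0, readings.get(1.0), 3.0, readings.get(3.0));
        readings.put(2.0, filled);

        // Naive extrapolation: continue the most recent trend one hour ahead.
        double predicted = interpolate(4.0, 2.0, readings.get(2.0), 3.0, readings.get(3.0));

        System.out.printf("Interpolated level at hour 2: %.2f m%n", filled);
        System.out.printf("Predicted level at hour 4:    %.2f m%n", predicted);
    }
}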
These models will help assimilate and stitch together the captured data, helping to interpolate or extrapolate into areas where data is not yet available. This will help generate a predictive analysis to reach the most plausible theory to explain the available information. Assistive technologies and increasingly intelligent instrumentation and modeling then can be applied to real-world systems.

As the components for analyzing these systems become more standardized, data from other types of models – business and enterprise systems, physical IT networks, social networking, industry frameworks, behavior models – can then be captured and shared across platforms from varying vantage points.

It is through orchestrating these models that we will be able to capture the digital representations of how the business, human, and physical worlds interact and to predict, manage, and provide continuous progress in a smarter planet.

Conclusion
Hundreds of projects designed to make a variety of systems more instrumented, integrated and intelligent already are underway globally. Systems as varied as utilities, energy and transmission grids, waterways and financial transactions are being measured and monitored to help make them smarter.

A platform for enabling and facilitating synthesis and orchestration across these environments and multi-modal information systems is potentially the most promising anchor point for creating repeatable smarter planet solutions.

Broad opportunities also will emerge from extending the coverage of interconnected world models to unlock insights and predict business outcomes.

In the near future, three software trends spanning the entire software stack will significantly impact enterprises and their IT support.

First, at the top layer, or business layer, upper middleware will give executives increased visibility and control over how their business operates.

Next, at the platform layer, new control technologies will enable clients to dynamically determine which elements of their IT stay in-house and which elements are "outsourced" to a cloud computing provider.

Finally, the development of a new programming language can help software developers and consumers more easily access parallel computing abilities through multi-core and hybrid architectures.

Upper middleware
Today’s complex application portfolios, including legacy, custom and packaged applications, support important operational functions but limit the ability to innovate, differentiate and compete. The applications’ fixed business processes, rules and information models frequently make it expensive and time-consuming to alter operations to reflect the changing, often dynamic, needs of businesses. As a result, enterprises operate on application logic as opposed to business needs, making it increasingly difficult to shed obsolete applications and innovate to meet those needs.

However, a confluence of emerging technologies – including improved modeling, analytics, rules engines, discovery mechanisms and monitoring tools – is making it possible to return more control of business operations to business executives.

Upper middleware offers a new perspective on business operations and uses new approaches to
business-level models to describe, in business
terms, how a business needs to operate. This
enables business users to define and manage
end-to-end processes and optimize business
outcomes. It also enables business users to think
holistically about the processes, information,
business rules and analytics in a business
context. Then, discovery mechanisms help business users understand how to leverage their
current application environment and other IT
investments to innovate operationally.
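
One way to picture a business-level model is a rule that a business user owns as data rather than code. The sketch below is a hypothetical illustration in plain Java, not any specific IBM rules engine; the order fields, threshold and routing outcomes are assumptions.

import java.util.*;
import java.util.function.Predicate;

/** Illustrative sketch: a business rule kept outside application logic,
 *  so a business user could change the threshold without touching code paths. */
public class BusinessRuleSketch {

    record Order(String customer, double amount, boolean priorityCustomer) {}

    public static void main(String[] args) {
        // Hypothetical business-level rule: "orders over 10,000, or from priority
        // customers, require expedited handling". The threshold is data, not code.
        double expediteThreshold = 10_000.0;
        Predicate<Order> requiresExpedite =
            o -> o.amount() > expediteThreshold || o.priorityCustomer();

        List<Order> orders = List.of(
            new Order("Acme", 12_500.0, false),
            new Order("Beta",  3_000.0, true),
            new Order("Gamma", 1_200.0, false));

        for (Order o : orders) {
            String route = requiresExpedite.test(o) ? "expedited" : "standard";
            System.out.println(o.customer() + " -> " + route + " process");
        }
    }
}

Because the rule lives outside the application logic, changing it is a business decision rather than a software release.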

Client controlled cloud


Cloud computing has the potential to become
the dominant means for delivery of computing
infrastructure and applications on the Internet.
As cloud takes hold, the control components of
software that are used to create, configure and
adjust the nature of applications will be
separating out from the cloud data functions and moving to the edge of the enterprise.

A significant result of cloud computing is the rapidity and ease of creating new computing functions and services in the network. However, developing new software applications may be of limited use to established businesses that have a significant amount of existing data sources and applications.

New value is more commonly derived by combining the existing data and applications with new types of processing functions. This combination process creates a network of applications and data services, jointly referred to as computing services. The advent of cloud computing can help enterprises derive business value by keeping some critical IT elements in-house and outsourcing other elements into the cloud.

The challenge, however, is maintaining control of the policies and guidelines that such cloud outsourcing entails.

Although the separated control functions can take many different forms, one possible manifestation could be that these functions emerge in the form of an on-premises appliance system or software layer. The on-premises system will result in the creation of what we are calling a Client Controlled Cloud (C3). In the C3 paradigm, computing services are created and composed from a variety of cloud providers and existing services in the enterprise to create solutions for the enterprise. However, the control and management of such solutions remains strongly within the administrative domain of the enterprise client.
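
One way to picture the C3 control layer is as a client-owned placement policy that decides, service by service, what may run at a cloud provider and what stays in-house. The sketch below is a hypothetical illustration; the service names, the regulated-data flag and the elasticity threshold are assumptions, not part of the GTO.

import java.util.*;

/** Illustrative sketch: an on-premises control layer deciding, per service,
 *  what may run at a cloud provider and what must stay inside the enterprise. */
public class ClientControlledCloudSketch {

    record Service(String name, boolean handlesRegulatedData, double peakLoadFactor) {}

    /** A client-owned placement policy: regulated data stays in-house;
     *  highly elastic workloads are candidates for the cloud. */
    static String place(Service s) {
        if (s.handlesRegulatedData()) return "in-house";
        return s.peakLoadFactor() > 2.0 ? "cloud provider" : "in-house";
    }

    public static void main(String[] args) {
        List<Service> services = List.of(
            new Service("patient-records", true, 1.2),
            new Service("web-storefront",  false, 5.0),
            new Service("payroll-batch",   false, 1.1));

        for (Service s : services) {
            System.out.println(s.name() + " -> " + place(s));
        }
    }
}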
Multicore programming model
Due to power issues, hardware alone no longer can provide the single-thread performance increases that the industry has seen over the past few decades. Instead, performance increases will come from parallelism, either homogeneous (via multicore processing) or heterogeneous (via hybrid systems). In addition to significantly affecting applications that demand more performance, this shift may also affect the productivity of software development. In the past, parallelism was optional and attracted expert programmers, but from now on, all applications that require a certain level of performance will need to exploit parallelism.

Unfortunately, the number of skilled programmers who can thrive with existing parallel programming models is not growing, while new application domains are leading towards an increasing number of non-expert programmers writing software.

Although many companies are exploring new programming models for the multicore era, most efforts are C/C++ derivatives, which will narrow the set of programmers who can safely exploit parallelism and, in the process, lose the productivity gains that have been realized with Java.

In contrast, IBM is developing X10, an evolution of the Java programming language for concurrency and heterogeneity, leveraging more than five years of research funded by the Defense Advanced Research Projects Agency (DARPA). IBM is investing significantly in developing the ecosystem around X10, with collaborations with two dozen universities.

X10 allows programmers to expose fine-grained
concurrency in their applications, which should
be a good match for the multicore era. It also
enables programmers to naturally describe
heterogeneous computation at the language
level without the tedious message-passing
constructs. X10 has been used to demonstrate a
6x productivity improvement over C with the Message Passing Interface (MPI).
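
X10 source code is not reproduced in this report, so the sketch below approximates, in plain Java using the fork/join framework, the kind of fine-grained, structured task parallelism X10 is designed to expose. This is an analogy rather than X10 itself, and the array-summing workload and split threshold are assumptions.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

/** Analogy in plain Java: fine-grained divide-and-conquer parallelism,
 *  conceptually similar to what X10 expresses at the language level. */
public class ParallelSumSketch extends RecursiveTask<Long> {

    private static final int THRESHOLD = 1_000;
    private final long[] data;
    private final int from, to;

    ParallelSumSketch(long[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {             // small enough: compute directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        ParallelSumSketch left = new ParallelSumSketch(data, from, mid);
        ParallelSumSketch right = new ParallelSumSketch(data, mid, to);
        left.fork();                              // run the left half as a separate task
        return right.compute() + left.join();     // compute the right half, then join
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        long sum = ForkJoinPool.commonPool().invoke(new ParallelSumSketch(data, 0, data.length));
        System.out.println("Sum = " + sum);       // work is spread across available cores
    }
}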

Conclusion
Software will play an increasingly important
role in helping to achieve new computing
performance standards and maximizing the
potential of distributed or workload optimized
computing systems.

Upper middleware will help provide a unified dashboard view to non-technical users across
layers, provide a holistic business view that will
enable better decision making and improve the
ability to shed legacy applications for more
transformative ones. Enhanced discovery tools
will help this operational innovation model
evolve.

As cloud computing becomes more pervasive and easier to implement, automatic provisioning at the platform layer will simplify decision making on which functions and applications can be outsourced or moved to the edge of the enterprise.

By adopting the X10 programming language, software developers will contribute to the future
of software-based computing acceleration across
increasingly interoperable, scalable, parallel
computing systems.

The average corporation spends around 70 percent of its IT budget to keep existing operations going, with a significant portion allocated to maintaining and improving legacy equipment.

Many more companies spend close to 85 percent of their IT budget on ongoing operations, leaving only 15 percent available for innovation. This level of constraint puts these companies at high risk of completely losing their IT-driven business agility.

Highly efficient companies, on the other hand, spend only 60 percent of their IT budget for
ongoing operations, leaving 40 percent for new
and innovative initiatives. Such companies have
a competitive edge based on continued high
business agility and the ability to constantly
differentiate.

Legacy is by no means limited to ‘old hardware.’ It actually is found across the entire
IT value stack. As soon as a business model,
process, software, data format or infrastructure
is deployed, it is considered to be legacy. It even
includes soft factors like 'know-how' legacy.
And as with all value stacks, handling it in an
integrated way results in the highest value.

New and emerging technologies and new business realities are changing the game for
legacy efficiency improvements across the
legacy value stack. These include:

The integrated legacy control loop
To help manage and actively handle these legacy classes in a repeatable pattern, a legacy control loop should be implemented. This control loop would feature three defined positions: ‘Identify’, ‘Improve’ and ‘Operate’.

Each position in the loop would feature a variety of actions supported by corresponding tool sets and value prediction tools.

There are two ways to apply the legacy control loop. It can be applied selectively to one legacy class, even to a single legacy item. But when applied in the form of an integrated legacy service offering across several classes, it provides significantly higher value.

For the ‘Identify’ position of the control loop, a legacy dashboard and workbench can be used to map applications, data and infrastructure to a business process – and vice versa. This would help provide valuable insight into the complex relationships between infrastructure, data and applications and the related business processes. It also could establish a legacy inventory to help facilitate and maintain continued optimization.
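
A very small sketch of the kind of inventory such a dashboard might maintain is shown below; the application names, deployment years, business processes and the ten-year ageing rule are purely hypothetical.

import java.util.*;

/** Illustrative sketch: a legacy inventory mapping applications to the
 *  business processes they support, so candidates for improvement stand out. */
public class LegacyInventorySketch {

    record Application(String name, int deployedYear, String businessProcess) {}

    public static void main(String[] args) {
        List<Application> inventory = List.of(
            new Application("OrderEntryV1", 1998, "Order-to-Cash"),
            new Application("BillingNG",    2008, "Order-to-Cash"),
            new Application("HRPortal",     2003, "Hire-to-Retire"));

        int currentYear = 2010;

        // Group applications by the business process they support.
        Map<String, List<Application>> byProcess = new TreeMap<>();
        for (Application app : inventory) {
            byProcess.computeIfAbsent(app.businessProcess(), k -> new ArrayList<>()).add(app);
        }

        // Flag applications deployed more than ten years ago as improvement candidates.
        byProcess.forEach((process, apps) -> {
            System.out.println(process + ":");
            for (Application app : apps) {
                boolean aged = currentYear - app.deployedYear() > 10;
                System.out.println("  " + app.name() + (aged ? "  <- improvement candidate" : ""));
            }
        });
    }
}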

The ‘Improve’ position of the legacy control loop includes all actions suitable for positively changing the legacy system. While the part of the legacy that is considered a burden is reduced, the valuable heritage within the legacy is fostered and better leveraged. A comprehensive tool suite would help leverage the insights gained from the legacy dashboard.

Last but not least, the ‘Operate’ position of the legacy control loop ensures continued operation, and includes actions to reduce future legacy issues. As part of this, a legacy council would, for example, review all IT procurements in order to ensure that new acquisitions are ‘legacy proven’.

Conclusion
In order to maintain high business agility and the ability to differentiate, constant and repeated legacy efficiency improvement, with double-digit savings per year in ongoing IT operations, is a must.

To achieve this, two key imperatives have to be followed. First, legacy efficiency improvements have to be applied in an integrated fashion across the legacy value stack layers, not just to one layer at a time. Second, a structured approach needs to be leveraged to guarantee repeated savings, which are achieved by repeatedly applying the legacy control loop with the three basic steps of identify, improve and operate.
The ongoing, exponential increase in wireless traffic globally is driving the need for more efficient use of both radio spectrum and wireless infrastructure capacity.

Spectrum constraints are due to both the limitations of current radio-frequency allocation and the limitations of radio interference. Wireless infrastructure capacity is constrained by network bandwidth limitations, upgrade costs and logistics, and the legacy of existing installations.

Both sets of constraints are becoming bottlenecks. Addressing them offers significant
opportunities for holistic, system-wide
optimizations that can create new efficiencies
and new business opportunities.

From an application perspective, there are three major areas driving significant growth of
wireless traffic:

The existing wireless infrastructure
Current wireless network architecture has three regions: the radio access network (RAN), the backhaul to the radio network controller (via microwave or fiber), and the core network.

Each of these three architectural regions poses challenges:

The RAN is limited by the reusable spectrum for wireless mobile communication. For instance, the data-intensive applications popularized by devices such as the Apple iPhone™ are challenging the cellular wireless
network’s ability to effectively support user hot
spots. The wireless industry is trying to solve
these issues by making wireless cells smaller in
hot spots and using more complex signal
processing to improve spectral efficiency. The
smaller cells will drive the need to manage
radically distributed systems, and the signal-
processing intensive solutions drive the need for
new and more powerful systems and system
architectures.

Backhaul capacity is limited primarily due to the proliferation of cells to address spectrum re-use.
Current industry approaches to address the
growth of aggregate traffic involve upgrades of
microwave backhaul or installation of fiber
backhaul, both of which require substantial
capital investments. It is important to reduce
the growth of traffic over backhaul links using
approaches such as content caching and traffic
shaping at the wireless edge.
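
As an illustration of the caching idea, the toy sketch below models a least-recently-used content cache at the wireless edge, where every cache hit is a request that never crosses the backhaul; the cache capacity and request trace are assumptions, not figures from the report.

import java.util.*;

/** Toy sketch: an LRU content cache at the wireless edge; every cache hit is a
 *  request that does not have to traverse the backhaul to the core network. */
public class EdgeCacheSketch {

    public static void main(String[] args) {
        final int capacity = 3; // hypothetical cache size, in content items

        // A LinkedHashMap in access order evicts the least recently used entry.
        Map<String, byte[]> cache = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > capacity;
            }
        };

        String[] requests = {"videoA", "videoB", "videoA", "videoC", "videoD", "videoA"};
        int backhaulFetches = 0;

        for (String item : requests) {
            if (!cache.containsKey(item)) {        // cache miss: fetch over the backhaul
                backhaulFetches++;
                cache.put(item, new byte[0]);      // stand-in for the cached content
            } else {
                cache.get(item);                   // touch entry to mark it recently used
            }
        }

        System.out.println("Requests: " + requests.length + ", backhaul fetches: " + backhaulFetches);
    }
}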

Core networks involve multiple tiers, typically with large latency for media applications. A flatter and more intelligent core network is required to reduce latency for media applications and to enable advanced services to overcome performance variability in the wireless network.

New approaches for the future
To address these constraints and challenges, new technologies are emerging for the future wireless infrastructure.
Conclusion
In the future, given the huge anticipated growth in wireless bandwidth requirements, and against the backdrop of both spectrum and infrastructure constraints, there will be a significant opportunity for new systems and solutions that enable services at the edge of the wireless network.

A key dimension of this opportunity will be the ability to enable and optimize service quality, mobility, and cost using computational IT
platforms, tools, practices and approaches. This
profound shift represents a new paradigm of
wireless and IT infrastructure convergence.

Over the next several years, there will be a significant transformation of IT delivery, with increasing focus on manageability, reduction in operating costs, and improving the time-to-value of IT for line-of-business owners and customers.

This will include a selective adoption of "private clouds" and cloud services and a re-thinking of the traditional layer-by-layer approach of building and managing IT systems, with their multiple layers of applications, middleware, infrastructure software, servers, networks, and storage.

In the traditional approach, integration of the overall "system" happens only in the customer data center, resulting in significant integration cost, elongated deployment time, and legacy management complexity. The resulting environment is difficult to adapt to changes in demand and usage.

IT suppliers, including IBM, will address these problems by introducing Workload Optimized Systems (WOS) that are pre-integrated and optimized for important application domains, such as data analytics, business applications, web/collaboration, compliance and archiving.

These WOS will deliver transformational improvements in client value by integrating and optimizing the complete system at all layers.

For example, a WOS for data analytics could transform user interaction with large data warehouses by reducing the variance in the time to satisfy a user query to the data warehouse.

Workload optimized continuum
The client value provided by WOS includes improvements in functionality, reliability, availability, time to deployment (which is also time-to-value), as well as reductions in operating costs through better manageability and improved energy efficiency.

In current industry practice, we see increasing functionality and client value with improvements in individual components over time. For instance, processing capability based on transistor density has been increasing with Moore’s Law (2x every two years). However, the resulting boost in performance increasingly is in the form of more parallelism (more cores, more threads per core). Delivering application-level increases in performance will require optimization of the software stack to exploit this parallelism. If such changes are not made, progress will not keep pace.
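
As a small, hypothetical illustration of why the software stack matters, the sketch below expresses the same aggregation sequentially and in a parallel form; only the latter lets the runtime spread work across additional cores, which is the kind of stack-level change a workload optimized design would build in. The workload and timing harness are assumptions.

import java.util.stream.LongStream;

/** Illustrative sketch: the same aggregation expressed sequentially and in a
 *  form that lets the runtime spread the work across all available cores. */
public class ParallelAggregationSketch {

    public static void main(String[] args) {
        long n = 50_000_000L;

        long t0 = System.nanoTime();
        long serial = LongStream.rangeClosed(1, n).map(x -> x % 7).sum();
        long t1 = System.nanoTime();
        long parallel = LongStream.rangeClosed(1, n).parallel().map(x -> x % 7).sum();
        long t2 = System.nanoTime();

        System.out.printf("serial:   %d in %d ms%n", serial, (t1 - t0) / 1_000_000);
        System.out.printf("parallel: %d in %d ms (%d cores available)%n",
                parallel, (t2 - t1) / 1_000_000,
                Runtime.getRuntime().availableProcessors());
    }
}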

In this GTO topic, a WOS continuum is defined that demonstrates an increase in client
value – at a rate that is much higher than
current industry practice. The following
describes three levels in the WOS continuum:
integration, customized integration and hardware/software co-optimization.

Infrastructure and technology
In addition to looking at some WOS examples, one also must investigate the underlying system infrastructure. In looking at many important classes of workload optimized systems, IBM researchers have discovered that there are common patterns found in them. As this new paradigm of systems establishes itself, it is anticipated that only a few platforms – numbering in the single digits – will emerge to support workload optimized systems.

A platform for WOS will comprise a set of common components.

These systems will require system-wide co-optimization of hardware and software, and will exhibit a tight connection between software, compute elements, extended memory and storage elements, and high-speed networks. As new technologies emerge in these areas, WOS can exploit the new co-optimization opportunities across the HW/SW stack.

Conclusion
Increasing client value will be delivered through different levels of integration, moving through the continuum of WOS to deliver high levels of co-design and co-optimization across the hardware and software stack. Successive refinement and expansion of customer value through a multi-year roadmap will be necessary to overcome any perception that a WOS is a one-of-a-kind hardware solution for each application, and to demonstrate that many application/workload classes can be supported by a small set of workload optimized platforms.
International Business Machines Corporation
New Orchard Road, Armonk, NY 10504

IBM, the IBM logo and ibm.com are registered trademarks or trademarks of International Business Machines Corporation
in the United States and/or other countries. Other company, product and service names may be trademarks or service
marks of others. © Copyright IBM Corporation 2009. All rights reserved.
