
Integrated Reservoir Interpretation

Reservoir management, integrated data interpretation, multidisciplinary asset teams, synergy: these are the buzzwords of modern reservoir engineering. They point to the efficient use of all types of data to better understand reservoirs and to ultimately produce more hydrocarbon less expensively. But is vision outpacing the tools available? We describe how better reservoir understanding is being achieved in practice.
Ed Caamano
Ken Dickerman
Mick Thornton
Conoco Indonesia Inc.
Jakarta, Indonesia
Chip Corbett
David Douglas
Phil Schultz
Houston, Texas, USA
Roopa Gir
Barry Nicholson
Jakarta, Indonesia
Dwi Martono
Joko Padmono
Kiagus Novias
Sigit Suroso
Pertamina Sumbagut
Brandan, North Sumatra, Indonesia
Gilles Mathieu
Clamart, France
Zhao Yan
China National Petroleum Company
Beijing, China
For help in preparation of this article, thanks to Tom
Bundy, Conoco Indonesia Inc., Jakarta, Indonesia; Gilles
Bitoun and Rune Hope, Total Indonesie, Jakarta, Indonesia; Dharmawan Samsu, Arco, Jakarta, Indonesia; Bill
Harmony, Gerry Dyer and John Rice of Maxus, Jakarta,
Indonesia; Ron Boulter, Beijing, China; John Bradfield,
Abraham Baktiar and Ron Mobed, GeoQuest, Jakarta,
Indonesia; Hashem Bagherpour, Mustafa Biterge, Graham Bunn, Andrew Carnegie, Metin Karakas, Bahman
Samimi, GeoQuest, Dubai, United Arab Emirates; Ben
Lovell, Simon Robson, Steve Simson and Robert
Sorensen, GeoQuest, Gatwick, England; Christine
Economides, Schlumberger, Houston, Texas, USA; Douglas Gray-Stephens, Schlumberger Cambridge Research,
Cambridge, England.


Every field is unique, and not just in its geology. Size, geographical location, production history, available data, the field's role in overall company strategy, the nature of its hydrocarbon: all these factors determine how reservoir engineers attempt to maximize production potential. Perhaps the only commonality is that decisions are ultimately based on the best interpretation of data. For that task, there is a variability to match the fields being interpreted.
In an oil company with separate geological, geophysical and reservoir engineering departments, the interpretation of data tends to be sequential. Each discipline contributes and then hands over to the next discipline. At the end of the line, the reservoir engineer attempts to reconcile the cumulative understanding of the reservoir with its actual behavior. Isolated from the geologist and geophysicist who have already made their contributions, the reservoir engineer can play with parameters such as porosity, saturation and permeability, but is usually barred, because of practical difficulties, from adjusting the underlying reservoir geometry.1
This scenario is giving way to the integrated asset team, in which all the relevant disciplines work together, hopefully in close enough harmony that each individual's expertise can benefit from insight provided by others in the team. There is plenty of motivation for seeking this route, at any stage of a field's development. Reservoirs are so complex, and the art and science of characterizing them still so convoluted, that the uncertainties in exploitation, from exploration to maturity, are generally higher than most would care to admit.

In this article, ELAN (Elemental Log Analysis), Finder, Fortress (Formation Reservoir Test System), GeoFrame, Geopulse, Geoshare, GeoViz, IES (Integrated Exploration System), LogDB, MeshBuilder, ModelBuilder, RFT (Repeat Formation Tester), RM (Reservoir Modeling) and TDT (Thermal Decay Time) and WellTie are marks of Schlumberger; Eclipse is a mark of Intera ECL Petroleum Technologies; Excel is a mark of Microsoft Corporation; SigmaView is a mark of Western Atlas; Zycor is a mark of Landmark Graphics Corp.

1. For an overview of reservoir management:
Briggs P, Corrigan T, Fetkovich M, Gouilloud M, Lo T-W, Paulsson B, Saleri N, Warrender J and Weber K: "Trends in Reservoir Management," Oilfield Review 4, no. 1 (January 1992): 8-24.
In theory, uncertainty during the life of a field goes as follows: During exploration, uncertainty is highest. It diminishes as appraisal wells are drilled and key financial decisions have to be made regarding expensive production facilities; for offshore fields, these typically account for around 40% of the capital outlay during the life of the field. As the field is developed, uncertainty on how to most efficiently exploit it diminishes further. By the time the field is dying, reservoir engineers understand their field perfectly.
A realistic scenario may be more like this: During exploration, uncertainty is high. But during appraisal, the need for crucial decisions may encourage tighter bounds on the reservoir's potential than are justifiable. Later, as the field is developed, production fails to match expectations, and more data, for example 3D seismic data, have to be acquired to plug holes in the reservoir understanding. Uncertainties begin to increase rather than diminish. They may even remain high as parts of the field become unproducible due to water breakthrough and as reservoir engineers still struggle to fathom the field's intricacies.

Oilfield Review

[Map: western China, showing Xinjiang province with Urumqi, Kashgar, the Karamay field in the Junggar basin, and the Tarim basin, bordered by Kazakhstan, Mongolia, Gansu and Qinghai. Scale bar: 500 km/500 miles.]
Xinjiang province in western China, where the RM Reservoir Modeling package has been selected for several oil fields. Since the early 1990s, China National Petroleum Company (CNPC) has placed increased emphasis on reservoir characterization to better estimate field reserves and to optimize development drilling.
Xinjiang province is best known for the Tarim basin, the largest basin in the world still awaiting significant exploration. The RM package is being deployed farther north in the Junggar basin, which contains the large Karamay oil field. Three new fields have recently been discovered here, and the RM package, coupled with GeoQuest seismic interpretation software, played a key part in their discovery. Two of these fields are currently undergoing development drilling, and the drilling success rate has exceeded 90%.
Asset teams go a long way toward maximizing understanding of the reservoir and placing a realistic uncertainty on reservoir behavior. They are the best bet for making most sense of the available data. What they may lack, however, are the right tools. Today, interpretation is mainly performed on workstations, with the raw and interpreted data paraded in their full multidimensionality on the monitor. Occasionally, hard-copy output is still the preferred medium; for example, logs taped to walls for correlating large numbers of wells.

There are workstation packages for 3D seismic interpretation, for mapping, for viewing different parts of the reservoir in three dimensions, for petrophysical interpretation in wells (see "Beating the Exploration Schedule with Integrated Data Interpretation: Cam Oil's Experience," page 10), for performing geostatistical modeling in unsampled areas of the reservoir, for creating a grid for simulation, for simulating reservoir behavior, and more. But for the reservoir manager, these fragmented offerings lack cohesion. In a perceived absence of an integrated reservoir management package, many oil companies pick different packages for each specific application and then connect them serially.

July 1994

For example, this could be the workstation/package lineup for XYZ Oil Company:
• GeoQuest's IES Integrated Exploration System for seismic interpretation
• Landmark Graphics Corp.'s Zycor package for mapping
• Western Atlas' SigmaView package for seismic inversion
• Stratamodel Inc.'s package for geological model building and 3D visualization
• GeoQuest's GeoFrame platform for petrophysical log interpretation
• Intera's PVT analysis, gridding and Eclipse simulation packages for reservoir engineering
• GeoQuest's Finder package for database management
• Microsoft Corporation's Excel spreadsheet program for collating data and making reserve estimates.
Any number of combinations is possible. The choice depends on oil company preferences, the history of the field and the problem being addressed. Modeling a mature elephant field in the Middle East with hundreds of wells and poor seismic data may require a different selection of tools than a newly discovered field having high-quality 3D seismic coverage and a handful of appraisal wells. Reservoir management problems vary from replanning an injection strategy for mature fields, to selecting horizontal well trajectories for optimum recovery, to simply estimating the reserves in a new discovery about to be exploited.
Whatever the scenario, the tactic of stringing together diverse packages creates several problems. First is data compatibility. Since the industry has yet to firm up a definitive geoscience data model, each package is likely to accept and output data in slightly different ways (see "Managing Oilfield Data Management," page 32). This forces a certain amount of data translation as the interpretation moves forward; indeed, a small industry has emerged to perform such translation. Second, the data management system supporting this fragmented activity must somehow keep track of the interpretation as it evolves. Ideally, the reservoir manager needs to know the history of the project, who made what changes, and if necessary how to backtrack. Third, and most important, the tactic of stringing together fragmented packages discourages integrated interpretation. Crudely put, as the interpretation progresses, putting things into reverse is always more of a hassle than continuing to move forward. It takes an iron will to accept that anomalous production data may mean going back to the beginning and rethinking the basic reservoir description.
The answer to all these problems is a fully integrated package that performs most, if not all, of the functionality listed above for XYZ Oil Company, a package that is easy to load with data and that keeps the data together in a unified format, a package that tracks the progress of the interpretation, a package that is easy to use, and, finally, a package that in its construction draws geologist, geophysicist, petrophysicist and reservoir engineer into close and constant interaction. The quest for this ideal has not been easy. One solution, GeoQuest's RM Reservoir Modeling package, has proved successful worldwide, especially in South America and Asia (above).


[Flowchart: Module functionality and data flow in GeoQuest's RM package. Data loading and quality control bring seismic, log, petrophysical, geologic and production data into the system. Well correlation picks geologic tops; borehole and surface seismic are matched, and seismic horizons are tied to geologic tops; impedance inversion, attribute mapping and velocity mapping yield depth horizons, seismic attributes and reservoir parameters; seismic-guided log property mapping and petrophysics averaging feed geologic modeling and 3D reservoir model building; the reservoir model then supports material balance and simulation.]



The RM package presents geophysics, geology, petrophysics and reservoir engineering on one workstation, deals with a single data base that is loaded with all relevant data, and allows multiple interpretations and backtracking as the reservoir model is firmed up. Its functionality is leading edge in some areas, and standard in others. Its main asset, however, is that it permits an integrated approach. Let us peruse its functionality, drawing on case studies from a new offshore producing field of Conoco Indonesia, Inc., a middle-aged field in the North Sea, and Pertamina Sumbagut's producing field, Parapen, in North Sumatra.2
Although the RM package is highly modular (there are over 20 separate modules), it basically provides tools to perform these key functions (previous page):
• Loading of all available data (seismic, log, geologic and production data) into a common data base. No mean feat this, as we describe below;
• User-defined display of data from up to four wells at a time, enabling quality-control checks and picking of geologic tops;
• Postprocessing of seismic data to improve the match to well data, and also to provide acoustic impedance sections and attribute maps;
• Combining of seismic and well data to make the best correlation between wells and create the building blocks of a stratified reservoir model;
• Estimating reservoir engineering parameters in every layer anywhere in the model, using powerful interpolation algorithms;
• Constructing the optimum reservoir model for reserve estimation, using a model builder that integrates previously obtained model building blocks and reservoir engineering parameters;
• Performing sophisticated material balance analysis and preparing for simulation using Intera's Eclipse simulator package.
Any of these functions can be combined to answer specific reservoir problems. In every function, the RM package is equipped to handle and include data from deviated and horizontal wells. The latest version of the RM system, currently in testing, helps predict average properties of producing layers in a reservoir given any hypothetical well trajectory.


Quality control for well data using the well summary module. Stratigraphy information (left) is manually input by the user. This is combined with log data and drilling and geologic input (right), including core results, fossil indications, hydrocarbon shows and geologic top information. All data can be output to hard copy at any scale. (From the North Sea case study.)

Data Loading and Quality Control

Data loading is frequently underestimated in both scope and importance. Since the RM package obviates any need for messy data translation between incompatible software modules, it is to be expected that the initial loading requires time and care. The array of data in the oilfield is large. First, there are geographic data pertinent to the field: position coordinates, lease lines, coastlines and other landmarks. Then, there are huge processed 3D seismic data sets that can be imported in either SEG-Y format or from seismic interpretation workstations using a Geoshare link (see "Geophysical Interpretation: From Bits and Bytes to the Big Picture," page 23). Interpreted seismic horizons and seismic attributes can be imported through American Standard Code for Information Interchange (ASCII) loading or, again, using a Geoshare link. Well data can be imported from GeoQuest's Finder, LogDB and multiwell (MWDB) data bases. Alternatively, log data can be imported from Log Information Standard (LIS) files or just ASCII files. Log data should include, where available, production logs and wireline testing results. Petrophysical interpretations can be input from ASCII, LIS files or proprietary Schlumberger formats. Facies information, geologic tops and surfaces and core data, all of this generally in ASCII format, can also be input to the data base. Finally, well test results are input manually.
For a field of 20 wells, data loading may take about two weeks. Two more weeks may then be required to perform quality control on the entire data set. This may be the most important stage of reservoir management. Data quality is crucial to all subsequent interpretation, and the difficulty in achieving it sometimes comes as a rude shock. The problem is that most data by themselves look fine. It is only when they are brought into juxtaposition with other data types that inconsistencies, glaring or subtle, become obvious. A few examples: Seismic time is standardized to two-way travel time, but some data may be referenced to one-way travel time and must be corrected. Well depths can be referenced to different surface datums. True vertical depth may be confused with measured depth. The polarity convention on surface seismic and borehole seismic data may be opposite to each other, requiring a flipping of one or the other. In fact, mistakes may emerge almost anywhere, and errors are spotted only by review of all data together.
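These normalizations are simple arithmetic once spotted. A minimal sketch, with hypothetical datum values and function names that are not part of the RM package:

```python
# Sketch of routine QC corrections described above; all names and
# datum values are hypothetical.

def to_two_way_time(t_ms, is_one_way):
    """Standardize a travel time (milliseconds) to two-way time."""
    return 2.0 * t_ms if is_one_way else t_ms

def to_common_datum(depth_m, surface_datum_m, common_datum_m=0.0):
    """Shift a depth referenced to a well's surface datum onto a
    field-wide common datum (e.g., mean sea level)."""
    return depth_m + (surface_datum_m - common_datum_m)

def flip_polarity(trace):
    """Reverse seismic polarity when two surveys use opposite conventions."""
    return [-s for s in trace]

print(to_two_way_time(750.0, is_one_way=True))        # 1500.0 ms
print(to_common_datum(2000.0, surface_datum_m=25.0))  # 2025.0 m
print(flip_polarity([1.0, -0.5, 0.25]))
```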
One quality-control tool in the RM package is the well summary module, in which all available well data for a single well are displayed versus depth and can therefore be readily cross-checked (above). Data may include a stratigraphic column, a lithologic column, any amount of manually input geologic information such as core and fossil descriptions, all wireline logging data and images including testing data and production logs, and petrophysical interpretations.

Once the data are loaded, data access is facilitated by reference on the workstation screen to a base map of the field. This shows geographical features and a plan view of the trajectory of the wells. As the interpretation ensues, this base map illustrates progressively more of the interpreted features, for example the exact location of geologic tops along a deviated well trajectory or a map (right).

Base map display showing deviated well trajectories from a single platform (wells have been renamed after minerals) and position of a reservoir unit (the Oslo) on the trajectories, with unit thickness indicated. (From the North Sea case study.)

2. See also an RM case study described in:
Ramli R, Nugroho SB, Bradfield J and Hansen S: "Reservoir Modeling in the Bunyu Tapa Gas Field: an Integrated Study," Proceedings, Indonesian Petroleum Association 22nd Annual Convention, October 1993.

Postprocessing Seismic Data

The interpretation path obviously depends on the data available. For two of the three fields considered here, there were excellent 3D seismic data. And in all three fields, there was at least one well with a borehole seismic survey. The first goal in working with seismic data is to ensure that the borehole seismic and the surface seismic at the borehole trajectory look as similar as possible. If that is achieved, then the surface seismic can be tightly linked to events at the borehole and subsequently used to correlate structure and evaluate properties between wells. If no borehole seismic data are available, an alternative is to use synthetic seismograms, computed from acoustic and density logs.
Differences in seismic data arise because of difficulties in achieving a zero-phase response, a preferred format for displaying seismic results in which each peak on the trace corresponds exactly to an acoustic impedance contrast and, by inference, a geologic interface. Processing seismic data to obtain a zero-phase response depends on accurately assessing the signature of the acoustic source. This is straightforward in borehole seismics because the measured signal can be split into its downgoing and upgoing components, and the former yields the source signature. In surface seismics, the downgoing field is unmeasured and statistical techniques must be used to assess the signature, leading to less reliable results.

In Conoco Indonesia, Inc.'s field, the surface seismic and borehole seismic data initially matched poorly (next page). With the residual processing module, the mismatch is resolved by comparing the frequency spectra of the two data sets and designing a filter to pull the surface seismic data into line with the borehole seismic data.3 In this case, the postmatch alignment is excellent. However, if the alignment resulting from this treatment remains poor, it may prove necessary to vary the design of the filter versus two-way time. This is achieved by constructing filters for several specific time intervals along the well trajectory and then interpolating between them to obtain a representative filter at any two-way time.

Acoustic impedance section, obtained using the inversion module, with superimposed well log acoustic impedance. Tools needed for the job include the low-frequency trend of acoustic impedance derived from well logs (left track, top right), and frequency spectra of both seismic and log data. (From the Conoco Indonesia, Inc. case study.)

3. Gir R, Pajot D and Des Ligneris S: "VSP Guided Reprocessing and Inversion of Surface Seismic Data," paper OSEA 88105, presented at the 7th Offshore South East Asia Conference, Singapore, February 2-5, 1988.
Schultz P, Kordula J, Lawyer L, Metrailer F, Nestvold W, Raikes S and Nguyen T: "Integrating Borehole and Seismic Data," Oilfield Review 3, no. 3 (July 1991): 36-45.
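The idea of designing a filter from the two spectra can be illustrated with a toy example. This is a generic stabilized spectral division on synthetic traces, not the residual processing module's actual filter design, which adds smoothing, band limits and the time-variant filtering described above:

```python
import numpy as np

# Synthetic traces: the surface trace is a delayed, scaled copy of the
# borehole (reference) trace.
rng = np.random.default_rng(0)
borehole = rng.standard_normal(256)
surface = 0.5 * np.roll(borehole, 4)

# Compare spectra and design a matching filter by stabilized
# spectral division: F = B * conj(S) / (|S|^2 + eps).
B = np.fft.rfft(borehole)
S = np.fft.rfft(surface)
eps = 1e-6 * np.max(np.abs(S)) ** 2   # stabilizes division near zeros
match_filter = B * np.conj(S) / (np.abs(S) ** 2 + eps)

# Applying the filter pulls the surface trace into line with the
# borehole trace.
matched = np.fft.irfft(match_filter * S, n=len(surface))
print(np.allclose(matched, borehole, atol=1e-2))
```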
The next step is to perform a seismic
inversion on the matched seismic data,
using the inversion module. This process
converts the matched seismic data to acoustic impedance, defined as the product of
rock density and acoustic velocity. Acoustic
impedance can be used to classify lithology
and fluid type. Mapped across a section in
two dimensions or throughout space in three
dimensions, acoustic impedance provides a
valuable stratigraphic correlation tool. For
Conoco Indonesia, Inc., inversion provided
valuable insight into the lateral extent of the
reservoir (previous page, bottom).
The inversion computation requires the
full spectrum of acoustic frequencies. Very
low frequencies are missing from the seismic
record, so these are estimated interactively
from acoustic well logs. Between wells, this
low-frequency information is interpolated,
and for a 3D inversion, the information must
be mapped everywhere in reservoir space.
Inversion results can be output from the RM
package to a seismic interpretation workstation for detailed horizon picking.
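One way to see why the low-frequency information is indispensable: the classic recursive inversion recovers impedance layer by layer from reflection coefficients, but needs an absolute starting value, which only well data can supply. A sketch with made-up layer impedances, not the RM inversion module's algorithm:

```python
import numpy as np

def invert_to_impedance(z0, reflectivity):
    """Recursive inversion: given reflection coefficients r and an
    absolute starting impedance z0 (the low-frequency trend from well
    logs), recover impedance via Z2 = Z1 * (1 + r) / (1 - r)."""
    z = [z0]
    for r in reflectivity:
        z.append(z[-1] * (1.0 + r) / (1.0 - r))
    return np.array(z)

true_z = np.array([5500.0, 6200.0, 7100.0, 6400.0])  # made-up impedances
r = np.diff(true_z) / (true_z[1:] + true_z[:-1])     # forward model

recovered = invert_to_impedance(true_z[0], r)
print(np.allclose(recovered, true_z))  # round trip recovers the layers
```

Without z0, only relative impedance changes are recoverable, which is why the missing low frequencies are estimated from acoustic well logs and interpolated between wells.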
The residual processing and inversion
modules prepare the seismic data for the
next step, correlation.

Top: Crosscorrelation between borehole seismic and surface seismic data before and after matching, using the residual processing module. Matching the surface seismic required a time shift of 9 milliseconds and a phase rotation of 90 degrees. Note that after matching, the crosscorrelation function and its envelope are symmetric about zero time. Bottom: Before-and-after match comparison between surface and borehole seismic data. Note the excellent alignment between the two data sets after matching. (From the Conoco Indonesia, Inc. case study.)



Correlating Seismic and Well Data

Correlation is performed in several stages. The first is establishing geologic tops on each well using the detailed correlation module. With individual well data displayed for up to four wells simultaneously, the interpreter can correlate horizons from one well to the next, registering consistent geologic tops in every well across the field (below). All well data have the potential to aid in this process, with core information, petrophysical log interpretations, wireline testing results and production logs equally able to contribute to identifying significant geologic horizons.

Displays of up to four wells using the detailed correlation module for picking and correlating geologic tops across the field. Here, each example displays different types of data and speaks to a different specialist: to the geophysicist (top left), to the geologist (top right), to the reservoir engineer (bottom). This diversity encourages integration of all available data and permits addressing different stages of a field's life. (From the North Sea case study.)
The next step signals the beginning of the merging of seismic and well data. In the WellTie module, the 3D seismic trace at a given well is displayed versus two-way time alongside all pertinent well data, which are already converted to time using borehole seismic or check-shot data (bottom left). The main purpose of this combined display is to tie events recognized on the seismic trace (seismic horizons) to the recently established geologic tops found from the well data. These ties, or links, between the two data types are crucial at several subsequent stages during the construction of the reservoir model. In addition, seismic markers found at this stage can be transferred to a seismic interpretation workstation for horizon tracking.

The first use of the tie, or link, between seismic and well data is in the velocity mapping module, which enables the 3D seismic record versus time to be converted to a record versus depth. This crucial step subsequently allows the seismic data to guide the mapping of geologic horizons between wells. In the Conoco Indonesia, Inc. case study, seismic depth maps including fault positions were already available from previous interpretation work and were imported directly into the RM package. When depth maps are not available, however, they can be obtained using the RM package, as follows:
A velocity map for each layer is first assessed from the stacking velocities used in the 3D seismic processing. These are average velocities to the depth in question and must be converted to interval velocities using Dix's formula.4 The interpreter then maps these velocities for a given horizon, using one of four available algorithms including the sophisticated kriging technique, and reviews their appearance in plan view.5 Gradual changes in velocity are normal, but anomalies such as bull's-eye effects (isolated highs or lows) that are geologically unacceptable can be edited out.

Next, values of velocity at the intersections of horizons with wells are compared with velocity values obtained from acoustic log or borehole seismic data. The differences, determined in all wells, are also mapped and then used to correct the original velocity map. Finally, the corrected velocity map is used to convert the 3D seismic record to depth. To check the result, structural dip azimuth as estimated from dipmeter logs can be superimposed on the resulting map; structural dip azimuth should follow the line of greatest slope as indicated by the map (above, right).

Display from the WellTie module, in which seismic horizons are tied to geologic tops. (From the Conoco Indonesia, Inc. case study.)

4. For a review of Dix's formula:
Sheriff RE and Geldart LP: Exploration Seismology, Volume 2: Data-Processing and Interpretation. Cambridge, England: Cambridge University Press, 1987.
5. For an explanation of kriging:
Isaaks EH and Srivastava RM: An Introduction to Applied Geostatistics. Oxford, England: Oxford University Press, 1989.
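Dix's formula itself is compact: the interval velocity between two reflectors follows from the RMS (stacking) velocities and two-way times at each. A sketch with hypothetical values:

```python
import math

# Dix's formula: interval velocity between two reflectors from RMS
# velocities v1, v2 at two-way times t1, t2 (t2 > t1). Values below
# are hypothetical.
def dix_interval_velocity(v_rms1, t1, v_rms2, t2):
    return math.sqrt((v_rms2**2 * t2 - v_rms1**2 * t1) / (t2 - t1))

# RMS velocity 2000 m/s at 1.0 s and 2200 m/s at 1.5 s two-way time:
v_int = dix_interval_velocity(2000.0, 1.0, 2200.0, 1.5)
print(round(v_int, 1))  # interval velocity in m/s
```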
With seismic data converted to depth, the interpreter can begin building a stratified model of the reservoir using the correlation module. First, seismic data acting as a guide allow geologic tops in one well to be firmly correlated with tops in adjacent wells (right). This display may be further enhanced by superimposing dipmeter stick plots and other forms of dipmeter interpretation along the well trajectories. Another display that shifts data to an arbitrary datum, generally an already correlated horizon, provides a stratigraphic perspective (below). Second, each geologic correlation is allocated descriptors that determine how it relates geometrically to its neighbors above and below. These descriptors are later used to build up the actual reservoir model. Third, all available information about reservoir compartmentalization (for example, saturation interpretations from well logs and wireline testing results) is used to identify flow barriers, such as sealing faults, so the reservoir can be divided into a set of isolated volumes called tanks, essential for correctly estimating reserves.
Sometimes, the interpreter may want to manually dictate the geometry of a horizon or other feature (such as a fault, bar or channel) rather than let it be guided by established horizons on the 3D seismic data. This can be accomplished using the section modeling module, which offers an array of graphic tools to create and edit elements of the reservoir model in vertical section (left). The same elements must be created in every other vertical section containing wells and then sampled on a regular grid of points in preparation for model building. This labor-intensive manual creation of a reservoir model becomes mandatory when there are no seismic data or only sparse 2D data.

One source of data that may contribute to the definition of tanks and faults is the well test. Well tests give an approximation of tank size and, in particular, provide distance estimates from the well to sealing faults. Azimuth to the fault, however, is undetermined. In the Geopulse module, the RM system permits viewing well test results in a plan view and includes the ability to rotate a sealing fault to see if it can be aligned with faults already established from seismic interpretation or by using the section modeling module (below).

Structural dip azimuth plots (white) superimposed on a map derived from velocity mapping and time-to-depth conversion. In this example, the trends of the plots do not follow lines of greatest slope indicated on the map. Some interactive editing of the map may therefore be required. (From the North Sea case study.)

Two displays available in the correlation module. Top: A structural view shown in true depth with each well's ELAN Elemental Log Analysis petrophysical interpretation superimposed on the depth-converted seismic section. This provides a key visual comparison between seismic and well data. A small map (upper right) shows the zig-zag course of the displayed section. Bottom: A stratigraphic view referenced in depth to a specific marker, used to check geologic tops from well to well. (From the Conoco Indonesia, Inc. case study.)

Building a geologic model manually using the section modeling module. A wide variety of colors and patterns are available to visualize the construction. (From the North Sea case study.)

Viewing well test results on a reservoir base map. A well test indicates a sealing fault at a certain distance from a well, but cannot indicate its azimuth (left). Using the Geopulse module, interpreters could rotate the sealing fault until it coincided with a fault already established in the model (right). (From the North Sea case study.)



Obtaining Reservoir Engineering Parameters in Each Layer

Once the reservoir geometry has been defined, if not actually computed, one step remains before synthesizing the complete reservoir model. This is the estimation of key reservoir engineering parameters in each defined interval across the areal extent of the reservoir. Key parameters are net thickness, porosity, oil, gas and water saturations, and horizontal and vertical permeabilities. The computation proceeds in two stages.

First, in each well, the parameters must be averaged for each interval from the petrophysical interpretations. This is performed in the component property module and relies on careful selection of cutoffs to exclude sections of formation that do not contribute to fluid movement (below). Choice of cutoffs is made with the help of sensitivity plots showing how the averaged parameter varies with cutoff value, and preferably in a well with well-test data to validate the cutoff choices. The effect of adjusting cutoffs can be observed in real time on the workstation and for all wells simultaneously.

■ Principle of seismic-guided log property mapping. A correlation between seismic attributes at the well intersections and the logging parameter must first be established. The correlation is then used to map the logging parameter throughout the seismic cube.

■ Choosing cutoffs for the averaging of interval parameters, as displayed in the component property module. Top: tables of averaged parameters for various intervals (called after major cities) and wells (called after gemstones) for a particular cutoff selection: red means the average value is fixed, blue means it can be edited. Bottom left: optimization plot to help select cutoffs. Bottom right: cutoff sensitivity plots showing percentage net thickness versus chosen cutoff parameters. (From the North Sea case study.)

July 1994
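The cutoff logic just described (exclude samples that fail porosity or clay-volume thresholds, average what remains, and examine how the result varies with the cutoff) can be sketched in a few lines. This is illustrative only, not the component property module's algorithm; the data, thresholds and function names are hypothetical.

```python
# Illustrative cutoff-based interval averaging (hypothetical data and
# thresholds, in the spirit of the component property module).

def average_interval(samples, phi_cut=0.08, vclay_cut=0.40):
    """Average porosity over the net samples that pass both cutoffs.

    samples: list of (porosity, vclay) pairs, one per depth increment.
    Returns (net_to_gross, net_average_porosity).
    """
    net = [(phi, vcl) for phi, vcl in samples
           if phi >= phi_cut and vcl <= vclay_cut]
    if not net:
        return 0.0, 0.0
    n2g = len(net) / len(samples)
    avg_phi = sum(phi for phi, _ in net) / len(net)
    return n2g, avg_phi

# Sensitivity of the averages to the porosity cutoff, as in the
# cutoff sensitivity plots described above:
data = [(0.22, 0.10), (0.05, 0.70), (0.18, 0.25), (0.12, 0.55), (0.25, 0.05)]
for cut in (0.05, 0.10, 0.15, 0.20):
    n2g, phi = average_interval(data, phi_cut=cut)
    print(f"phi cutoff {cut:.2f}: net-to-gross {n2g:.2f}, avg phi {phi:.3f}")
```

In practice the cutoff values would be anchored to a well with well-test data, as the article notes, rather than chosen from the sensitivity plot alone.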
Second, the averaged parameters for each interval must be gridded or mapped across the reservoir. In the log property mapping module, the RM package brings into play powerful algorithms that use seismic data to guide the mapping. The key to the method is establishing a relationship at the wells between some attribute of the seismic data and a combination of the averaged well parameters, and then using the relationship to interpolate the averaged parameter everywhere in the reservoir (left). The seismic attribute could be amplitude, or acoustic impedance calculated earlier using the inversion module, or one of several attributes that are routinely calculated on seismic interpretation workstations and then imported to the RM system, or simply depth (saturation, for example, is often related to depth).

The relationship may be linear, in which the combination of averaged parameters is defined as a simple weighted sum of seismic attributes, or nonlinear, in which an elaborate neural network approach juggles several linear relationships at the same time, picking the best one for given input.6 Linear relationships easily handle smooth dependencies such as that between acoustic impedance and porosity. The nonlinear approach is required for averaged parameters, such as saturations, that may vary abruptly across a field.
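A minimal sketch of the linear case: fit a weighted sum of seismic attributes to the averaged well values by least squares, then apply the fitted weights everywhere in the cube. The attributes, weights and data below are synthetic, and this is a sketch of the general technique, not the RM module's implementation.

```python
import numpy as np

# Hypothetical calibration: at each well tie, seismic attributes (say
# amplitude, acoustic impedance, depth) are paired with an averaged log
# parameter (here, porosity). A linear relationship is fit at the wells
# and then applied at every trace location.

rng = np.random.default_rng(0)
n_wells = 8
attrs = rng.uniform(size=(n_wells, 3))        # attributes at the well ties
true_w = np.array([0.15, -0.05, 0.02])        # synthetic "true" weights
porosity = attrs @ true_w + 0.10              # averaged well porosity

# Fit weights and intercept by least squares.
A = np.hstack([attrs, np.ones((n_wells, 1))])
coef, *_ = np.linalg.lstsq(A, porosity, rcond=None)

# Apply the calibration throughout the "cube" (here, 1000 trace locations).
cube_attrs = rng.uniform(size=(1000, 3))
mapped = np.hstack([cube_attrs, np.ones((1000, 1))]) @ coef
print(coef)
```

With noise-free synthetic data the fit recovers the weights exactly; with real data the quality of the correlation at the wells, examined before mapping, decides whether the relationship is usable.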
In practice, the log property mapping module guides the interpreter through the essential stages: choosing the interval to map, comparing seismic data at the well intersections with the averaged well data, establishing relationships that show a good degree of correlation and then proceeding with the mapping. The advantage of log property mapping over conventional mapping was demonstrated in both the Conoco Indonesia, Inc. and Pertamina Sumbagut case studies (below). Research continues into finding ways of using all available data to assist the mapping of log data across the reservoir (see "Inversion for Reservoir Characterization," page 62).

6. Schultz PS, Ronen S, Hattori M and Corbett C: "Seismic-Guided Estimation of Log Properties. Part 1: A Data-Driven Interpretation Methodology," The Leading Edge 13, no. 5 (May 1994): 305-310, 315.
Ronen S, Schultz PS, Hattori M and Corbett C: "Seismic-Guided Estimation of Log Properties. Part 2: Using Artificial Neural Networks for Nonlinear Attribute Calibration," The Leading Edge 13, no. 6 (June 1994): 674-678.
Schultz PS, Ronen S, Hattori M, Mantran P and Corbett C: "Seismic-Guided Estimation of Log Properties. Part 3: A Controlled Study," The Leading Edge 13, no. 7 (July 1994): 770-776.

Building the Reservoir Model and Estimating Reserves

The stage is set for the RM package's most powerful functionality: the ModelBuilder module. This module fully characterizes the reservoir by integrating the geometric interpretation established with the correlation and section modeling modules, including definitions of reservoir tanks and fluid levels, with the reservoir engineering parameters established using the component property and log property mapping modules. The main task is constructing the exact shape of the reservoir layers. This is


■ Seismic-guided log property mapping results. For Conoco Indonesia, Inc., seismic-derived acoustic impedance averaged over 60 milliseconds was used to map averaged clay percentage, Vclay (top). A map of clay distribution for one of the reservoir intervals derived from well data alone (bottom) is enhanced when seismic data are used as a guide (middle). The latter clearly shows clean sands to the right (yellow and orange) and dirty sands to the left (blue) for that interval. The reservoir is bounded on the south by a major thrust fault.


■ For Pertamina Sumbagut, averaged seismic amplitude for each interval was used to map net-to-gross ratio. The map derived from logs alone (right) is clearly inferior to the seismic-guided map. The latter shows a channel (blue) and tidal inlets that are consistent with the fault.




achieved by starting at a bottom reference horizon and building up younger layers according to their assigned descriptors, mimicking the actual processes of deposition and erosion (right). For example, if a layer top has been defined as sequential and conformable, it will be constructed roughly parallel to the layer's bottom horizon. If a reference horizon has been described as an unconformity, then underlying layers can approach it at any angle, while layers above can be constrained to track roughly parallel.
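The bottom-up construction can be illustrated with a toy one-dimensional cross section. The descriptors and geometry below are invented for illustration and do not reproduce the ModelBuilder algorithm; depths increase downward.

```python
# Toy bottom-up layer stacking: conformable tops track the horizon
# beneath them; an unconformity truncates any top that would rise
# above it (illustrative only, not the ModelBuilder implementation).

def build_layers(bottom, thicknesses, unconformity=None):
    """Stack layer tops conformably above a reference bottom horizon.

    bottom: depths (m) of the reference horizon along the section.
    thicknesses: one thickness per layer, youngest last.
    unconformity: optional erosional surface; a conformable top that
    would rise above it is truncated against it.
    """
    horizons = [list(bottom)]
    for t in thicknesses:
        prev = horizons[-1]
        top = [d - t for d in prev]           # conformable: parallel to base
        if unconformity is not None:          # truncate below unconformity
            top = [max(d, u) for d, u in zip(top, unconformity)]
        horizons.append(top)
    return horizons

base = [1000, 1010, 1020, 1010]
surfaces = build_layers(base, [15, 15], unconformity=[985, 985, 985, 985])
# The second top is clipped wherever it would cut above the 985 m surface.
print(surfaces[1], surfaces[2])
```

Real descriptors also cover the special shapes mentioned in the figure (channels, bars) and conformability with respect to horizons above, which this sketch omits.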
The areal bounds on layers are determined within the ModelBuilder module by several factors. First, specific geometries can be imported. Second, areal bounds may be implied through the geometries created with the section modeling module. Third, the contours of petrophysical parameters established during log property mapping can establish areal limits. Fourth, thickness maps of layers can be interactively created and edited prior to model building.

The key dividend of model building is the establishment of reserve estimates for each tank. Oil in place, total pore volume, net-pay pore volume, water volumes, reservoir bulk volume, net-pay area and net-pay bulk thickness are some of the parameters that can be calculated and tabulated on the workstation. Conoco Indonesia Inc.'s estimates using the RM package were in close agreement with standard calculation procedures. During appraisal, when the oil company decides whether to proceed to development, establishing reserve estimates is crucial. As a result, the many steps leading to this moment will be reexamined and almost certainly rerun to assess different assumptions about the reservoir (right).
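The tabulated reserve figures rest on standard volumetric relations, which can be sketched as follows. This is the textbook calculation, not the RM package's own computation, and every input value is hypothetical (field units: acres, feet, barrels).

```python
# Standard volumetric oil-in-place relation (hypothetical inputs).

def oil_in_place(area_acres, net_pay_ft, porosity, sw, bo):
    """Stock-tank oil initially in place, in stock-tank barrels.

    7758 is the number of barrels in one acre-foot.
    """
    pore_volume = 7758.0 * area_acres * net_pay_ft * porosity  # res bbl
    hydrocarbon_pv = pore_volume * (1.0 - sw)                  # oil-filled
    return hydrocarbon_pv / bo                                 # shrink to STB

stoiip = oil_in_place(area_acres=640, net_pay_ft=50, porosity=0.22,
                      sw=0.30, bo=1.25)
print(f"STOIIP ~ {stoiip / 1e6:.1f} million STB")
```

The per-tank figures the article lists (net-pay pore volume, net-pay bulk thickness and so on) are the intermediate quantities of exactly this chain, evaluated cell by cell rather than from single averages.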
A benefit of the RM package is that rerunning interpretations and testing new assumptions are not only straightforward but automatically monitored by the system's version manager. The manager keeps track of concurrent or parallel interpretation paths. The initial version is created when data are loaded, and becomes the top member of a

■ Principle behind the ModelBuilder module. Shapes are assigned to layers with reference to mapped horizons above or below. Then, the model is built up from the bottom following simple rules of deposition and erosion. (Layer descriptors illustrated include sequential conformable; unconformity, truncated below and conformable above; special channel; and special bar.)

■ One result of model building: porosity mapped according to a seismic surface versus depth. This 3D perspective was obtained by transferring the ModelBuilder results to a seismic workstation and viewing them with GeoViz software.


Inversion for Reservoir Characterization

Alberto Malinverno
Ridgefield, Connecticut, USA

Fundamental to reservoir characterization is assigning physical property values everywhere within the reservoir volume. The challenge of using all available data to choose the best assignment is being addressed by a group of scientists at Schlumberger-Doll Research in Ridgefield, Connecticut, USA. Available data could include seismic data, log data, well test results, knowledge of the statistical distributions of the sizes and orientations of sedimentary bodies, and even specific information about reservoir geometry.

To incorporate all these diverse sources of information, the scientists use an inversion method that begins by considering all possible assignments.1 Each assignment is represented by a single point in a multidimensional space that has as many dimensions as there are cells in the reservoir model. In assigning acoustic impedance in a reservoir model comprising 10 × 10 × 10 discrete cells, for example, each assignment would be represented by a unique point in a 1000-dimensional space.

The available data are then used to determine which of these points are acceptable. This is achieved by representing each available data set (3D surface seismic data, well data, or whatever) by a cloud of points corresponding to assignments that fit that particular data set. Finding an acceptable assignment then reduces to finding a point that lies at the intersection of all such clouds of points.

As the solution is always nonunique (more than one assignment satisfies all the available information), this intersection set will not be a single point but will have some volume in the multidimensional space. A procedure to choose a single, best assignment is therefore required. The current method starts with an initial guess and then modifies it as little as possible until the intersection set is reached.

A synthetic example illustrates the method. First, a reservoir model is constructed using 21 × 21 horizontal cells and 201 vertical cells, with an acoustic impedance value assigned to each cell (right). This synthetic model is equivalent to a volume of about 1 km × 1 km [0.6 miles × 0.6 miles] horizontally and 100 milliseconds (about 200 m) vertically. From this are generated two data sets that would be measured if the reservoir were real: first, a log of acoustic impedance in a well through the center of the model; second, the surface seismic response, which displays a lower spatial resolution than the original model.

■ Left: Reservoir model comprising 21 × 21 horizontal × 201 vertical cells, each assigned a value for acoustic impedance, scaled from blue to red. Middle: Simulated acoustic impedance log (top) and 3D surface seismic data obtained in the reservoir model. Right: Simple extrapolation of log data only (top), and inversion using both log and surface seismic data.

The challenge is to reconstruct the original acoustic impedance model using the log and



tree whose branches represent later interpretations. The tree is displayed on the workstation screen and work can be started at any branch with a double-click of the mouse (below). Several people can work on the data set simultaneously; the original data are never corrupted or lost; and the history of the interpretation is automatically recorded.

Say, for example, a geologist is working on correlating logs and creating geologic tops, while the geophysicist is preparing an inversion to obtain acoustic impedance. If both want to work concurrently, the version manager simply grows two branches. At the end of the day, both branches are saved and both specialists can pick up where they left off in the morning by double-clicking with the mouse on their respective branches.

Similarly, a reservoir engineer may wish to try several scenarios for mapping the distribution of porosity within a layer, say by mapping well log values only and alternatively by using seismics to guide the mapping with the log property mapping module. Two versions can be made in parallel, with a branch for each scenario. Several further steps along each interpretation path may be necessary before it becomes clear which mapping technique is better. The final interpretation proceeds from the end of the successful branch.

Material Balance Analysis and Preparation for Simulation

For reservoir managers striving to improve the performance of developed fields (for example, investigating placement of new wells or reconfiguring existing producers and injectors to improve drainage) the RM package has two more modules to offer. One provides a sophisticated material balance analysis that assesses whether the established reservoir model is compatible with historical production data. The second converts the reservoir model into a format suitable for simulating reservoir behavior and predicting future production.7

Material balance analysis is performed using the Fortress Formation Reservoir Test System module. In traditional material balance analysis, reservoir volume is estimated by noting how reservoir pressure decreases as fluids are produced. The more fluids produced, the greater the expected pressure decrease. Exactly how much depends on the compressibility of the fluids, which can be determined experimentally from downhole samples through pressure-volume-temperature (PVT) analysis, the compressibility

7. For a simulation case study, see:
Bunn G, Minh CC, Roestenburg J and Wittmann M: "Indonesia's Jene Field: A Reservoir Simulation Case Study," Oilfield Review 1, no. 2 (July 1989): 4-14.

seismic data only. A reasonable starting model can be obtained from a simple extrapolation of the well log data. This clearly fails to reproduce structural variations away from the well that appear in the original model. However, modifying this first guess using, in addition, the surface seismic data produces a reconstruction that is much closer.

Scientists at Schlumberger-Doll Research anticipate that this method will adapt readily to a wide variety of input data and provide a much sought-after generalized approach to constrain the assignment process.

1. Han S-P: "A Successive Projection Method," Mathematical Programming 40 (1988): 1-14.
Combettes PL: "Signal Recovery by Best Feasible Approximation," IEEE Transactions on Image Processing 2 (1993): 269-271.
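The successive projection idea cited in reference 1 can be illustrated with a toy two-dimensional problem in which each "data set" is a single linear constraint: starting from an initial guess, the iterate is projected onto each constraint set in turn until it lands in their intersection. This sketch is purely illustrative and is not the Schlumberger-Doll implementation.

```python
import numpy as np

# Toy successive (alternating) projections: each data set defines a
# convex set of acceptable models; here the "model" is a 2D point and
# each set is a line a.x = b standing in for one linear constraint.

def project_onto_line(x, a, b):
    """Orthogonal projection of x onto the hyperplane a . x = b."""
    a = np.asarray(a, float)
    return x - (a @ x - b) / (a @ a) * a

def successive_projections(x0, constraints, n_iter=200):
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        for a, b in constraints:
            x = project_onto_line(x, a, b)   # modify as little as possible
    return x

# Two "data sets": x + y = 2 and x - y = 0; their intersection is (1, 1).
constraints = [((1.0, 1.0), 2.0), ((1.0, -1.0), 0.0)]
x = successive_projections([5.0, -2.0], constraints)
print(x)
```

Each projection is the smallest modification that satisfies one constraint, which is exactly the "modifies it as little as possible" behavior the sidebar describes; with many high-dimensional, possibly nonlinear constraint sets the real method is far more involved.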

■ A view of the RM system's version manager, showing the tree structure of a project's progress. After step 61, "Seismic Guided LPM of SW," four activities were simultaneously launched. Each is preserved and can be developed independently. (From the North Sea case study.)



of the rock, which can be determined from core samples in the lab, and, of course, reservoir volume. Faster declines in pressure than expected from such an analysis might indicate a smaller reservoir than first thought. Slower declines might indicate a high-volume aquifer driving production or, more rarely, connected and as yet undiscovered extensions to the reservoir. This traditional analysis of reservoir size and drive mechanism requires no a priori knowledge of reservoir geometry, only production, pressure and PVT data.
The Fortress module uses these basic principles of material balance, but applies them within the geometrically defined reservoir tanks of the established reservoir model.8 This allows not only verification of tank volumes, but also estimation of fluid communication between tanks (right). Communication between tanks could be due to an intervening low-permeability bed or a fault being only partially sealing. Another result is the prediction of how fluid contacts are moving. In a sense, the Fortress module allows the interpreter to ensure that the reservoir model established by the RM package is compatible with known production, pressure and PVT data obtained in the field. This is a minimum requirement before embarking on the more laborious and expensive business of a reservoir simulation.
The last RM module is the MeshBuilder module, which prepares the reservoir model for input to a simulator (right). This proceeds in two steps. First, an areal mesh is constructed covering the part of the reservoir to be simulated. The mesh's perimeter may be irregular to honor the typically irregular bounds of a reservoir. Second, this mesh is pushed down through the layers of the reservoir, like a cookie cutter, to create the thousands of small cells required by the simulator. Each cell is characterized by the reservoir engineering parameters established during model building. The output of the MeshBuilder module can be input directly to Intera's Eclipse simulator.
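The cookie-cutter step can be sketched as follows. The perimeter test, layer properties and function names are invented for illustration; this is the general idea, not the MeshBuilder implementation.

```python
# Toy "cookie cutter": an areal mesh, trimmed to an irregular reservoir
# outline, is pushed down through each layer so that every active cell
# carries its layer's properties (geometry and values are made up).

def build_cells(nx, ny, inside, layers):
    """Return a dict mapping (i, j, k) -> properties for active cells.

    inside(i, j): True if areal cell (i, j) lies within the perimeter.
    layers: one property dict per layer (k index), youngest first,
            e.g. {"porosity": 0.22, "kh_md": 150.0}.
    """
    cells = {}
    for i in range(nx):
        for j in range(ny):
            if not inside(i, j):
                continue                        # outside the perimeter
            for k, props in enumerate(layers):
                cells[(i, j, k)] = dict(props)  # same areal cell, each layer
    return cells

# Quarter-circle perimeter on a 10 x 10 areal mesh, two layers deep.
layers = [{"porosity": 0.22, "kh_md": 150.0},
          {"porosity": 0.15, "kh_md": 40.0}]
cells = build_cells(10, 10, lambda i, j: i * i + j * j < 100, layers)
print(len(cells), "active cells for the simulator")
```

A real mesh would also carry per-cell thicknesses and depths from the layered model rather than uniform per-layer values, but the trimming and vertical replication shown here is the essence of the cookie-cutter construction.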
In today's tough business climate, characterizing reservoirs is a key activity for oil companies, and its complexities demand the power of high-speed workstations. The RM package represents an option that favors the approach of integrated interpretation, in which each specialist provides continuous


■ Displays from the RM package's Fortress module, which uses material balance computations in each of the model's defined tanks to monitor fluid movements throughout the reservoir. Top right: an oil-water contact is shown moving from the solid purple contour to the dashed purple contour. (From the North Sea case study.)

■ Typical cell geometries for simulation. The cells are constructed by taking a grid and slicing it through the reservoir model like a cookie cutter.

input, and not, by tradition, just when it is his or her turn. Like most other workstation packages, the RM system is continuously evolving to improve its functionality and provide new leading-edge tools. The likely winner in this workstation game will combine integrated functionality with ease of use.
HE

8. Ehlig-Economides CA: "Application of Multiphase Compartmentalized Material Balance," paper SPE 27999, to be presented at the SPE/University of Tulsa Centennial Petroleum Engineering Conference, Tulsa, Oklahoma, USA, August 29-31, 1994.

