
OpendTect Workflows Documentation version 4.0
dGB Earth Sciences
Copyright 2002-2009 by dGB Beheer B.V.

All rights reserved. No part of this publication may be reproduced and/or published by print, photoprint, microfilm or any other means without the written consent of dGB Beheer B.V. Under the terms and conditions of any of the licenses (GNU GPL, Commercial agreement, Academic agreement), holders are permitted to make hardcopies for internal use.
Table of Contents
1. Preface
1.1. Basic manipulation
1.2. Start a New Project
2. Attributes
2.1. Evaluate attributes
2.2. Dip-steering - Background vs Detailed
2.3. Spectral Decomposition
3. Filters
3.1. Dip-steered diffusion
3.2. Dip-steered median filter
3.3. Fault enhancement filter
3.4. Ridge enhancement filter
3.5. Spectral Blueing
4. Inversion and Rock Property Prediction
4.1. Colored Inversion
4.2. MPSI Stochastic Inversion
4.3. Neural Network Rock Property Prediction
5. Object detection
5.1. Common Contour Binning
5.2. Chimney Cube
5.3. Fault Cube
5.4. Fingerprint
5.5. UVQ waveform segmentation
6. Sequence Stratigraphy
6.1. Chrono-stratigraphy
6.2. Wheeler Transformation
6.3. Stratal Slicing
6.4. Systems Tracts Interpretation
Chapter 1. Preface
Table of Contents
1.1 Basic manipulation
1.2 Start a New Project



This document describes various workflows in OpendTect and its plugins. For each workflow we describe the purpose, the software that is needed (OpendTect only, or commercial plugins), and how to perform it. The workflows are given as bullet lists describing the sequential steps to be taken. Where possible, links to the User doc and to the OpendTect and dGB websites provide further information. For the latter links to work, you need access to the Internet.
1.1. Basic manipulation
Purpose: To load (calculate), move and display seismic information.
Theory: OpendTect is a system for interactive data analysis. Seismic data is either retrieved from
files stored on disk or data is calculated on-the-fly. You only retrieve or calculate what is needed at
user-specified locations. All elements (inlines, crosslines, time-slices, random lines, sub-volumes, 2D
seismic, horizons, faults (v3.2+), picksets, wells, bodies, annotations) are controlled from the tree.
Only data that is present in the tree currently resides in memory. In the scene, there are two basic modes of operation: view mode is for rotating, zooming and panning; interact mode is for moving and resizing elements. Toggle between the modes by pressing the Esc key on the keyboard or by clicking the hand or arrow icon.
Software: OpendTect

Workflow:
1. To view a seismic inline from a 3D volume: Right-click on the Inline entry in the tree.
2. Right-click on the new element and Select Attribute -> Stored data -> your seismic file (to do
this you must already have imported seismic data into OpendTect).
3. To position the data, either: 1) right-click on the element -> Position, 2) fill in the line number in the position field (upper left corner of the UI), or 3) switch to interact mode (arrow icon) and use the green anchors to resize, left-click and drag to move, and shift+left-click and drag to pan. Use right-click to reset accidental moves.
4. To change the view switch to view mode (hand icon). To rotate: left click and drag; to zoom:
middle mouse wheel; to pan: middle mouse button click and drag.
Tips:
1. Mouse button settings can be changed under Utilities -> Settings -> Mouse controls.
2. Controlled zoom in view mode: press the S key and left-click in the scene on the position to zoom to.
For more info, see this Tutorial video:

Basic interaction (flash video)
1.2. Start a New Project
Purpose: Set up the structure for a new OpendTect project.
Theory: OpendTect stores data in flat files organized in directories and managed by OpendTect's file
management system. The root directory for projects must be created by OpendTect. Usually a root
directory is created at installation time. Alternatively, a new root directory can be created when you
define the new project. For each project OpendTect requires the survey boundaries and the
transformation between inline, crossline numbers and X,Y rectangular co-ordinates to be specified.
OpendTect projects can be set up for only 3D seismic, only 2D seismic and for 2D+3D seismic.
OpendTect does not require 2D lines to lie inside the survey definition boundaries.
Software: OpendTect

Workflow:
1. Press the Survey Selection icon and press New to create a new project.
2. Specify the survey directory name and the type of project (2D, 3D, or both 2D and 3D). Now specify the boundaries and the inline, crossline to X,Y transformation. This can be done in different ways: 1) scan a 3D seismic data set, 2) specify 3 corner points (2 on the same inline), or 3) specify the transformation.
3. Select the new Survey and leave the Survey selection window (press OK).
4. Import Seismic data (Survey - Import - Seismic - SEG-Y - 3D).
5. To view the imported seismic data: right-click on inline in the tree, right-click on the new
element in the tree, and Select Attribute - Stored Cubes - your seismic file.
Tips:
1. If you have a license for Workstation Access you can get the survey settings for a new
OpendTect project directly from a seismic volume in a Seisworks/OpenWorks or Geoframe-
IESX project. After setting up the project you then import seismic (horizons, wells) from these
data stores with Workstation Access.
For more info, see this Tutorial video:

Start new project (flash video)
Chapter 2. Attributes
Table of Contents
2.1 Evaluate attributes
2.2 Dip-steering - Background vs Detailed
2.3 Spectral Decomposition



Attribute analysis is one of the key functionalities in OpendTect. Attributes are used for two purposes:
1. Visualization: you create a new view of the data that may lead to new geologic insight.
2. Integration of information: combine attributes, e.g. using neural networks (neural networks
plugin needed).
To use attributes, you must define an attribute set (Processing menu - Attributes, or press the
Attributes icon). To start: either create your own attribute set, or select one of the default sets. It is
possible to calculate attributes from attributes and to use mathematics and logic (if .. then .. else ..) to create your own attributes. The sky is the limit! The attributes defined in this window are the recipes for the calculations. You can either use these recipes to create new attribute volumes (Processing menu - Create seismic output), or for quick on-the-fly calculations (right mouse click on the element in the tree). The system only calculates what is needed for creating a display. For optimal use of the system, limit on-the-fly calculation of attributes and evaluation of parameter settings to small areas (part of an inline, crossline, time-slice, 3D sub-volume, random line, or 2D line). Inline calculations are in general the fastest. Processing of large volumes takes time and is best done in batch mode, so that you can retrieve the stored data afterward.

Attributes are an integral part of OpendTect. If you also have the Dip-Steering plugin, you can improve multi-trace attributes by extracting the information along a virtual horizon. This is called dip-steering and is supported for 2D and 3D seismic. In addition, you get a number of extra attributes: dip, azimuth, and curvature.
2.1. Evaluate attributes
Purpose: Find the best parameter setting for a particular attribute.
Theory: Use visual inspection, common sense, experience, and seismic knowledge to evaluate
attributes that are appropriate for what you are studying.
Software: OpendTect

Workflow:
1. Add an element to the tree, resize and position it to where you want to do the evaluation.
2. Open the Attribute Set window.
3. Add an Attribute to the list (e.g. Energy).
4. Press the Evaluate Attribute icon.
5. Specify the parameter to evaluate (Energy only has one: Timegate).
6. Specify initial value, increment and number of slices (e.g. [0,0],[-4,4] and 10, results in 10
energy calculations with time-gates: [0,0], [-4,4], [-8,8] ...[-36,36]).
7. Press Calculate. Note that the calculation will be done on the active element in the tree!
8. Use the slider to movie-style inspect the results and select the one you like best.
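The gate series in step 6 grows linearly from the initial value; the bookkeeping can be sketched as follows (illustrative Python only, not an OpendTect API):

```python
# Sketch of the time-gate series generated in step 6 above, for an initial
# value of [0, 0], an increment of [-4, 4] and 10 slices.

def evaluate_gates(initial, increment, n_slices):
    """Return the list of [start, stop] time-gates (ms) to evaluate."""
    return [[initial[0] + i * increment[0], initial[1] + i * increment[1]]
            for i in range(n_slices)]

gates = evaluate_gates([0, 0], [-4, 4], 10)
print(gates[0], gates[1], gates[-1])  # [0, 0] [-4, 4] [-36, 36]
```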
Tips:
1. To reduce calculation times reduce the size of the element. Inlines are generally faster than
cross-lines, which are faster than time-slices.
2. When you add the element to the tree, add a second attribute to it. Put the seismic in the first container and use the second container for the attribute evaluation. You can then switch element 2 (the top one) on/off to compare the attribute response with the seismic display.
For more info, see this Tutorial video:

Evaluate attributes (flash video)
2.2. Dip-steering - Background vs Detailed
Purpose: Calculate multi-trace attributes on 2D or 3D seismic by extracting data along seismic
events, filter along seismic events, auto-track chrono-stratigraphic horizons (SSIS plugin), and to
compute attributes based on local dip-azimuth information only.
Theory: If we know, at each position in a 3D volume, the direction (dip-azimuth) of the seismic
events we can use this information to create a local horizon at each position in the cube simply by
following the dip and azimuth information from the center outwards. We call this process dip-
steering. It requires a pre-calculated steering cube and is used to compute better (dip-steered)
attributes and to filter the data. In addition, the steering cube can be used to create various attributes
that are purely based on dip-azimuth information, for example curvature attributes, but also the dip in the inline and crossline directions. Lastly, the dip-steering process is used to auto-track chrono-stratigraphic horizons in the SSIS plugin. We often use two versions of the steering cube: detailed and background. The detailed steering cube is computed with the default settings and is primarily used for attribute calculations. The background steering cube is a heavily smoothed (e.g. median filtered 4x4x4) version of the detailed steering cube. Its main use is in dip-steered filtering of seismic data.
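The core idea of the theory above, building a local horizon by following the dip outwards from the centre, can be sketched in a few lines. The array layout and the trace-to-trace dip convention are assumptions for illustration; the real Steering Cube works in 3D and interpolates between samples:

```python
import numpy as np

def steer_horizon(dip, center_trace, center_z, half_width):
    """Follow the local dip outwards to build a virtual (local) horizon.

    dip[i] is the time shift (in samples) from trace i to trace i + 1,
    as would be read from a steering cube along one inline.
    """
    z = {center_trace: float(center_z)}
    for i in range(center_trace, center_trace + half_width):      # to the right
        z[i + 1] = z[i] + dip[i]
    for i in range(center_trace, center_trace - half_width, -1):  # to the left
        z[i - 1] = z[i] - dip[i - 1]
    return [float(z[i]) for i in range(center_trace - half_width,
                                       center_trace + half_width + 1)]

# A constant dip of 0.5 samples/trace gives a straight, dipping local horizon:
print(steer_horizon(np.full(10, 0.5), 5, 100, 2))  # [99.0, 99.5, 100.0, 100.5, 101.0]
```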
Software: OpendTect + Dip-Steering (+ SSIS)

Workflow:
1. Create a Detailed Steering Cube (Processing - Steering - Create, e.g. FFT 3x3x3).
2. Create a Background Steering Cube (Processing - Steering - Filter, e.g. 4x4x4).
3. Open the Attribute Set Window and add new attributes from the Steering Cube (Dip,
Curvature), or that use Steering - Full option where applicable. Examples: Similarity, Volume
Statistics (median filter).
4. Apply the attributes to the seismic data in batch: Processing - Create seismic output, or on-the-
fly: right-click on the element in the tree (e.g. part of a time-slice).
Tips:
1. To QC the Steering Cubes: create a Crossline dip attribute (Dip attribute) from the Detailed
Steering Cube and from the Background Steering Cube and display both on an inline.
See also chapter 3.2 - dip-steered median filter
Prev Home Next
Evaluate attributes Up Spectral Decomposition
OpendTect Workflows Documentation version 4.0
Prev
Chapter 2. Attributes
Next
2.3. Spectral Decomposition
Purpose: Analyze seismic data at different vertical scales. Typically applied to horizons to visualize
thickness variations at sub-seismic resolution.
Theory: When the thickness of an acoustic layer falls in the tuning range the seismic amplitude
increases due to interference of top and bottom reflections. If you decompose the seismic response
into its frequency components (or into wavelet scales) the largest amplitude corresponds to the layer's
tuning range, hence, to its thickness. In this way, it is possible to interpret thickness variations at sub-
seismic resolution by decomposing an interval around a mapped horizon.
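The decomposition itself can be pictured with a short-window FFT on a single trace (OpendTect also offers a wavelet-based flavour); all numbers below are made-up illustrations, not defaults of the attribute:

```python
import numpy as np

dt = 0.004                                          # 4 ms sample rate
t = np.arange(64) * dt                              # 256 ms window at the horizon
trace = np.sin(2 * np.pi * 30 * t) * np.hanning(64)  # stand-in tuned response

freqs = np.fft.rfftfreq(64, d=dt)
spectrum = np.abs(np.fft.rfft(trace))
peak = freqs[spectrum.argmax()]                     # dominant (iso-)frequency

# Quarter-wavelength tuning relates peak frequency to thickness: b ~ v / (4 f)
v = 2000.0                                          # assumed interval velocity (m/s)
print(peak, v / (4 * peak))
```

Mapping the amplitude of one such iso-frequency component along a horizon is what reveals the thickness variations described above.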
Software: OpendTect

Workflow:
1. Add an element (horizon) to the tree.
2. Open the Attribute Set window and add Spectral Decomposition to the list.
3. Press the Evaluate Attribute icon to calculate the individual components (frequencies or
wavelet scales).
4. Use the slider to movie-style inspect the components. Pressing Accept updates the Attribute
definition to calculate this component only.
5. To use individual components for further work (e.g. to create a cube for frequency 10Hz) you
must define each component as a separate attribute in the list.
Tips:
1. Spectral Decomposition applied to a horizon takes time. Use the "Store slices on Accept"
option to save all components as Surface data belonging to that Horizon for later use. To scroll
through Surface data use the Pg Up and Pg down keys.
2. Color blended display:
RGBA blending is used to create a normalized color-blended display that often shows features with greater clarity and enhances detail in map view. The user can blend the iso-frequency responses from Spectral Decomposition. For more information, go to RGB display.
Chapter 3. Filters
Table of Contents
3.1 Dip-steered diffusion
3.2 Dip-steered median filter
3.3 Fault enhancement filter
3.4 Ridge enhancement filter
3.5 Spectral Blueing



Filters in OpendTect are implemented as Attributes that need to be defined in the attribute set
window. Filters with a user interface are grouped under Filters. This group includes convolution,
frequency filters, gap decon, and velocity fan filters. Filters without a user interface are filters that are
constructed from (chains of) attributes. For example, using Reference Shift, Volume Statistics, and
Mathematics, you can extract and manipulate data to construct your own filters. This group of filters
contains, among others: dip-steered median filter, dip-steered diffusion filter, fault enhancement
filter, and ridge-enhancement filter. A number of these filters are delivered with the system as default
attribute sets.
3.1. Dip-steered diffusion
Purpose: Sharpen edges (faults) by filtering along the structural dip.
Theory: In diffusion filtering, you evaluate the quality of the seismic data in a dip-steered circle. The
center amplitude is replaced by the amplitude where the quality is deemed best. In the vicinity of a
fault, the effect is that good quality seismic data is moved from either side of the fault in the direction
of the fault plane, hence the fault becomes sharper.
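On a 2D section, and ignoring the dip-steering itself, the replacement rule of the theory above can be sketched as follows (illustrative; the real filter evaluates quality in a dip-steered circle):

```python
import numpy as np

def diffusion_step(data, quality, radius=1):
    """Replace each centre amplitude by the amplitude at the neighbouring
    position (within +/- radius traces) where the quality is highest."""
    out = data.copy()
    nz, nx = data.shape
    for i in range(radius, nx - radius):
        window = quality[:, i - radius:i + radius + 1]
        best = i - radius + np.argmax(window, axis=1)  # best trace per sample
        out[:, i] = data[np.arange(nz), best]
    return out
```

Near a fault the best-quality neighbour lies on one side of the fault plane, so good data is pulled towards the fault and the break sharpens.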
Software: OpendTect + Dip-Steering

Workflow:
1. Open the attribute set window and open the default set called: Dip-steered diffusion filter.
Select seismic and steering cube.
2. Use Evaluate attribute and evaluate the "Stepout" (radius) of the Position attribute on a
(small) test element, e.g. part of an inline.
3. Apply the Diffusion filter to the seismic data in batch: Processing - Create seismic output, or
on-the-fly: right-click on the element in the tree.
Tips:
1. Dip-steered filtering works best when you use a heavily smoothed steering cube (background
steering). Smoothing of the steering cube is done in Processing - Steering - Filter. Use e.g. a
median filter 4x4x4 to create the background steering cube.
2. Calculate Similarity (or Coherency) on dip-steered diffusion filtered seismic data if you need
to see sharp faults.
3. Dip-steered diffusion filtering produces unwanted circular patterns in amplitude horizon
slices. To reduce this effect, combine dip-steered diffusion filtering with dip-steered median
filtering. This is explained in the Fault enhancement filtering work flow.
3.2. Dip-steered median filter
Purpose: Remove random noise and enhance laterally continuous seismic events by filtering along
the structural dip.
Theory: In median filtering, the center amplitude in a dip-steered circle is replaced by the median
amplitude within the extraction circle. The effect is an edge-preserving smoothing of the seismic data.
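Dropping the dip-steering for clarity, the median replacement on one time-slice can be sketched as below; the real filter collects the values along the local dip instead of on a flat slice:

```python
import numpy as np

def median_step(slice2d, radius=1):
    """Replace each centre value by the median of its (2r+1) x (2r+1)
    neighbourhood; edge samples are left untouched."""
    out = slice2d.copy()
    ny, nx = slice2d.shape
    for j in range(radius, ny - radius):
        for i in range(radius, nx - radius):
            out[j, i] = np.median(slice2d[j - radius:j + radius + 1,
                                          i - radius:i + radius + 1])
    return out

# A single noise spike is removed while the background is preserved:
spike = np.zeros((5, 5))
spike[2, 2] = 100.0
print(median_step(spike)[2, 2])  # 0.0
```

This is why the result is edge-preserving: a lone outlier never wins the median, while coherent events on either side of it do.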
Software: OpendTect + Dip-Steering

Workflow:
1. Open the attribute set window and open the default set called: Dip-steered median filter.
Select seismic and steering cube.
2. Use Evaluate attribute and evaluate the "Stepout" (radius) of the Volume Statistics attribute
on a (small) test element, e.g. part of an inline.
3. Apply the Median filter to the seismic data in batch: Processing - Create seismic output, or on-
the-fly: right-click on the element in the tree.
Tips:
1. Dip-steered filtering works best when you use a heavily smoothed steering cube (background
steering). Smoothing of the steering cube is done in Processing - Steering - Filter. Use e.g. a
median filter 4x4x4 to create the background steering cube.
2. Dip-steered median filtering does not create faults as sharp as a diffusion filter but the
smoothing in unfaulted areas is better. To get the best of both worlds you can combine dip-
steered median filtering with dip-steered diffusion filtering. This is explained in the Fault
enhancement filtering work flow.
For more info, see this Tutorial video:

Dip steered median filter (flash video)
3.3. Fault enhancement filter
Purpose: Sharpen edges (faults) by median or diffusion filtering along the structural dip.
Theory: In fault enhancement filtering, you evaluate the quality of the seismic data in a dip-steered circle. If the quality is good (similarity is high), you apply a dip-steered median filter. If the quality is low (near faults), you apply a dip-steered diffusion filter. The effect is smoothed seismic with sharp fault breaks.
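The combination logic boils down to a per-sample if-then-else on similarity, sketched below (the cut-off 0.6 is an arbitrary example of the c0 value tested in the Mathematics attribute):

```python
import numpy as np

def fault_enhance(similarity, median_out, diffusion_out, c0=0.6):
    """Per-sample switch: keep the smoothing (median) result where the data
    quality is good, and the edge-sharpening (diffusion) result near faults.
    c0 is the cut-off tested in the Mathematics attribute (range 0-1)."""
    return np.where(similarity > c0, median_out, diffusion_out)
```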
Software: OpendTect + Dip-Steering

Workflow:
1. Open the attribute set window and open the default set called: Fault enhancement filter. Select
seismic and steering cube.
2. Use Evaluate attribute and evaluate the "Stepout" (radius) of the median filter defined in the
Volume statistics attribute on a (small) test element, e.g. part of an inline.
3. Use Evaluate attribute and evaluate the "Stepout" (radius) of the diffusion filter defined in the
Position attribute on the (small) test element.
4. Test different cut-off values c0 (range 0 - 1) in the Mathematics attribute by applying the
Mathematics attribute on the (small) test element.
5. Apply the Fault enhancement filter to the seismic data in batch: Processing - Create seismic
output, or on-the-fly: right-click on the element in the tree.
Tips:
1. Dip-steered filtering works best when you use a heavily smoothed steering cube (background
steering). Smoothing of the steering cube is done in Processing - Steering - Filter. Use e.g. a
median filter 4x4x4 to create the background steering cube.
2. Calculate Similarity (or Coherency) on dip-steered Fault enhancement filtered seismic data if
you need to see sharp faults.
3.4. Ridge enhancement filter
Purpose: Sharpen ridges in a Similarity Cube.
Theory: The filter compares, in the time-slice domain, three neighboring similarity values in six different directions and outputs the largest ridge value. The ridge in each direction is: (sum of the values on either side) / 2 - center value. At most evaluation points there are no ridges, so the values tend to be small, but when you cross a fault there is a large ridge perpendicular to the fault direction. The filter outputs the largest value, i.e. the ridge corresponding to that perpendicular direction.
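The ridge computation above can be sketched on a similarity time-slice as follows. Only four grid directions are shown here for illustration; the six offset pairs of the default attribute set are not reproduced:

```python
import numpy as np

DIRS = [(0, 1), (1, 0), (1, 1), (1, -1)]  # E-W, N-S and the two diagonals

def ridge_enhance(sim):
    """For each direction: (sum of the two flanking values) / 2 - centre;
    output the largest ridge over all directions."""
    ny, nx = sim.shape
    out = np.zeros_like(sim)
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            out[j, i] = max((sim[j - dj, i - di] + sim[j + dj, i + di]) / 2
                            - sim[j, i] for dj, di in DIRS)
    return out
```

A low-similarity lineament then lights up only on the fault itself, where both flanking values are high and the centre is low.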
Software: OpendTect + Dip-Steering

Workflow:
1. Open the attribute set window and open the default set called: Ridge enhancement filter.
Select seismic and steering cube.
2. Apply the Ridge enhancement filter to the seismic data in batch: Processing - Create seismic
output, or on-the-fly: right-click on the element in the tree (e.g. part of a time-slice).
Tips:
1. In the default attribute set, 9 similarity values are calculated at each output point. The process can be sped up (almost 9 times) by calculating a Similarity Cube first and extracting the similarity values from this pre-calculated cube. To do this you must change the attribute set: use the "Reference Shift" attribute instead of Similarity.
2. Ridge-enhancement can be used to enhance any type of ridge cube. Note that ridges are either
positive or negative and that you need to modify the ridge enhancement attribute
(Mathematics max or min) accordingly.
3. In the default attribute set Dip-steering is used to create better (dip-steered) similarity. This is
not a pre-requisite for ridge-enhancement filtering.
3.5. Spectral Blueing
Purpose: Increase the vertical resolution of the seismic data.
Theory: A matching filter is designed that shapes the seismic amplitude spectrum to resemble the
amplitude spectrum of measured logs. In the general case, amplitudes at low frequencies are reduced
while amplitudes at higher frequencies are enhanced. Detailed information that is inherent in the
seismic signal becomes visible, while noise is not increased to unacceptable levels.
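The design idea can be sketched as a spectral ratio: the operator's amplitude spectrum is the smoothed, stabilised ratio of the target well-log spectrum to the seismic spectrum. The spectra and the smoothing length below are made-up illustrations, not what the Design controls actually compute:

```python
import numpy as np

freqs = np.linspace(1.0, 125.0, 125)
seis_spec = np.exp(-((freqs - 40.0) / 25.0) ** 2)   # band-limited seismic
log_spec = freqs ** 0.4                             # "blue" well-log trend

raw = log_spec / (seis_spec + 1e-3)                 # stabilised spectral ratio
smooth = np.convolve(raw, np.ones(9) / 9.0, mode="same")  # smoothing operator

# The operator boosts the high end of the band relative to the dominant
# frequency, which is what sharpens the data vertically.
```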
Software: OpendTect + Seismic Spectral Blueing

Workflow:
1. Open the attribute set window, add a new Attribute: Spectral Blueing, select input seismic and
specify a wavelet name.
2. Press Analyze and Create ... to open the Spectral Blueing application.
3. Select Input Seismic and Well data, select well logs (right-click on the well) and time
windows for seismic and wells. To load, press Load seismic and Reload wells, respectively.
4. Select Design controls and play with the parameters (increase the smoothing operator, toggle range and try reducing the max. frequency, toggle Auto Calc. and change low-cut and high-cut). Notice how the curves change interactively. Choose parameters that yield a smooth spectrum over the seismic frequency band.
5. Apply the Spectral Blueing attribute to the seismic data in batch: Processing - Create seismic
output, or on-the-fly: right-click on the element in the tree (e.g. part of an inline).
Tips:
1. Sometimes low- and high-frequency mono-frequency noise trails are observed in the spectrally blued data. The frequencies of such noise trails correspond to the cut-off frequencies specified in the design window. To reduce these effects, try changing the cut-offs. Check the spectrum of the operator (the blue curve): it should be flat over the seismic bandwidth, without side lobes.
2. Use the Chart Controller and Zoom options (View menu) to see all graphs simultaneously.
For more info, see this Tutorial video:

Spectral Blueing and Colored Inversion (flash video)
Chapter 4. Inversion and Rock Property Prediction
Table of Contents
4.1 Colored Inversion
4.2 MPSI Stochastic Inversion
4.3 Neural Network Rock Property Prediction



OpendTect offers various plugins for quantitative seismic interpretation. Seismic Colored Inversion
(by Ark cls) is a fast way to convert seismic data to band-limited acoustic impedance. Full-bandwidth
Deterministic and Stochastic inversion is offered in plugins with the same names by Earthworks and
Ark cls. Using Neural Networks, it is possible to convert seismic information (e.g. acoustic and/or
elastic impedance) to rock properties (e.g. Porosity, Vshale etc). The supervised network is trained
along well tracks to find the (non-linear) relationship between seismic and well logs.
4.1. Colored Inversion
Purpose: A fast-track approach to invert seismic data to band-limited (relative) acoustic impedance.
Theory: A single convolution inversion operator is derived that optimally inverts the data and
honours available well data in a global sense. In this way, the process is intrinsically stable and
broadly consistent with known AI behaviour in the area. Construction of the operator is a simple
process and implementation can be readily performed within the processing module included in SCI.
An explicit wavelet is not required (other than testing for a residual constant phase rotation as the last step), which removes an inherently weak link that more sophisticated processes rely on.
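Besides shaping the spectrum, the colored inversion operator effectively applies a constant -90 degree phase rotation, which turns zero-phase reflectivity spikes into the step-like response of band-limited impedance. A minimal sketch of such a rotation (illustrative, not the SCI operator design):

```python
import numpy as np

def rotate_phase(trace, degrees=-90.0):
    """Constant phase rotation applied in the frequency domain."""
    spec = np.fft.rfft(trace)
    spec *= np.exp(1j * np.deg2rad(degrees))
    return np.fft.irfft(spec, n=trace.size)

# A -90 degree rotation turns a cosine into a sine:
t = np.arange(64) / 64.0
print(np.allclose(rotate_phase(np.cos(2 * np.pi * 4 * t)),
                  np.sin(2 * np.pi * 4 * t)))  # True
```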
Software: OpendTect + Seismic Colored Inversion

Workflow:
1. Open the attribute set window, add a new Attribute: Colored Inversion, select input seismic
and specify a wavelet name.
2. Press Analyze and Create ... to open the Colored Inversion application.
3. Select Input Seismic and Well data, select well logs (right-click on the well) and time
windows for seismic and wells. To load, press Load seismic and Reload wells, respectively.
4. Select Design controls and play with the parameters (increase the smoothing operator, toggle
range and try reducing the max. frequency, toggle Auto Calc. and change low-cut and high-
cut). Notice how the curves change interactively. Choose parameters that yield a smooth
spectrum over the seismic frequency band.
5. Apply the Colored Inversion attribute to the seismic data in batch: Processing - Create seismic
output, or on-the-fly: right-click on the element in the tree (e.g. part of an inline).
Tips:
1. Use the Chart Controller and Zoom options (View menu) to see all graphs simultaneously.
4.2. MPSI Stochastic Inversion
Purpose: Invert seismic data to full-bandwidth acoustic impedance either in a deterministic mode
(the optimal AI volume is produced) or in a stochastic mode (N realizations are computed), so that
uncertainties in the inversion process can be analyzed and probability output volumes can be
generated.
Theory: Stochastic inversion is complementary to deterministic inversion. It is used to understand the uncertainty in seismic inversion and, via the stochastic inversion utilities, allows the user to explore the impact of this geophysical uncertainty on lithology, porosity or reservoir volumes over the inverted 3D seismic volume. For thin intervals, stochastic inversion is particularly appropriate for understanding uncertainty, reservoir volume and connectivity. It also allows the user to improve the estimate obtained from a deterministic inversion, as the mean of many realizations (calculated using the utilities) behaves like the global convergence solution of computationally expensive algorithms.
Software: OpendTect + MPSI (MPSI consists of five modules that are implemented as attributes in OpendTect's attribute engine: Model Building, Error Grid, and Deterministic Inversion are released as a bundle to perform deterministic inversion; Stochastic Inversion and Utilities are add-ons needed for the stochastic inversion).

Workflow:
1. Open the attribute set window and select "EW3DModelBuilder" attribute. Define the model
by selecting the wells, zones (horizons) and the smoothing parameter.
2. Select "EW2DErrorGrid" attribute and specify a variogram that captures the uncertainty in the
model as a function of distance from the wells.
3. Select "EWDeterministicInversion" and specify the 3D model, the error grid, the seismic data,
the wavelet and some parameters to perform the deterministic inversion. Press "Perform
Preprocessing" to compute the required pre-processing step. Test in the usual way (see tips
below) and if satisfied invert the entire volume by applying the attribute in batch mode under
"Processing - Create Seismic output".

4. Select "EWStochasticInversion" and specify the 3D model, the error grid, the deterministic
inversion result, the seismic data and some parameters to compute N stochastic realizations.
Typically the number N is 100. Press "Perform pre-processing" or create a parameter file that
can be executed in batch mode to perform the required pre-processing. Stochastic realizations
can be inspected on the current element (say an inline) with the "Evaluate attributes" option.
Press this icon and select the realizations that you wish to inspect in a movie-style manner
using the slider in the Evaluate attributes window.
5. Select "EWUtilities" to perform post-processing on the N stochastic realizations. You can choose to create a mean volume (which will be more or less similar to the volume created in the deterministic mode), a standard deviation volume (which gives an estimate of the uncertainties in the inversion process), probability cubes and trends, and joint probability cubes and trends. Probability cubes return the probability of finding a certain range of acoustic impedance values, e.g. corresponding to the sand distribution.
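The post-processing in step 5 can be sketched with plain array statistics over the N realizations (illustrative numpy with made-up numbers, not the MPSI code):

```python
import numpy as np

rng = np.random.default_rng(1)
# N = 100 acoustic-impedance realizations of a small cube:
realizations = rng.normal(6000.0, 400.0, size=(100, 8, 8, 16))

mean_cube = realizations.mean(axis=0)   # ~ the deterministic result
std_cube = realizations.std(axis=0)     # inversion uncertainty per sample

# Probability cube: per sample, P(AI inside an assumed "sand" range):
sand = (realizations >= 5500.0) & (realizations <= 6000.0)
prob_cube = sand.mean(axis=0)
```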
Tips:
1. The implementation as attributes allows on-the-fly calculation and inspection of each step in
the workflow. For example use the "Redisplay" icon to inspect the 3D model, the error grid,
the deterministic inversion result, or any of the post-processing results from the stochastic
realizations on the current element (say an inline) in the scene.
2. The stochastic realizations are the only output that cannot be inspected with the "Redisplay" icon. As explained above, use the "Evaluate attributes" icon to movie-style inspect the different realizations.
3. The version of MPSI in OpendTect v3.2 does not support wavelet estimation functionality.
Use "Manage wavelets" to either import a wavelet, or to create a synthetic (Ricker, Sinc) one.
For more info, see this Tutorial video:

MPSI stochastic inversion (flash video)
4.3. Neural Network Rock Property Prediction
Purpose: Predict rock properties from seismic information and well data.
Theory: A supervised neural network is trained on examples (seismic + logs) extracted along a well
track. Typically, input is (absolute) acoustic impedance and/or elastic impedance. Typical outputs are
porosity logs, lithology logs (Vshale, gamma ray, lithology codes), and saturation logs.
Software: OpendTect + Neural Networks

Workflow:
1. Open the attribute set window and specify the seismic attributes that you wish to extract along
the well track (see Tips below).
2. Open the Neural Networks window and Select Property Prediction. In the new window
specify the input attributes, the target well log, the wells, the well interval, the Location
selection (see Tips below), the log type (values or binaries) and the percentage to set aside for
testing the network during training.
3. Crossplot the target values against each of the input attributes. If need be, remove or edit points.
4. Balance the data. This step ensures that each bin in the training set has the same number of
examples, which improves training.
5. Train the neural network. Stop when the test set has reached its minimum error (beyond that point overfitting occurs: the network learns to recognize individual examples from the training set but loses general prediction capabilities). Store the trained neural network.
6. Apply the trained neural network to the seismic data in batch: Processing - Create seismic
output, or on-the-fly: right-click on the element in the tree (e.g. part of an inline).
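The balancing of step 4 can be pictured as resampling the extracted examples so that every target bin contributes equally. The bin count and samples-per-bin below are arbitrary illustrations; OpendTect's actual balancing parameters may differ:

```python
import numpy as np

def balance(targets, n_bins=10, per_bin=50, seed=0):
    """Resample example indices so each target-value bin is equally
    represented in the training set (sampling with replacement)."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(targets.min(), targets.max(), n_bins + 1)
    chosen = []
    for k, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        last = k == n_bins - 1
        mask = (targets >= lo) & ((targets <= hi) if last else (targets < hi))
        idx = np.where(mask)[0]
        if idx.size:
            chosen.append(rng.choice(idx, size=per_bin, replace=True))
    return np.concatenate(chosen)
```

Without this step the network would mostly see examples from the most common target range and predict poorly for the rare (often most interesting) values.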
Tips:
1. To compensate for mis-alignment problems, you can extract a small time-window from the input (e.g. the acoustic impedance volume). To do this, use the Reference Shift attribute and construct an attribute set that extracts at each point e.g. the AI values at -8, -4, 0, 4 and 8 ms.
2. Use the "All corners" option for the Location selection parameter to compensate for
uncertainties in the well track and to increase the statistics. Each example is then extracted
along 4 well tracks that run along the corner grid points of the actual well track.

Chapter 5. Object detection
Table of Contents
5.1 Common Contour Binning
5.2 Chimney Cube
5.3 Fault Cube
5.4 Fingerprint
5.5 UVQ waveform segmentation



Seismic attributes are used to visualize data such that relevant information becomes easier to
interpret. However, calculating many attributes leads to a data explosion and confusion: which view is
best, and how can interesting attributes be combined into one output representing the optimal view? We
have introduced the term meta-attribute for attributes that are combined in an intelligent way. In
OpendTect you can create meta-attributes using math and logic (Mathematics attribute in
OpendTect), neural networks (commercial plugin), and the fingerprint attribute (OpendTect).
5.1. Common Contour Binning
Purpose: To detect subtle hydrocarbon-related seismic anomalies and to pin-point Gas-Water, Gas-
Oil and Oil-Water contacts.
Theory: In a structure filled with hydrocarbons, seismic traces that lie on the same (depth) contour
line will have similar hydrocarbon effects because these positions sample the same column lengths.
Stacking of traces along contour lines will therefore stack up hydrocarbon effects while stratigraphic
effects and noise are canceled. CCB (Common Contour Binning) produces two outputs: a CCB
volume, which consists of traces stacked along contour lines and re-distributed along the same
contour lines, and a CCB stack, a 2D section with the stacked traces flattened along the mapped
reference horizon. The ideas behind CCB originate with Jan Gabe van der Weide and Andries Wever
of Wintershall, who are the IP owners.
Software: OpendTect + Common Contour Binning (CCB) plugin

Workflow:
1. Create a new polygon over the prospect by right clicking on Pickset. Close the Polygon and
Save it (right-click). In general: restrict the polygon to one fault block per analysis.
2. Launch the CCB plugin by clicking on the CCB icon or selecting it from the Processing menu.
3. Specify the horizon, the seismic and the volume sub-selection (the polygon). The horizon
inside the polygon area determines the Contour Z division (you probably don't want to change
this). The step determines the contour bin-size (all traces within this contour interval are
stacked). Optionally change the Z range (this is the vertical slice around the horizon that will
be stacked). Press Go.
4. The traces inside the polygon are retrieved and sorted. A histogram is shown. For QC
purposes you can display the traces that will be stacked at each contour bin (single Z option). To
produce the CCB stack (Normal or RMS), press Go. To produce the CCB volume, toggle write
output on before pressing Go.
5. The CCB volume can now be used for further analysis. For example: display the amplitude at
the horizon (add attribute and select CCB volume from stored data), or create new attributes
from the CCB volume (energy, max, min amplitude) and display these on the horizon.
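Conceptually, the CCB stack is a bin-and-average over contour depth. The Python sketch below illustrates the idea only; it is not the plugin's implementation, and the mean stack and bin definition are assumptions for illustration:

```python
import numpy as np

def ccb_stack(traces, horizon_z, z_step):
    """Stack traces whose horizon pick falls in the same contour bin.
    traces: (n_traces, n_samples), already flattened on the reference
    horizon; horizon_z: horizon depth/time per trace; z_step: contour
    bin size. Returns bin centres and one stacked trace per bin."""
    bins = np.floor(horizon_z / z_step).astype(int)
    centres, stacks = [], []
    for b in np.unique(bins):
        sel = traces[bins == b]
        centres.append((b + 0.5) * z_step)
        stacks.append(sel.mean(axis=0))  # "Normal" stack; RMS is an alternative
    return np.array(centres), np.array(stacks)
```

Stratigraphic effects and random noise average out in the stack, while the hydrocarbon effect, common to all traces on the same contour, is preserved.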
Tips:
1. To determine the spill point you can add a Timeslice element (preferably depth) and move this
down over the displayed horizon map (with CCB amplitude display) until you see the contour
line that determines the spill point. A spill-point coinciding with a step-change in amplitudes
can be explained by a contact and supports the hydrocarbon fill hypothesis.
2. To avoid stacking traces of bad quality, and to ensure that you are not stacking over multiple
fault blocks (which may have different fluid-fills and/or contacts), display the similarity
attribute on the horizon. Use this display to guide the polygon picking.

For more info, see this Tutorial video:

Common Contour Binning (flash video)
5.2. Chimney Cube
Purpose: Create a Chimney "probability" Cube for fluid migration path interpretation.
Theory: When fluids (oil, gas, brine) move up through strata, rocks are cracked, chemically altered,
and connate gas stays behind causing changes in the acoustic properties of the rocks. The actual path
often remains visible in post-stack seismic data as subtle vertical noise trails. A Chimney Cube is a
new seismic volume that highlights such vertical disturbances so that these can be interpreted as fluid
migration paths. This requires studying the spatial relationships between the paths (chimneys) and
other elements of the petroleum system: faults, traps, HC anomalies, (paleo) mud-volcanos,
pockmarks, etc. The Chimney Cube is created by training a neural network on two sets of attributes
extracted at example locations picked by a human interpreter: one set representing the chimney class
(vertically disturbed noise trails) and the other representing the non-chimneys class (i.e. normal
seismic response).
Software: OpendTect + Dip-Steering + Neural Networks

Workflow:
1. Scan your data set for obvious chimneys. For example calculate Similarity attributes at
different time-slices, look for circular features in these slices and position seismic lines
through these circles. Chimneys are often associated with faults, high-amplitude anomalies,
and seepage-related features such as pockmarks and mud-volcanos (look at the seabed
reflection for mounds).
2. Create two New Picksets: one for Chimneys and one for Non-chimneys (right-mouse click in
the tree).
3. Pick examples for chimneys and non-chimneys (Select the Pickset in the tree and then left-
click in the scene on the element at the position you want to add; Control-click to remove a
pick). Try to pick a representative set for both chimneys and non-chimneys. This means: pick
different chimneys, pick as many points as possible (several hundred picks for each is typical);
for non-chimneys pick both low- and high-energy zones, and also pick (non-leaking) faults and
other noisy zones that are not vertically disturbed.
4. Open the attribute set window and open the default set called: NN Chimney Cube. Select
seismic and steering cube and Save the attribute set.
5. Open the Neural Networks window. Select Pattern recognition (Picksets). Select Supervised,
the input attributes, the Picksets (Chimneys and Non-chimneys) and the Percentage to set
aside for testing (e.g. 30%).
6. Train the neural network. Stop when the test set has reached minimum error (beyond that
point overfitting occurs: the network learns to recognize individual examples from the training
set but loses general prediction capabilities). Store the trained neural network.
7. Apply the trained neural network to the seismic data in batch: Processing - Create seismic
output, or on-the-fly: right-click on the element in the tree (e.g. part of an inline). You can
choose between 4 outputs: choose Chimneys to create the Chimney "probability" Cube.
Tips:
1. The default attribute set can be tuned to your data set by changing parameters, and adding or
removing attributes.
2. The colors in the neural network indicate the relative weight attached to each attribute
(ranging from white to red). White nodes indicate low weights meaning the attributes are not
contributing much and can be removed to speed up processing time.
3. Display the Mis-classified points (Pickset tree) to evaluate why these are mis-classified. If you
agree with the network you may want to remove some of these points from the input sets and
retrain the network. This will improve the classification results, but the process is dangerous as
you are steering the network towards a preconceived solution.
For more info, see this Tutorial video:

Chimney Cube (flash video)
5.3. Fault Cube
Purpose: Create a Fault "probability" Cube for fault interpretation. A Fault cube is typically used to
visualize the larger scale faults. Detailed faults are better visualized by Similarity on Fault
enhancement filtered seismic data.
Theory: Several attributes in OpendTect can be used as fault indicators e.g. Similarity, Curvature,
and Energy. A Fault Cube is a new seismic volume that highlights faults by combining the
information from several fault indicators into a fault "probability". This is done by training a neural
network on two sets of attributes extracted at example locations picked by the human interpreter: one
set representing the fault class and the other representing the non-fault class (i.e. normal seismic
response).
Software: OpendTect + Dip-Steering + Neural Networks

Workflow:
1. Scan your data set for obvious faults.
2. Create two New Picksets: one for Faults and one for Non-faults (right-mouse click in the tree).
3. Pick examples for faults and non-faults (Select the Pickset in the tree and then left-click in the
scene on the element at the position you want to add; Control-click to remove a pick). Try to
pick a representative set for both faults and non-faults. This means: pick different faults, pick
as many points as possible (several hundred picks for each is typical); for non-faults pick both
low- and high-energy zones, and also pick noisy zones that are not faulted.
4. Open the attribute set window and open the default set called: NN Fault Cube. Select seismic
and steering cube and Save the attribute set.
5. Open the Neural Networks window. Select Pattern recognition (Picksets). Select Supervised,
the input attributes, the Picksets (Faults and Non-faults) and the Percentage to set aside for
testing (e.g. 30%).
6. Train the neural network. Stop where the test set has reached minimum error (beyond that
point overfitting occurs: the network learns to recognize individual examples from the training
set but looses general prediction capabilities). Store the trained neural network.
7. Apply the trained neural network to the seismic data in batch: Processing - Create seismic
output, or on-the-fly: right-click on the element in the tree (e.g. part of an inline). You can
choose between 4 outputs (Faults_yes, Faults_no, Classification, Confidence): choose
Faults_yes to create the Fault "probability" Cube.
Tips:
1. The default attribute set can be tuned to your data set by changing parameters, and adding or
removing attributes.
2. The colors in the neural network indicate the relative weight attached to each attribute
(ranging from white via yellow to red). White nodes indicate low weights meaning the
attributes are not contributing much and can be removed to speed up processing time.
3. Display the Mis-classified points (Pickset tree) to evaluate why these are mis-classified. If you
agree with the network you may want to remove some of these points from the input sets and
retrain the network. This will improve the classification results, but the process is dangerous as
you are steering the network towards a preconceived solution.
5.4. Fingerprint
Purpose: Create a Fingerprint "probability" Cube, i.e. a cube that shows how similar each position is
to the position(s) where you created the fingerprint.
Theory: The fingerprint attribute has the same objective as the neural network object detection
method (e.g. Chimney Cube, Fault Cube): To detect similar seismic responses as the target response
(e.g. HC bearing). The advantage of the fingerprint is that you only need to give examples of the
object class itself (one point is sufficient). You don't have to pick counter examples (non-objects) as
is the case in the neural network workflow. A fingerprint is created from selected attributes at the
given input location(s). The output is a "probability" cube with values ranging from 0 (completely
dissimilar) to 1 (identical to the fingerprint response).
Software: OpendTect

Workflow:
1. Create a New Pickset, e.g. to capture the response at a hydrocarbon anomaly.
2. Pick one or more examples of the object under study.
3. Open the attribute set window and create a new attribute set with attributes on which your
fingerprint should be based. Use Evaluate attributes to select attributes that show the object
most clearly. To create a fingerprint for hydrocarbons investigate: energy, frequencies, AVO
attributes etc.
4. Add the Fingerprint attribute, select the Pickset file, add the attributes that were defined above
and Calculate the parameters (the attributes at the picked locations are extracted and averaged
to calculate the fingerprint).
5. Apply the Fingerprint attribute to the seismic data in batch: Processing - Create seismic
output, or on-the-fly: right-click on the element in the tree (e.g. part of an inline).
Tips:
1. The Fingerprint attribute assigns equal weights to each input attribute (this is where the
fingerprint falls short of the (non-linear) neural network seismic object detection technique).
Therefore, try to limit the number of input attributes and do not add attributes that carry
virtually the same information.
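The idea behind the fingerprint can be sketched as follows: average the attribute vectors at the picks into one fingerprint, then score every position by its distance to that vector. This Python sketch is illustrative only; the z-scoring and the distance-to-match rescaling are assumptions, not OpendTect's exact formula:

```python
import numpy as np

def make_fingerprint(attr_at_picks):
    """Average the attribute vectors extracted at the picked locations
    (rows = picks, columns = attributes) into a single fingerprint."""
    return attr_at_picks.mean(axis=0)

def fingerprint_match(attr_volume, fingerprint):
    """Score every position against the fingerprint: 1 = identical,
    0 = most dissimilar position. attr_volume: (n_positions, n_attrs).
    Attributes are z-scored first so each carries equal weight."""
    mu = attr_volume.mean(axis=0)
    sd = attr_volume.std(axis=0) + 1e-12
    av = (attr_volume - mu) / sd
    fp = (fingerprint - mu) / sd
    dist = np.linalg.norm(av - fp, axis=1)
    return 1.0 - dist / dist.max()  # rescale distances into [0, 1]
```

The z-scoring step also shows why near-duplicate attributes are harmful: they effectively double their weight in the distance.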
5.5. UVQ waveform segmentation
Purpose: Visualize patterns pertaining to a seismic window that is cut out along a horizon (horizon
slice).
Theory: In UVQ waveform segmentation, an unsupervised vector quantizer type of neural network
clusters seismic trace segments (= waveforms) into a user-specified number of clusters (typically
between 4 and 10 clusters). Two output grids are generated: the segment grid reveals patterns
pertaining to the studied interval and the match grid shows the confidence in the clustering result
ranging from 0 (no confidence) to 1 (waveform is identical to the winning cluster center). A third
useful output is a display of the actual cluster centers. As of v3.2 UVQ waveform segmentation can
be done in two ways: a fast track approach called "Quick UVQ" and in the Conventional way. The
conventional way is a two-step approach: Step 1--the user-defined number of cluster centers are
found by training the network on a representative sub-set of the data (typically 1000 trace segments
extracted at random locations along the horizon). Step 2--the trained network is applied to the
horizon. Each trace segment is compared to the cluster centers to generate two outputs: segment
(index of the winning cluster) and match (how close is the waveform to the winning cluster center).
Software: OpendTect + Neural Networks

Workflow - Quick UVQ:
1. Load a horizon and select "Quick UVQ" from the horizon menu (right-click).
2. Specify the seismic data set, the number of classes and the time-gate and press OK.
3. The software selects waveforms at random positions and starts training the UVQ network. The
Average match (%) should reach approx. 90. If it does not reach 90 reduce the number of
clusters and/or the time-window. Press OK to continue.
4. The trained UVQ network is automatically applied to the horizon. Class (=segment) and
match grids are computed and added as "attributes" to the horizon.
5. A new window with neural network info pops up. Press Display to display the class centers.
Press Dismiss to close the window.
6. Class and Match are not automatically saved! To store these grids as "Surface data" with the
horizon use the Save option in the horizon menu (right click).
Workflow - Conventional:
1. Create a New Pickset and fill this with 1000 random locations along the mapped horizon.
2. Open the default attribute set called "Unsupervised Segmentation 2D".
3. Open the Neural Networks window and select Pattern Recognition. In the new window
specify Unsupervised, the input attributes, the Pickset generated above and the Number of
classes. Note that the window (the input attributes) should be chosen such that it captures the
seismic response of the geologic interval to study. To visualize patterns pertaining to a
reservoir interval of approx. 30 ms TWT thickness on a mapped top reservoir horizon, select a
window length of approx. -12 to 42 ms. This captures the interval plus the bulk of
convolutional side effects on zero-phase data.
4. Train the UVQ network. The Average match (%) should reach approx. 90. If it does not reach
90 reduce the number of clusters and/or the time-window.
5. Store the Neural Network and Display the cluster centers by pressing Info ... followed by
Display ...
6. Apply the Neural Network Segment (or Match) to the horizon in batch: Processing - Create
output using Horizon, or on-the-fly: right-click on the horizon in the tree.
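The two-step conventional workflow can be sketched with a plain k-means stand-in for the UVQ network. This is an illustration of the concept, not dGB's implementation; the farthest-point initialisation and the match normalisation are assumptions:

```python
import numpy as np

def train_uvq(waveforms, n_classes, n_iter=50):
    """Step 1: find cluster centres on a representative subset of trace
    segments (farthest-point initialisation, then k-means updates)."""
    centres = [waveforms[0]]
    for _ in range(n_classes - 1):
        d = np.min([np.linalg.norm(waveforms - c, axis=1) for c in centres],
                   axis=0)
        centres.append(waveforms[d.argmax()])
    centres = np.array(centres)
    for _ in range(n_iter):
        d = np.linalg.norm(waveforms[:, None, :] - centres[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_classes):
            if (labels == k).any():
                centres[k] = waveforms[labels == k].mean(axis=0)
    return centres

def apply_uvq(waveforms, centres):
    """Step 2: per trace segment, output the winning class (segment grid)
    and a 0-1 match value (1 = identical to the winning centre)."""
    d = np.linalg.norm(waveforms[:, None, :] - centres[None], axis=2)
    segment = d.argmin(axis=1)
    dmin = d.min(axis=1)
    match = 1.0 - dmin / (np.linalg.norm(waveforms, axis=1)
                          + np.linalg.norm(centres[segment], axis=1) + 1e-12)
    return segment, match
```

Step 1 corresponds to training on the 1000 random locations; step 2 to applying the stored network to every trace along the horizon.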
Tips:
1. If you apply the network on-the-fly you probably want to save the result as Surface data with
the horizon for later retrieval.
Chapter 6. Sequence Stratigraphy
Table of Contents
6.1 Chrono-stratigraphy
6.2 Wheeler Transformation
6.3 Stratal Slicing
6.4 Systems Tracts Interpretation



Seismic sequence stratigraphic interpretation has two primary goals: (1) unraveling the
depositional environment and (2) predicting potential stratigraphic traps. The OpendTect SSIS
plugin offers unique interpretation capabilities in this domain. In SSIS all possible horizons are
tracked (data-driven mode) or modeled (interpolated between mapped horizons, or shifted parallel to
upper or lower bounding surface). Each horizon is a chrono-stratigraphic event that can be used to
reconstruct the depositional history (chrono-strat slider), to flatten seismic data and attributes
(Wheeler transform), and to interpret system tracts (relating units to the relative sea level curve).
6.1. Chrono-stratigraphy
Purpose: Track or model all possible horizons within a given interval.
Theory: Map the major bounding surfaces (horizons) in a conventional way (the minimum is two
horizons: top and bottom). Specify per interval how SSIS should create chrono-stratigraphic horizons
(data-driven or model-driven). In data-driven mode the seismic events are followed. This mode
requires a Steering Cube (dip/azimuth information at every sample position, computed with the Dip-
Steering plugin). In model-driven mode, you can choose to interpolate between horizons (a.k.a.
stratal slicing, or proportional slicing), to shift parallel to the upper horizon (emulating onlap
situations), or to shift parallel to the lower horizon (emulating unconformable settings). All modes
work for 2D and 3D seismic data. In practice, the data-driven mode is used for 2D (or on 2D sections
from a 3D cube) while the model-driven mode is used for 3D seismic data.
Software: OpendTect + Dip-Steering + SSIS

Workflow:
1. Use Data Preparation to prepare the horizons: horizons cannot cross and they should be
continuous.
2. For the data-driven mode ensure that you have a Steering Cube (Processing - Steering). Filter
the steering cube if you observe that the tracked chrono-stratigraphic horizons do not follow
the seismic events correctly (Data Preparation - Filter Steering Cube).
3. Create a New Chrono-stratigraphy. Read the horizons and specify per interval and per
horizon whether the horizon is isochronous (parallel layering) or diachronous (on-/off-
lapping settings). For data-driven mode (both horizons are diachronous) select the steering
cube, the minimum spacing (when to stop) and the maximum spacing (when to insert a new
chrono-stratigraphic horizon).
4. Proceed to calculate the chrono-stratigraphy. When the batch processing is finished: Select the
chrono-stratigraphy.
5. Display the chrono-stratigraphy (right-click on the element in the tree).
6. To study the depositional history use the chrono-strat slider (right-click on chrono-
stratigraphy) and add / remove geologic time from the sequence.
7. Add a Wheeler scene and use the chrono-stratigraphy to flatten the seismic (or attribute).
Tips:
1. To highlight unconformities with 2D (data-driven) chrono-stratigraphy use the Fill between
lines option (Options menu) and play with colors.
2. To improve results, add more horizons (map unconformities).
For more info, see this Tutorial video:

Chrono-stratigraphy (flash video)
6.2. Wheeler Transformation
Purpose: To flatten the seismic data while honoring hiatuses caused by non-deposition and erosion.
Theory: The Wheeler transform is the seismic equivalent of the geologic Wheeler diagram (= chrono-
stratigraphic chart). In a Wheeler diagram, rock units are plotted in a 2D chart of geologic time (y-
axis) versus space (x-axis). The diagram shows the temporal-spatial relationship between rock units.
Gaps in the chart represent non-deposition or erosion. In a Wheeler transform, we flatten the seismic
data (or derived attributes) along flattened chrono-stratigraphic horizons. The differences with the
Wheeler diagram are: The vertical axis in the Wheeler transformed domain is relative geologic time
(as opposed to absolute geologic time) and a Wheeler transform can be performed in 2D and 3D,
whereas the Wheeler diagram is always 2D.
Software: OpendTect + Dip-Steering + SSIS

Workflow:
1. Select a pre-calculated chrono-stratigraphy.
2. Add a Wheeler scene and use the selected chrono-stratigraphy to flatten the seismic data (or
attribute).
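The flattening itself can be sketched as resampling each trace at its chrono-horizon times, leaving gaps where horizons collapse onto each other (hiatuses). A minimal Python illustration, assuming regular trace sampling and a simple nearest-sample lookup; this is not the SSIS implementation:

```python
import numpy as np

def wheeler_transform(traces, chrono_z, z0, dz):
    """Flatten a section along chrono-stratigraphic horizons.
    traces: (n_traces, n_samples); chrono_z: (n_horizons, n_traces) gives
    the depth/time of every chrono horizon per trace, ordered so that the
    row index plays the role of relative geologic time; z0, dz: sampling
    of the input traces. Where consecutive horizons coincide (hiatus:
    non-deposition or erosion) the output stays NaN, reproducing the
    gaps of a Wheeler diagram."""
    n_hor, n_traces = chrono_z.shape
    out = np.full((n_hor, n_traces), np.nan)
    for h in range(n_hor):
        for t in range(n_traces):
            z = chrono_z[h, t]
            if h > 0 and np.isclose(z, chrono_z[h - 1, t]):
                continue  # horizon collapses onto its neighbour: hiatus
            i = int(round((z - z0) / dz))
            if 0 <= i < traces.shape[1]:
                out[h, t] = traces[t, i]  # nearest-sample lookup
    return out  # rows = relative geologic time, columns = trace position
```

The NaN gaps are exactly the white areas you see in a Wheeler scene where nothing was deposited or deposits were eroded.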
Tips:
1. Instead of displaying the flattened seismic, display the chrono-stratigraphy itself in the
Wheeler scene. This display is useful for studying the depositional history, especially if you
also display the same line (+ chrono-stratigraphy) in the structural domain.
6.3. Stratal Slicing
Purpose: To flatten 3D seismic data (or attribute volumes) so that we can slice through the cubes to
pick up stratigraphic details.
Theory: Stratal slicing (or proportional slicing) is the interpolation option of the model-driven mode
for creating chrono-stratigraphic horizons. It is the mode in which the thickness interval between
mapped top and bottom horizon is distributed equally over the number of chrono-stratigraphic
horizons.
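The proportional distribution is a simple linear interpolation between the two mapped horizons; a minimal Python sketch of the idea (illustrative only):

```python
import numpy as np

def stratal_slices(top_z, bottom_z, n_slices):
    """Proportional (stratal) slicing: distribute n_slices chrono
    horizons evenly through the interval between the mapped top and
    bottom horizon grids (equal shapes). Returns an array of shape
    (n_slices, *top_z.shape)."""
    frac = np.linspace(0.0, 1.0, n_slices).reshape(-1, *([1] * top_z.ndim))
    return (1.0 - frac) * top_z + frac * bottom_z
```

Each intermediate slice keeps a constant fractional position in the interval, so thickness variations are distributed evenly over the chrono-stratigraphic horizons.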
Software: OpendTect + Dip-Steering + SSIS

Workflow:
1. Select a pre-calculated chrono-stratigraphy.
2. Create a Wheeler Cube. (This step is required for displaying data in a Volume viewer in the
Wheeler domain.)
3. Add a Wheeler scene and load the Wheeler cube in the Volume viewer. Use the time slicer to
inspect the data movie-style. Remember that a time-slice in the Wheeler domain corresponds
to a horizon slice in the Structural domain.
Tips:
1. To improve time-slicing in the Wheeler domain you need to improve the underlying chrono-
stratigraphy, which can be done by adding more mapped horizons.
6.4. Systems Tracts Interpretation
Purpose: To interpret genetically associated stratigraphic units that were deposited during specific
phases of relative sea-level cycles.
Theory: In systems tract interpretation, you look at stacking patterns and lapout patterns (onlap,
toplap, offlap) in both the Structural domain and the Wheeler transformed domain. Based on these
observations you decide whether you are dealing with a transgression (Transgressive Systems Tract),
a Normal Regression (either a High Stand Systems Tract or a Low Stand Systems Tract) or falling
stage (Falling Stage Systems Tract). Nomenclature according to Hunt and Tucker. (In SSIS you can
choose which model (nomenclature) to use.)
Software: OpendTect + Dip-Steering + SSIS

Workflow:
1. Select a pre-calculated chrono-stratigraphy.
2. Add a Wheeler scene and use the selected chrono-stratigraphy to flatten the seismic data (or
attribute).
3. Open the Interpretation window, choose a model and use the chrono-strat sliders to set the
systems tract boundaries. Assign the systems tract by right-clicking in the interpretation
column. Note the automatic reconstruction of the relative sea level curve.
Tips:
1. Use arrows to indicate lapout patterns to help you decide where to set systems tract boundaries.
For more info, see this Tutorial video:

SSIS interpretation (flash video)