June 1999
Trademark Notices
Landmark, OpenWorks, SeisWorks, ZAP!, PetroWorks, and StratWorks are registered trademarks
of Landmark Graphics Corporation.
Pointing Dispatcher, Log Edit, Fast Track, SynTool, Contouring Assistant, TDQ, RAVE, 3DVI,
SurfCube, SeisCube, VoxCube, Z-MAP Plus, ProMAX, ProMAX Prospector, ProMAX VSP,
MicroMAX, and Landmark Geo-dataWorks are trademarks
of Landmark Graphics Corporation.
ORACLE is a registered trademark of Oracle Corporation.
IBM is a registered trademark of International Business Machines, Inc.
AIMS is a trademark of GX Technology.
Motif, OSF, and OSF/Motif are trademarks of the Open Software Foundation, Inc.
UNIX is a registered trademark in the United States and other countries, licensed exclusively
through X/Open Company, Ltd.
SPARC and SPARCstation are registered trademarks of SPARC International.
Solaris, Sun, and NFS are trademarks of Sun Microsystems, Inc.
X Window System is a registered trademark of X/Open Company, Ltd.
SGI is a trademark of Silicon Graphics Incorporated.
All other brand or product names are trademarks or registered trademarks of their respective
companies or organizations.
Note
The information contained in this document is subject to change without notice and should not be
construed as a commitment by Landmark Graphics Corporation. Landmark Graphics Corporation
assumes no responsibility for any errors that may appear in this manual. Some states or jurisdictions
do not allow disclaimers of express or implied warranties in certain transactions;
therefore, this statement may not apply to you.
Contents
Agenda . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
Agenda - Day 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
Agenda - Day 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Agenda - Day 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
About The Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
How To Use The Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
VSP Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17-1
Archival Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26-1
Agenda
Agenda - Day 1
Introductions, Course Agenda
ProMAX User Interface Overview
Trace Display Functionality
Agenda Day 2
Isolate the Upgoing Energy
After choosing the desired wavefield separation technique, we will isolate the upgoing energy.
Isolate the Downgoing Energy
After choosing the desired wavefield separation technique, we will isolate the downgoing energy.
Deconvolution
Agenda Day 3
3-Component Transforms and first break picking
3-Component Hodogram Analysis
Dataset Preparation
VSP Modelling
Cross Well Tomography Demonstration
Archive Methods
Generation of CGM Plots
Preface
About The Manual
This manual is intended to accompany the instruction given during the
standard ProMAX VSP User Training course. Because of the power and
flexibility of ProMAX VSP, it is unreasonable to attempt to cover all
possible features and applications in this manual. Instead, we try to
provide key examples and descriptions, using exercises which are
directed toward common uses of the system.
The manual is designed to be flexible for both you and the trainer.
Trainers can choose which topics to present, and in what order, to best
meet your needs. You will find it easy to use the manual as a
reference document for identifying a topic of interest and moving
directly into the associated exercise or reference.
Conventions
Mouse Button Help
This manual does not refer to using mouse buttons unless they are
specific to an operation. MB1 is used for most selections. The mouse
buttons are numbered from left to right as follows:
MB1 refers to an operation using the left mouse button. MB2 is the
middle mouse button. MB3 is the right mouse button.
Actions that can be applied to any mouse button include:
Shift-Click: Hold the shift key while depressing the mouse button.
Drag: Hold down the mouse button while moving the mouse.
Mouse buttons will not work properly if either Caps Lock or Num Lock is on.
Exercise Organization
Each exercise consists of a series of steps that will build a flow, help
with parameter selection, execute the flow, and analyze the results.
Many of the steps give a detailed explanation of how to correctly pick
parameters or use the functionality of interactive processes.
The editing flow examples list key parameters for each process of the
exercise. As you progress through the exercises, familiar parameters
will not always be listed in the flow example.
The exercises are organized such that your dataset is used throughout
the training session. Carefully follow the instructor's directions when
assigning geometry and checking the results of your flow. An
improperly generated dataset or database may cause a subsequent
exercise to fail.
Chapter 1
Directory Structure
/advance (or $PROMAX_HOME)
The directory structure begins at a subdirectory set by the $PROMAX_HOME
environment variable. This variable defaults to /advance, which is used in all
the following examples. Set the $PROMAX_HOME environment variable to
/my_disk/my_world/advance to have your Advance directory tree begin below
the /my_disk/my_world subdirectory.
/advance/sys
/advance/sys is actually a symbolic link to subdirectories unique to a
given hardware platform, such as:
/advance/rs6000 for IBM RS6000 workstations,
/advance/sparc for Sun Microsystems Sparcstations running SunOS,
/advance/solaris for Sun Microsystems Sparcstations and Cray 6400
workstations running Sun Solaris OS,
/advance/sgimips for Silicon Graphics Indigo workstations using the 32-bit
operating system, and
/advance/sgimips4 for Silicon Graphics Indigo and Power Challenge
workstations using the 64-bit operating system.
This link allows a single file server to contain the executable programs and
libraries for all machine types owned by a client. Machine-specific executables
that are invoked from the UNIX command line are located in /advance/sys/bin.
Operating-system-specific executables and libraries called from ProMAX are
located under /advance/sys/exe. These machine-dependent directories are named
after machine type, not manufacturer, to accommodate different architectures
from the same vendor. Accommodating future hardware architectures simply
involves adding new subdirectories. Unlike the menu, help, and miscellaneous
files, a single set of executables is capable of running all Advance products,
provided the proper product-specific license identification number is in place.
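As an illustration only (ProMAX itself is started from shell scripts, not Python), the following minimal sketch shows how the $PROMAX_HOME default and the platform-specific subdirectories described above could be resolved. The helper functions and the use of Python's platform module are assumptions for the example, not part of the product.

    import os
    import platform

    # Platform subdirectory names are taken from the text above; the keys are
    # illustrative values returned by platform.system()/platform.release().
    def platform_subdir():
        system = platform.system()
        if system == "AIX":
            return "rs6000"
        if system == "SunOS":
            # SunOS 4.x -> /advance/sparc, Solaris (SunOS 5.x) -> /advance/solaris
            return "solaris" if platform.release().startswith("5") else "sparc"
        if system.startswith("IRIX"):
            return "sgimips4" if system == "IRIX64" else "sgimips"
        return "sys"  # fall back to the generic symbolic link

    def promax_bin_dir():
        home = os.environ.get("PROMAX_HOME", "/advance")  # defaults to /advance
        return os.path.join(home, platform_subdir(), "bin")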
/advance (or $PROMAX_HOME)
    /sys
        /exe
            exec.exe
            super_exec.exe
            *.exe
            /frame
            /sdi
            /3rd party software
        /bin
            promax
            promax3d
            promaxvsp
        /lib
            lib*.a
    /port
        /plot
        /help
            /promax
                *.lok - Frame help
                *.help - ASCII help
            /promax3d
            /promaxvsp
        /menu
            /promax
                *.menu (Processes)
            /promax3d
            /promaxvsp
        /misc
            *_stat_math
            *.rgb - colormaps
            ProMax_defaults
    /etc
        config_file
        product
        install.doc
        pvmhosts
        qconfig
        license.dat
    /scratch
    /queues
    /data (or $PROMAX_DATA_HOME)
        /area
            /line
/advance/port
Software that is portable across all platforms is grouped under a single
subdirectory, /advance/port. This includes menus and Processes
(/advance/port/menu), help files (/advance/port/help), and miscellaneous files
(/advance/port/misc). Under the menu and help subdirectories are additional
subdirectories for each ProMAX software product. For instance, under
/advance/port/menu, you will find subdirectories for ProMAX 2D (promax),
ProMAX 3D (promax3d), and ProMAX VSP (promaxvsp). Menus for additional
products are added as new subdirectories under /advance/port/menu.
/advance/etc
Files unique to a particular machine are located in the /advance/etc
subdirectory. Examples of such files are the config_file, which contains
peripheral setup information for all products running on a particular
machine, and the product file, which assigns unique pathnames for
various products located on the machine.
/advance/scratch
The scratch area defaults to /advance/scratch. This location can be overridden
with the environment variable PROMAX_SCRATCH_HOME.
All ProMAX development tools are included within the following subdirectories:
/advance/sys/lib, /advance/sys/obj, /advance/port/src, /advance/port/bin,
/advance/port/include, and /advance/port/man.
/Data (or $PROMAX_DATA_HOME)
    /Area                          Area subdirectory and its files
        DescName
        Project
        /Line
            DescName
            17968042TVEL
            31790267TGAT
            36247238TMUT
            12345678CIND
            12345678CMAP
            /12345678              Dataset subdirectory and its Header and
                HDR1               Trace Dataset files
                HDR2
                TRC1
                TRC2
            /Flow1                 A Flow subdirectory and its files
                DescName
                TypeName
                job.output
                packet.job
            /OPF.SIN               Database subdirectory and a non-spanned file
                OPF60_SIN.GEOMETRY.ELEV
            /OPF.SRF               Database subdirectory and a span file
                #s0_OPF60_SRF.GEOMETRY.ELEV
Program Execution
User Interface ($PROMAX_HOME/sys/bin/promax)
Interaction with ProMAX is handled through the User Interface. As you
categorize your data into Areas and Lines, the User Interface
automatically creates the necessary UNIX subdirectories and provides
an easy means of traversing this data structure.
However, the primary function of the User Interface is to create, modify,
and execute processing flows. A flow is a sequence of processes that you
perform on seismic data. Flows are built by selecting processes from a
list, and then selecting parameters for each process. A typical flow
contains an input process, one or more data manipulation processes, and
a display and/or output process. All information needed to execute a flow is
held within a Packet File (packet.job) in each Flow subdirectory. This Packet
File provides the primary means of communication between the User Interface
and the Super Executive program. See the next section, Super Executive Program.
In addition, the User Interface provides utility functions for copying,
deleting, and archiving Areas, Lines, Flows, and seismic datasets; accessing
and manipulating ordered database files and parameter tables; displaying
processing histories for your flows; and providing information about
currently running jobs.
The basic flow of data through the Executive pipeline is shown below:
Each individual process will not operate until it has accumulated the
necessary traces. Single trace processes will run on each trace as the
traces come down the pipe. Multi-channel processes will wait until an
entire ensemble is available. For example, in the example flow, the F-K
filter will not run until one ensemble of traces has passed through the
DDI and AGC. If we specify that Trace Display should display 2
ensembles, it will not make a display until two shots have been
processed through the DDI, AGC, and F-K filter. No additional traces
will be processed until Trace Display is instructed to release the traces
that it has displayed and is holding in memory by clicking on the traffic
light icon or terminating its execution (but continuing the flow).
Note: All the processes shown are Executive processes and thus operate
in the pipeline. An intermediate dataset and an additional input tool
process would be needed if a stand-alone process were included in this flow.
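As a rough illustration only (this is not ProMAX code), the following Python sketch mimics the buffering behavior described above: a single-trace tool emits each trace as it comes down the pipe, while an ensemble tool holds traces until a complete ensemble, identified here by its FFID, has arrived. The function names and the FFID key are assumptions for the example.

    def single_trace_tool(traces, process):
        # runs on each trace as it comes down the pipe
        for trace in traces:
            yield process(trace)

    def ensemble_tool(traces, process, ensemble_key):
        # holds traces until an entire ensemble is available, then processes it
        buffer, current = [], None
        for trace in traces:
            key = ensemble_key(trace)
            if current is not None and key != current:
                yield from process(buffer)   # a full ensemble has come down the pipe
                buffer = []
            buffer.append(trace)
            current = key
        if buffer:
            yield from process(buffer)       # flush the final ensemble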
A pipeline process must accept seismic traces from the Executive,
process them, and return the processed data to the Executive. The table
below describes the four types of processes defined for use in the
Executive.
(Figure: two example Executive pipelines. One runs Disk Data Input, AGC, F-K Filter, and Decon into Disk Data Output; the other runs Disk Data Input, NMO, CDP Stack, and Bandpass Filter into Disk Data Output. One pipe must complete successfully before a new pipe will start processing.)
The four types of Executive processes are simple tools, ensemble tools, complex tools, and panel tools.
Database Structure
Organization
The Ordered Parameter Files contain information applying to a line and
its datasets. For this reason, there can be many datasets for a single set
of Ordered Database Files.
Ordered Parameter Files, unique to a line, reside in the Area/Line
subdirectory. The Ordered Parameter Files database stores information
in structured categories, known as Orders, representing unique sets of
information. In each Order, there are N slots available for storage of
information, where N is the number of elements in the order, such as the
number of sources, number of surface locations, or number of CDPs.
Each slot contains various attributes, in various formats, for one element of the Order.
The Orders include TRC (Trace), SRF (Surface Location), SIN (Source Index Number), CDP (Common Depth Point), CHN (Channel), OFB (Offset Bin), PAT (Pattern), and XLN (Crossline).
OPF Matrices
The OPF database files can be considered to be matrices. Each OPF is indexed
against the OPF counter, and each index holds various single-valued attributes.
Note the relative size of the TRC OPF compared to the other OPF files: the TRC
is by far the largest contributor to the size of the database on disk.
Database Structure
The ProMAX database was restructured for the 6.0 release to handle
large 3D land and marine surveys. The features of the new database
structure are listed below:
Each order is contained within a subdirectory under Area and Line. For
example, the TRC is in the subdirectory OPF.TRC.
There are two types of files contained in the OPF subdirectories:
Index: Holds the list of parameters and their formats. There is only
one index file in each OPF subdirectory. The exception to this is the
LIN OPF. The LIN information is managed by just two files, one
index and one parameter, named LIN.NDX and LIN.REC.
Span: These files are denoted by the prefix #s; non-span files lack this
prefix. The TRC, CDP, SIN, and SRF OPF parameters are span files. The first
span for each parameter is always written to primary storage. Subsequent
spans are created in the secondary storage partitions listed in the
config_file with the OPF keyword, in a round-robin fashion, until the
secondary storage is full; after that, subsequent spans are created in
primary storage. Span files may be moved to any disk partition within the
secondary storage list for read purposes. Span file size is currently fixed
at 10 megabytes, or approximately 2.5 million 4-byte values per span file.
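As a rough sketch of this bookkeeping (not ProMAX code; the helper names and the is_full check are assumptions), the fixed span size and round-robin placement described above could be modeled like this:

    SPAN_VALUES = 10 * 1024 * 1024 // 4   # 10 MB spans, ~2.5 million 4-byte values

    def span_number(element):
        # which span file holds the value for a given element of the Order
        return element // SPAN_VALUES

    def partition_for_new_span(span_no, primary, secondaries, is_full):
        # first span always goes to primary storage; later spans go round-robin
        # across the OPF secondary storage partitions until they are full,
        # then back to primary storage
        if span_no == 0:
            return primary
        candidates = [p for p in secondaries if not is_full(p)]
        if not candidates:
            return primary
        return candidates[(span_no - 1) % len(candidates)]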
The index file for each Order must remain in the primary storage
partition. Span parameter files may be moved and distributed
anywhere within primary and secondary storage.
Within each Order, there are often multiple attributes, with each
attribute being given a unique name.
Parameter Tables
Parameter Tables are files used to store lists of information in a very
generalized structure. To increase access speed and reduce storage
requirements, parameter tables are stored in binary format. They are
stored in the Area/Line subdirectory along with seismic datasets, the
Ordered Parameter Files database files (those not in separate
directories), and Flow subdirectories.
Parameter Tables are often referred to as part of the database. Parameter
tables differ from the OPF database in that OPF attributes hold one value per
element, whereas parameter tables hold more than one value per element. For
example, a velocity function contains multiple velocity-time pairs at one CDP.
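Purely as an illustration of that contrast (the numbers and the Python representation are made up, not ProMAX structures):

    # An OPF attribute: one value per element of the Order
    fb_pick = {1: 825.0, 2: 812.0, 3: 840.0}          # e.g. FB_PICK per trace

    # A parameter table: a list of values per key, here several
    # (time in ms, RMS velocity) pairs at a single CDP
    rms_velocity = {
        101: [(0, 6000.0), (500, 6400.0), (1200, 7100.0)],
    }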
Exercise
1. In a flow-building window, add the Access Parameter Tables process
to a flow and view the parameter menu with MB2.
Find the line: VEL: RMS (stacking) velocity and click on Invalid.
The list of parameter tables for RMS Velocity appear.
2. Click on Edit and select the name of the file to export.
A Parameter Table spreadsheet appears with CDP, TIME, and
SEMB_VEL columns.
3. Click on File and select Export.
An ASCII File Export window appears with export information for
quality control before actually creating the ASCII file.
4. Click on File.
A new window appears with the path to your working directory.
5. Enter a filename after the last / and click OK.
The window disappears and a dashed line appears in the ASCII File
Export window.
6. Click on Format.
An Export Definition Selection window appears.
7. Type in a selection name and click on OK.
The Column Export Definition window appears.
8. Fill the Column Export Definition with starting and ending column
numbers, then click on Save.
When you fill in the start and end columns for a particular column
definition, the contents of the column appear in the ASCII File Export window.
Exercise
1. In a flow-building window, add the Access Parameter Tables process
and view the parameter menu with MB2.
Find the line: VEL: RMS (stacking) velocity and click on Invalid.
The list of Parameter Files (tables) for RMS velocity appears.
2. Click on Create.
The cursor will move to the top of the table name column, enter a
new velocity file name. After typing a name, press Return. A
Parameter Table spreadsheet appears with CDP, TIME, and
VEL_RMS columns.
3. Click on File and choose Import.
Two new windows appear: ASCII/EBCDIC File Import and File
Import Selection. In the File Import Selection window, choose the
path to the file containing velocity information to import and click
on OK. The import information appears in the ASCII/EBCDIC File
Import window.
Disk Datasets
ProMAX uses a proprietary disk dataset format that is tailored for
interactive processing and random disk access. Disk dataset files can
span multiple filesystems, allowing for unlimited filesize datasets.
A typical set of files might look like this:
/advance/data/usertutorials/landexample/12345678CIND
/advance/data/usertutorials/landexample/12345678CMAP
/advance/data/usertutorials/landexample/12345678/TRC1
/advance/data/usertutorials/landexample/12345678/HDR1
These files are described in more detail in the table below.
Table 4: Composition of a Seismic Dataset. A dataset consists of four kinds of files: Trace (...TRCx), Trace Header (...HDRx), Map (...CMAP), and Index (...CIND).
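As a sketch only (the helper is hypothetical, not a ProMAX utility), the four kinds of files that make up one disk dataset can be located from the dataset number used in the example above:

    import glob, os

    def dataset_files(line_dir, dataset_id="12345678"):
        # ...CIND and ...CMAP sit in the Area/Line subdirectory;
        # the HDRx and TRCx files sit in the dataset subdirectory
        return {
            "index":   os.path.join(line_dir, dataset_id + "CIND"),
            "map":     os.path.join(line_dir, dataset_id + "CMAP"),
            "headers": glob.glob(os.path.join(line_dir, dataset_id, "HDR*")),
            "traces":  glob.glob(os.path.join(line_dir, dataset_id, "TRC*")),
        }

    dataset_files("/advance/data/usertutorials/landexample")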
Secondary Storage
In a default ProMAX configuration, all seismic dataset files reside on a
single disk partition. The location of this disk partition is set in the
$PROMAX_HOME/etc/config_file with the entry:
primary disk storage partition: /advance/promax/data 20
In addition to the actual trace data files, the primary storage partition
will always contain your flow subdirectories, parameter tables, ordered
parameter files, and various miscellaneous files. The ...CIND and
...CMAP files which comprise an integral part of any seismic dataset are
always written to primary storage.
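Illustrative only: parsing the primary-storage entry shown above into its parts. The helper is hypothetical, and the meaning of the trailing number is not described here, so it is simply carried along.

    def parse_storage_entry(line):
        label, value = line.split(":", 1)
        parts = value.split()
        return {"label": label.strip(), "path": parts[0], "extra": parts[1:]}

    parse_storage_entry("primary disk storage partition: /advance/promax/data 20")
    # {'label': 'primary disk storage partition', 'path': '/advance/promax/data', 'extra': ['20']}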
Since the primary storage file system is of finite size, ProMAX provides
the capability to have some of the disk datasets, such as the ...TRCx and
...HDRx files, and some of the ordered parameter files span multiple
disk partitions. Disk partitions other than the primary disk storage
partition are referred to as secondary storage.
All secondary storage disk partitions must be declared in the appropriate
$PROMAX_HOME/etc/config_file.
Tape Datasets
Tape datasets are stored in a proprietary format, similar to the disk
dataset format, but incorporating required structures for tape input and
output. Tape input/output operates either in conjunction with a tape
catalog system, or without reference to the tape catalog. The tape
devices used for the Tape Data Input, Tape Data Insert, and Tape Data
Output processes are declared in the ProMAX device configuration
window. This allows access to tape drives anywhere on a network. The
machines that the tape drives are attached to do not need to be licensed
for ProMAX, but the fclient.exe program must be installed.
Getting Started
The first step in using the Advance tape catalog is to create some labeled
tapes.
The program $PROMAX_HOME/sys/bin/tcat is used for tape labelling,
catalog creation and maintenance, and for listing current catalog
information. The program is run from the UNIX command line.
The following steps are required to successfully access the tape catalog:
1. Label tapes
2. Read and Display tape labels
3. Add labeled tapes to a totally new catalog
Before adding the tapes to a new catalog, it is a good idea to visually
inspect the contents of the label information file for duplicate or missing
entries. The contents typically look like:
0 AAAAAA 0 1 4
1 AAAAAB 0 1 4
2 AAAAAC 0 1 4
3 AAAAAD 0 1 4
4 AAAAAE 0 1 4
The fields are: volume serial number (digital form), volume serial
number (character form), tape rack slot number, site number, and media
type, respectively. You can manually edit these fields.
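Illustrative only: reading the label information file shown above into a list of records. The field meanings are as described in the text; the helper itself is not a ProMAX utility.

    def read_label_file(path):
        records = []
        with open(path) as f:
            for line in f:
                vsn_digital, vsn_char, slot, site, media = line.split()
                records.append({
                    "vsn_digital": int(vsn_digital),   # volume serial number (digital form)
                    "vsn_char": vsn_char,              # volume serial number (character form)
                    "rack_slot": int(slot),            # tape rack slot number
                    "site": int(site),                 # site number
                    "media_type": int(media),          # media type
                })
        return records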
4. Write a label information file from the existing catalog
5. Add labeled tapes (and datasets) to the existing catalog
6. Merge an additional catalog into the existing catalog
7. Delete a dataset from the catalog
Chapter 2
Getting Started
ProMAX is built upon a three-level organizational model referred to as
Area/Line/Flow. When entering ProMAX for the first time, you will
build your own Area/Line/Flow workspace. As you add your own Area,
you may want to name it with reference to a geographic area that
indicates where the data were collected, such as Onshore Texas, or use
your name, such as daves area. Line is a subdirectory of Area which
contains a list of 2D lines from an area or a 3D survey name. After
choosing a line from the Line menu or adding a new line, the Flow
window will appear. Name your flows according to the processing
taking place, such as brute stack.
Look at the Menu Map figure on the previous page. This figure refers to
other menus you can use to access your datasets, database entries and
parameter tables. These features will be discussed later.
Exercise
In this exercise, you will build a workspace and look at some of the
available options.
Initiating a ProMAX session can be done in a variety of ways. Typically
your system administrator will create a start-up script or make a UNIX
alias, and set certain variables within your shell start-up script to make
this easy. This topic is discussed in the system overview chapter.
1. Type promax.
A product name window should pop up followed by the Area
window. The window, as shown below displays a list of all available
Areas. Other information is listed, such as owner, date and UNIX
name.
(Area window: callouts identify Global Commands, the Area Menu, Configuration Options, the Processing Queues Window, Exit ProMAX, and Job Notification and Control.)
The black horizontal band below the menu contains the mouse button
helps. Mouse button helps describe the possible actions at the
current location of the cursor.
Below the mouse button helps are options to Exit ProMAX,
configure the queues and user interface, as well as check on the
status of jobs. These options will be discussed at length later.
The options running across the top of this menu (Select, Add,
Delete, Rename, and Permission) are called global options. To use one,
first click on the option and then click the line on your screen with your
Area name. The Copy option works differently, providing popup menus to
choose Areas not displayed in this window.
(Line window: callouts identify Global Commands, the Area Name, Available Seismic Lines, the Active Command, the Line Menu, Configuration Options, the Processing Queues Window, Exit ProMAX, and Job Notification and Control.)
5. Add a Line using the same steps as you did for adding an Area.
The Flow window appears with the following new global options:
(Flows Menu: callouts identify Global Commands, Available Flows, the Active Command, and the new Access Datasets, Change Products, and Access Database options.)
Exercise
Upon completion of the previous exercise, you are in the ProMAX flow
building menu (see below). From here, you will construct your flows by
ordering processes and selecting the necessary parameter information.
Once the flow is ready, you will execute it and look at the results.
1. Look at the flow building menu.
The screen is split into two sides: a list of processes on the right and
a blank tablet below the global options on the left. You will select
from the processes on the right and add them to the left.
The list of available processes is very long. This list is ordered from
top to bottom into a general processing sequence with I/O processes
at the top and poststack migration tools further down on the list.
There is a scroll bar to help you look at the list. There are also
options available to hide processes in the secondary or More list (use
the mouse button helps).
You can customize the list to have only the processes you use most
often displayed.
2. Move your cursor into different areas of the display, such as into the
processes list, the blank tablet and the various options.
The mouse button helps are sensitive to the current cursor location.
3. Global Options for flow editing are as follows.
small job batch queues. In order for this option to work your
system administrator should have enabled the queues when
ProMAX VSP was installed.
Note: When using Screen Display, the mouse button helps are
correct and MB1 will Execute With Normal Wait on display.
When this option is used, the Notification window first shows the
job has started and is then waiting for display. By clicking on the
Notification window, a new Processing Jobs window appears
where it waits for your response. Clicking on Wait for Display
prompts the display to come to the foreground of the monitor.
This option is useful if you want to work on something else and
do not want to be interrupted by the display taking over the
monitor.
Exit: Brings you back to the menu listing of all your flows.
Delete
Execute
View
Exit
Trace Display
Number of ENSEMBLES/screen-----------------------10
----Default all remaining parameters for this process----
7. Select SEG-Y Input parameters.
Delete
Execute
View
Exit
SEG-Y Input
>Trace Display<
Disk Data Output
2. Add a dataset to the datasets list in the Disk Data Output menu.
We will use this dataset in the next few exercises instead of reading
the SEGY file again.
3. Execute the flow.
4. When complete, go to the datasets list and press MB2 on the file
name you just created.
You should see a summary print that shows that you have a data set
with 80 ensembles and 240 traces.
Delete
Execute
View
Exit
>SEG-Y Input<
Disk Data Input
Trace read option-------------------------------------------------Sort
Select primary trace header entry------------------------FFID
Sort order list for dataset--------------------------------1-80(2)/
Trace Display
Primary trace LABELING header entry-----------------FFID
6. Toggle the Trace Display active and the Disk Data Output inactive
using MB3.
7. Select new Disk Data Input parameters.
Your first look at the executed job was all of the shots with all
channels. After clicking the Page Forward icon, you saw the next set
of shots. What if you wanted to look at every other shot? What if
you only wanted to look at a single channel for each shot? These
options, and many more, are available in Disk Data Input.
8. Click on the Get All for Trace Read Option.
This toggles to Sort and the menu will automatically add three new
options:
Sort order for dataset: Allows you to restrict the amount of data
brought into the flow, such as channels 1-60.
Delete
Execute
View
Exit
>SEG-Y Input<
Disk Data Input
Trace read option-------------------------------------------------Sort
Select primary trace header entry----------------------CHAN
Select secondary trace header entry-------------------FFID
Sort order list for dataset--------------------------------1:*/
Trace Display
Primary trace LABELING header entry--------------CHAN
Secondary trace LABELING header entry-------------FFID
>Disk Data Output<
Choose CHAN from the popup menu for primary trace header entry
and FFID for secondary.
17. Change the Sort order for dataset to 1:*.
This format specifies that ensembles are built by recording channel
number, with the traces within each ensemble ordered by FFID.
Check the formats and examples for hints.
18. Execute the flow.
You will only see the trace from channel 1 for all the shots, displayed
as a single ensemble.
In this case you may elect to set the primary annotation to CHAN
and the secondary to FFID.
This is a typical sort type for VSP data.
19. Select to Exit/Stop the flow.
Chapter 3
Trace Display
When you execute your job, the following display appears:
(Trace Display Window: callouts identify the Icon Bar, the Active Icon, the Menu Bar, the Mouse Help area, and the Data Display.)
Icon Bar
The following is a brief description of the Trace Display icons, located
along the side border:
Save Image: Save the current screen image. Annotation and picked
events are saved with the trace data.
Zoom Tool: Click and drag using MB1 to select an area to zoom. If
you release MB1 outside the window, the zoom operation is
canceled. If you just click MB1 without dragging, this tool will
unzoom. You can use the zoom tool in the axis area to zoom in one
direction only.
Annotation Tool: When active you can add, change, and delete text
annotation in the trace and header plot areas. The pointer changes to
a circle when it is over text annotation. You can move an annotation
by clicking and dragging MB1, or add new annotation by clicking
MB1 when the pointer is not over an existing annotation. When the
pointer is over an existing annotation, click MB2 to delete the text
or MB3 to edit the text or change its color.
Menu bar
File has five options available in a pulldown menu. You can save your
picks, move to the next screen, make a hardcopy plot or exit Trace
Display. You have two choices when you exit. You can exit and stop the
flow, or you can exit and let the flow continue without Trace Display.
Note: Use caution when using the stop option. For example, you use
Disk Data Input to read in ten ensembles with a Disk Data Output and a
Trace Display. If you execute this flow and use the Exit/Stop Flow
option after clicking through the first five ensembles, then you will
actually output five ensembles in the output dataset as opposed to
writing out ten ensembles.
View has five options in a pulldown menu. You can control the trace
display, the trace scaling, and trace annotation parameters. You can also
choose to plot a trace header above the trace display and edit the color
map used for color displays.
For example, to create a parameter table file with a list of traces to kill,
click on Picking and a menu of parameter table choices appears. Click
on Kill traces. Another window appears for selecting a previous kill
parameter file or creating a new file.
When you create a new file, another window appears listing trace
headers to choose from for a secondary key.
Picking Tool: This appears when one or more pick objects from the
Picking menu are selected. A small window with the file name will
appear on the right hand side of the screen. This means the file is
open and ready to be filled with the primary and secondary key
values of killed traces. When active, click on MB1 to pick a point
on a trace or click and drag to pick a range of traces. When the
mouse is over a picked point, the pointer shape changes into a
circle. Click and drag using MB1 to move a picked point. Use MB2
to click on a single point to delete it, or click and drag over a range
of points to delete them. Click MB3 for additional picking options.
Holding MB1 down and dragging it across several traces allows for
a consecutive number of traces to be added. To select traces from
the next shot use the Traffic light icon. The created Kill traces file
remains open and waiting for more traces to be added to the file.
To create a new parameter table such as a reverse traces file, use the Pick
icon again and select Reverse traces from the menu. After creating a new
file with a new name, choose a secondary key of CHAN. The new file
name appears in the small window on the right hand side of the screen
below the kill traces file name. The kill traces file is no longer
highlighted, meaning that it is inactive and the reverse traces file is
highlighted. If you have chosen traces to kill and reverse on the screen,
the active parameter file will have the chosen traces overplotted with a
red line. The traces chosen for the inactive table(s) will be overplotted
in blue. This helps you distinguish which file is active and which file is
inactive. Traces are only added to the active file. Select or delete traces
in the same manner using the mouse button helps at the bottom. To go
back to adding to the kill traces file, click on the kill file and use MB1
to toggle that file to active. The reverse traces file table is no longer
highlighted in black and any reverse traces picked on the screen are
overplotted in blue.
Some parameters require a top and a bottom pick, such as a surgical
mute. Once you have picked the top of the mute zone, click MB3
anywhere inside the trace portion of Trace Display. A new menu
appears allowing you to pick an associated layer (New Layer). You can
also snap your pick to the nearest amplitude peak, trough or zero
crossing.
Miscellaneous time gates are parameter tables used for such procedures
as picking a window for a deconvolution operator design gate or
windows for time variant filtering or scaling. For this exercise pick a
decon design gate with a secondary key of AOFFSET. Picking a
miscellaneous time gate is also done in two steps. First, pick the top of
the gate by selecting points to be connected with MB1. Because
AOFFSET is the secondary key, the picks at the corresponding offset on
the opposite side of the shot will be displayed if you click MB3 in the
display field and choose Project from the popup menu. Then use MB3
to select an associated layer for the bottom half of the gate. In order to
pick another time gate, below or overlapping the previous, continue to
use MB3 to pick tops and bottoms. Time gates must always be picked in
pairs, otherwise your job may fail. Each time gate pair is also shown in
the legend box.
Exercise
This exercise describes the way to pick a top mute. Other parameter
tables may be picked in the same fashion. Trace kills, trace reversals and
miscellaneous time gates were discussed in the previous section.
1. Build this flow:
Editing Flow: 01- Pick Parameter Tables
Add
Delete
Execute
View
Exit
Chapter 4
Parameter Test
The Parameter Test process provides a mechanism for testing simple
numeric parameters by creating multiple copies of input traces and
replacing a key parameter in the next process in the flow with specified
test values. It automatically expands the processing flow, creating IF
conditional branches for each test value. The output consists of copies
of the input data with a different test value applied to each copy.
Parameter Test creates two header words. The first is called REPEAT.
This is the data copy number and is used to distinguish each of the
identical copies of input data. The second is called PARMTEST and is
an ASCII string, uniquely interpreted by the Trace Display processes as
a label for the traces.
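Conceptually (this is an illustration in Python, not how ProMAX implements it), Parameter Test behaves roughly like this, replicating each ensemble once per test value and tagging the copies through the REPEAT and PARMTEST headers. The function name and trace representation are assumptions.

    def parameter_test(ensembles, test_values, label):
        # e.g. test_values = [250, 500, 1000] for an AGC operator length test
        for ensemble in ensembles:
            for copy_number, value in enumerate(test_values, start=1):
                for trace in ensemble:
                    new_trace = dict(trace)
                    new_trace["REPEAT"] = copy_number             # data copy number
                    new_trace["PARMTEST"] = f"{label} = {value}"  # label for Trace Display
                    # the test value replaces the keyed parameter in the next process
                    yield new_trace, value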
Exercise
In this exercise, you will use Parameter Test to compare shot gathers
with different AGC operator lengths.
1. Build the following flow:
Editing Flow: 02- Parameter Test Example
Add
Delete
Execute
View
Exit
Parameter Test
Enter Parameter Values: ---------------------250|500|1000
Trace Grouping to Reproduce: ----------------------Ensemble
Trace Display
2. Read the file that we wrote to your line after reading the SEGY file.
Sort the input to have a primary sort order of CHAN and a secondary
of FFID. Get channel 1 only for all FFIDs.
split or branch your processing stream so that each copy of the data
may be processed with different parameters.
Finally, you may use a process called Trace Display Label to generate a
header word for posting a label on the display.
Exercise
Incorporate Reproduce Traces with IF and ENDIF to compare
processed and unprocessed data. In this exercise, we will compare the
first shot of the AGC dataset to a version with true amplitude recovery.
It is always a good idea to have a control copy, the original input, for
further comparison. This flow illustrates how to compare these three
copies.
1. Build the following flow:
Editing Flow: 03 - IF/ELSEIF Conditional
Add
Delete
Execute
View
Exit
Reproduce Traces
Trace grouping to reproduce: ----------------------Ensembles
Total Number of datasets: ----------------------------------------3
IF
SELECT Primary trace header word:-----------------Repeat
SPECIFY trace list:----------------------------------------------------1
Trace Equalization
Trace Display Label:--------------- EQ
ELSE
Trace Display Label:-------- Original Input
ENDIF
Trace Display
2. Read the file that we wrote to your line after reading the SEGY file.
Sort the input to have a primary sort order of CHAN and a secondary
of FFID. Get channel 1 only for all FFIDs.
3. In Reproduce Traces, enter 3 for the total number of datasets.
You will generate two additional copies, one ensemble (record) at a
time.
4. Select Repeat for Select Primary trace header word in IF and
ELSEIF.
IF acts as the gate keeper, providing the mechanism for selecting or
restricting traces which will be passed into a particular branch of the
flow. Header words are used (just as in Disk Data Input) to uniquely
identify the traces to include or exclude in a particular branch.
In the first IF conditional, select REPEAT as the primary trace
header and 1 (copy number) as the trace list entry. Data copy 1 is
passed to AGC in this example. The ELSEIF condition passes the
second data copy number (REPEAT=2) to Trace Equalization.
The ELSE process selects all traces not previously selected with IF
or ELSEIF. In our case, having selected two of the three copies of
data for filtering leaves only the third data copy (REPEAT=3) for
the ELSE branch. In this example, no additional processing is
applied to this copy. It is the control copy.
5. Use Trace Display Label to create labels for each copy.
Label the copies according to their unique processing. For example,
label the first copy with AGC, the second with EQ and the final copy
with Original Input.
6. Select to use a hand input design gate for the Trace Equalization
and use the default parameters.
7. Modify Trace Display to do each of the following in two different
executions:
Exercise
In this exercise you will run Interactive Spectral Analysis in the simple
mode.
There are many different displays that you can interactively turn on
and off. Remember that you have control of your display when you
are selecting parameters.
6. Select Options/PreFFT Time Window, and turn on the Boxcar.
You have a lot of control from within the interactive session to
modify your analysis.
7. Activate the Zoom icon to enlarge the trace data.
In this case, your F-X spectrum is zoomed as well.
8. From the File pull down select to Exit and Stop the flow.
Exercise
1. Rerun the flow after changing to Single Subset mode.
Editing Flow: interactive spectral analysis
Add
Delete
Execute
View
Exit
Now the trace data in the top middle of the screen is the subset of
data you just defined with the corresponding spectra also displayed.
4. Click on the Select Rectangular Region again.
5. Click MB2 inside the zoom window on the left data display
window to drag the box to another location and click MB2 again to
redisplay the zoom window.
6. Try resizing the selection window with the other mouse button
options.
7. From the File pulldown select to Exit and Stop the flow.
Exercise
1. Rerun the flow after changing to the Multiple Subset mode.
Editing Flow: interactive spectral analysis
Add
Delete
Execute
View
Exit
Chapter 5
Geometry Diagram: the source is located 500 ft east of the well; the surface elevation and Kelly Bushing elevation are 0 ft; recording levels run from 8150 ft to 12100 ft at a 50 ft level increment, and each level has a three-component recording tool.
Chapter 6
Delete
Execute
View
Exit
SEGY Input
Type of storage to use: ----------------------------- Disk Image
Enter DISK file path name: ----------------------------------------------------------------------/misc_files/vsp/vsp_segy
MAX traces per ensemble: ----------------------------------------3
Remap SEGY header values -----------------------------------No
Chapter 7
VSP Geometry
VSP Geometry Assignment takes advantage of the simplicity of the
spatial relationship between the source and receiver positions in VSP
data. This helps to minimize the input required to describe the geometry.
Some VSP data is very complex and incorporates a lot of varied
information to describe the geometry. Some holes are deviated
(crooked) and you may have inclination and azimuth information at all
recorded depth levels. In these cases you may also have two sets of
depth information: log depth and vertical depth. The Spreadsheets have
been written to handle all such information.
Our case is very simple, using a non-deviated hole.
Exercise
1. Build a flow to Assign VSP Geometry.
Editing Flow: Spreadsheet / Geometry
Add
Delete
Execute
View
Exit
This step completes building the look up tables and other database
finalization functions.
24. Select the Finalize Database option and click on the OK button.
You should see a window indicating that the VSP geometry
finalization has completed successfully.
25. Dismiss the Status window by clicking on OK.
26. Click on the Cancel button in the binning dialog box to dismiss this
window.
Exercise
This exercise QCs the headers.
1. Build a new flow to re-read the data and plot it to check the new
values in the trace headers.
Editing Flow: qc geometry load
Add
Delete
Execute
View
Exit
Trace Display
Number of ENSEMBLES per screen --------------- 80
Primary trace LABELING ------------------------------------ FFID
Secondary trace LABELING ----------------------- REC_ELEV
INCREMENT for Secondary annotation ------------------- 12
2. Input the traces with the new geometry and check the headers with
the Header Dump capabilities in Trace Display.
Plot 80 ensembles and annotate each FFID and every 12th receiver
elevation.
You should see the correct shot X value and receiver elevation values.
NOTE: The receiver depths go into receiver elevation, not receiver depth.
Chapter 8
Delete
Execute
View
Exit
Trace Length
New trace length ----------------------------------------------- 2000
If you are successful, the Trace Display plot should look as follows:
Delete
Execute
View
Exit
Trace Length
New trace length ----------------------------------------------- 2000
>Trace Display<
Chapter 9
Delete
Execute
View
Exit
Trace Display
Number of ENSEMBLES per screen -------------------------- 1
Primary trace LABELING ---------------------------------- CHAN
Secondary trace LABELING ----------------------- REC_ELEV
INCREMENT for Secondary annotation ------------------- 12
2. In Disk Data Input, input the previously created file containing the
vertical trace.
This file is one ensemble of all traces from channel 1.
3. In Trace Display, plot 1 ensemble.
You may also want to set the annotation heading to be CHAN on the
first line and then plot every 12th receiver elevation on the second.
4. Execute the Flow.
5. Select the Picking pulldown menu, and choose to edit the first
arrivals in the database.
You will be prompted to select a type of attribute. You will write
these first break times to an attribute of type GEOMETRY in the
TRC database called FB_PICK.
6. The Pick editing icon on the left side of the plot will automatically
be selected for you.
7. Pick the arrivals with the rubber-band and then snap to the desired
phase with MB3.
It is suggested to pick the first strong, continuous peak.
8. Edit any picks as you see fit.
9. Exit the program to save the picks to the database.
Chapter 10
Exercise
1. Build the following flow to compute the average velocity:
Editing Flow: generate avg.velocity function
Add
Delete
Execute
View
Exit
Chapter 11
Delete
Execute
View
Exit
Velocity Manipulation*
Type of velocity table to input ----- Average Vel in Depth
Get velocity table from database entry ------------------ Yes
Select input velocity database entry --------------------------------------------------from raw first break pick times
Combine a second velocity table ---------------------------- No
Resample the input velocity table? ------------------------- No
Shift or stretch the input velocity table ------------------- No
Type of parameter table to output ---------------------------------------------------------------- Stacking (RMS) Velocity
Select output velocity database entry ------------------------------------------------------------------- from raw average
Spatially resample the velocity table ---------------------- No
Output a single average velocity table -------------------- No
Smooth velocity field --------------------------------------------- No
Vertically resample the output velocity table ----------- No
Adjust Output velocity by percentage --------------------- No
2. Input the average velocity function that was computed from the first
arrival times before smoothing and convert it to an RMS function.
You might want to name the output table from raw average.
3. Display the output function using the point editor.
4. Rerun the same flow using the smoothed average function that you
created earlier. Convert it to an RMS function using the option:
from smoothed average.
Editing Flow: 06- compute RMS from AVG vel
Add
Delete
Execute
View
Exit
Velocity Manipulation*
Select input velocity database entry ---------------------------------------------------------------------smoothed version
Select output velocity database entry ----------------------------------------------------------- from smoothed average
If you zoom in around a single output point on either plot, you will see
that there are actually two points at each time knee separated by only a
couple of ms.
Delete
Execute
View
Exit
Velocity Manipulation*
Select input velocity database entry ---------------------------------------------------------------------smoothed version
Select output velocity database entry ------------------------------------------------------- from smoothed average
Spatially resample the velocity table ---------------------- No
Output a single average velocity table -------------------- No
Smooth velocity field --------------------------------------------- No
Vertically resample the output table ----------------- Yes
Time step sizes for the output table ------------------- 48
Adjust Output velocity by percentage --------------------- No
Delete
Execute
View
Exit
Database/Header Transfer
Direction of transfer -- Load TO trace headers FROM db
Number of parameters -------------------------------------------- 1
First database parameter --- TRC GEOMETRY FB_PICK
First header entry --------(FB_PICK) First break pick time
Parameter Test
Enter parameter VALUES ----------------- 2|4|6|8|10|12
Trace grouping to reproduce ---------------------- Ensembles
Trace Display
Number of ENSEMBLES per screen -------------------------- 7
2. Input the file with only the vertical traces and process all traces.
Delete
Execute
View
Exit
Database/Header Transfer
Direction of transfer -- Load TO trace headers FROM db
Number of parameters -------------------------------------------- 1
First database parameter --- TRC GEOMETRY FB_PICK
First header entry --------(FB_PICK) First break pick time
>Parameter Test<
VSP True Amplitude Recovery
Final Selected Parameters
Chapter 12
Exercise
1. Build the following flow to apply the first break pick times as a static
to flatten the downgoing energy.
Editing Flow: 08- wavefield separation
Add
Delete
Execute
View
Exit
Header Statics
Bulk shift Static -------------------------------------------------- 100
What about previous statics ---- Add to previous statics
Apply how many static header entries --------------------- 1
First header word to apply --------------------------- FB_PICK
How to apply header statics ------------------------- Subtract
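In effect, assuming the statics are applied as simple time shifts, each trace is shifted by the 100 ms bulk static minus its own first break pick time; a trace whose first break was picked at 850 ms, for example, is shifted by 100 - 850 = -750 ms, so every first arrival lines up at about 100 ms and the downgoing energy is flattened.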
Delete
Execute
View
Exit
Header Statics
Bulk shift Static ------------------------------------------------------ 0
What about previous statics ---- Add to previous statics
Apply how many static header entries --------------------- 1
First header word to apply ------------------------------alinstat
How to apply header statics -------------------------------- Add
Delete
Execute
View
Exit
Header Statics
Bulk shift Static ------------------------------------------------------ 0
What about previous statics ---- Add to previous statics
Apply how many static header entries --------------------- 1
First header word to apply ------------------------------alinstat
How to apply header statics -------------------------------- Add
You may find that setting the trace display to display 3 vertical
panels will help you do this comparison.
Exercise
1. Expand the previous flow to do 2D spatial filtering to estimate and
subtract the downgoing energy.
Editing Flow: wavefield separation
Add
Delete
Execute
View
Exit
Bandpass Filter
Default all parameters EXCEPT
Ormsby filter frequency values ------------- 8-12-100-125
Test values of 3|5|7|9|11|13|15|19 for the number of traces in the filter.
3. In 2D Spatial Filtering, apply a Single Sample, Simple 2D Median
Filter to Subtract the downgoing energy from the total flattened
wavefield.
In the Minimum Number of traces for Subtraction parameter, use a
minimum of 3 traces in the filter and fold live traces back over the
edge to make sure that there are always enough traces for the filter.
4. Apply a fairly wide open zero phase Ormsby Band Pass filter to
suppress any adverse side effects of the median filter.
For this data at a 4 ms sample rate, apply a filter of 8-12-100-125.
5. Display the results using Trace Display.
You may find that setting the maximum time to display to 700 ms
prior to display may save you some time in the zooming process.
You may also find that setting the display to plot 5 horizontal panels
will be helpful.
You may also want to reset the Trace Display to do one vertical panel
with 1 ensemble per screen and use the screen swapping capabilities
within Trace Display to compare the different results.
6. After selecting the length of filter that works best, rerun the flow to
QC the output section.
Toggle the Parameter Test inactive and input the proper filter length
(11) in the 2D Spatial Filter process instead of the 99999 for the
parm test.
7. Add a Trace Display Label after the Median Filter to annotate these
data for future reference.
F-K Analysis
Using an F-K filter to separate the input data into various dip components is
another very effective means of separating the flattened downgoing energy from
the dipping upgoing energy. We can plot the flattened data in the F-K plane and
estimate various fan filters and/or polygonal filters to isolate one of the dip
components.
Using the Interactive F-K Analysis process, you can interactively test various
reject and accept F-K polygons to keep either the upgoing or the downgoing energy.
Exercise
1. Expand the previous flow to add an F-K Analysis to pick the fan
filter, or polygon filters to apply.
Editing Flow: wavefield separation
Add
Delete
Execute
View
Exit
-------------->Trace Display<
Note: Toggle the median filter, bandpass filter, and Trace Display
steps inactive.
2. Select F-K Analysis parameters.
There are 80 traces per panel and the traces are separated by 50 ft.
Add a Parameter Table name for the FK-Polygon.
We may elect to use polygon editing or we may just measure
velocities to use a fan function in the F-K filter process.
3. Use the dx/dt tool to measure the apparent velocity of the upgoing
energy in flattened space on the F-K Analysis section.
The velocity should be about 6700 ft/sec.
4. Pick a positive and negative velocity cut to apply as a fan filter in FK Filter.
Numbers like -4000 and +20000 are good choices for a reject filter
to keep the upgoing.
You may choose numbers like -20000 and +20000 as an accept filter
to keep the downgoing.
Note: If you are working with polygons, be careful about how you
set the Accept and Reject options.
5. Generate the Filtered Output panel to QC the polygon and
parameters.
Delete
Execute
View
Exit
F-K Analysis
-------------->Trace Length<
>Trace Display<
2. Input your velocities as a fan filter and/or try any picked polygons
to Reject the downgoing and keep only the upgoing. Use the
defaults for padding and tapering.
Suggested parameters are to use a fan filter of -4000 and +20000 ft/sec in
reject mode. With this velocity the K-space wrap parameter should be set to
No. QC the output with F-K Analysis.
a design gate from which the dip component matrix weights are
computed
In general the application and subtraction gates are the entire time range
of the data. The design gates should be restricted to a good data zone.
For VSP data, this is the area near the first arrivals.
When operating on data that has been flattened on the first arrivals, the
low-percentage eigenvectors represent the flattened downgoing energy and the
high-percentage ones the dipping upgoing energy. In this exercise you will
design the eigenvectors over a time window around the first arrivals, using a
fairly short spatial window, and then subtract the low-percentage values from
the input to extract the upgoing energy.
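As a rough sketch of the underlying idea only (this is a generic eigenimage/SVD filter written with numpy, not the ProMAX Eigenvector Filter, and the function and its defaults are assumptions), subtracting the low-percentage eigenimages of a flattened panel leaves an estimate of the upgoing wavefield:

    import numpy as np

    def subtract_eigenimages(panel, start_pct=0.0, end_pct=10.0):
        # panel: (traces x samples) array flattened on the first arrivals.
        # The eigenimages between start_pct and end_pct of the range
        # approximate the flattened downgoing energy; subtract them.
        u, s, vt = np.linalg.svd(panel, full_matrices=False)
        n = len(s)
        lo = int(np.floor(n * start_pct / 100.0))
        hi = max(lo + 1, int(np.ceil(n * end_pct / 100.0)))
        downgoing = (u[:, lo:hi] * s[lo:hi]) @ vt[lo:hi, :]
        return panel - downgoing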
Exercise
1. Alter the existing flow to use the Eigenvector Filter to separate the
wavefields.
Editing Flow: wavefield separation
Add
Delete
Execute
View
Exit
Editing Flow: wavefield separation
Add
Delete
Execute
View
Exit
--------------Parameter Test
Enter parameter VALUES ------------------- 3|7|11|15|19
Trace grouping to reproduce ---------------------- Ensembles
Eigenvector Filter
Mode ----------------------------- Subtract Eigenimage of Zone
Get matrix design gates from DATABASE --------------- No
SELECT Primary header word ---------------------------- FFID
SPECIFY design time gate ---------------------------- 1:0-500/
Get application gates from DATABASE ------------------- No
SELECT Primary header word ---------------------------- FFID
SPECIFY application gate -------------------------- 1:0-2000/
Get Subtraction gate from DATABASE -------------------- No
SELECT Primary header word ---------------------------- FFID
SPECIFY subtraction gate -------------------------- 1:0-2000/
Type of Computation ------------------------------------------ Real
Horizontal window width -------------------------------- 99999
Start percent of eigenimage range ---------------------------- 0
End percent of eigen image range -------------------------- 10
Re-apply trace mutes after filter --------------------------- Yes
--------------Trace Display
Note: Toggle the F-K filter and F-K Analysis inactive in the flow
1. Design a test of the Eigenvector Filter over the first arrivals.
Use a constant design window for all FFIDs from 0-500 ms and
apply the filter over the entire time range (0-2000 ms). Also, subtract
over the entire time range from 0-2000 ms. Test values of 3, 7, 11,
15, and 19 for the trace window width and subtract the first 10
percent of the eigenimages.
2. You may want to test various panel widths, design gates, and
eigenimage percentage ranges.
Note that you cannot use the Parameter Test sequence to test the
percentage ranges.
3. Try various Trace Display configurations:
1) Each output ensemble individually and then swap the screens.
2) All ensembles on the same screen.
Note that the Eigenvector Filter is very difficult to test because the
percentage range to keep varies as a function of the filter length.
Display.
3. If desired, an AGC or other type of gain function may be applied.
4. Experiment with various display options to compare the results
from the different separation techniques.
Display all three 80 trace ensembles on the screen at the same time.
2. Suppose that the F-K Filter was selected as the best option to isolate
the upgoing energy.
3. Comment out all other processes and add a Header Statics process to
remove the previous statics.
Set the number of header statics to apply to 0.
4. Add in a Disk Data Output to save the upgoing energy in a file for
later processing.
Editing Flow: wavefield separation
Add
Delete
Execute
View
Exit
-------------Header Statics
Bulk shift static ------------------------------------------------------ 0
What about previous statics -- Remove previous statics
Apply how many static header entries --------------------- 0
HOW to apply header statics ------------------------------- Add
Note: You may want to toggle the Trace Display inactive for this
exercise to ensure that all traces get processed.
If you leave the Trace Display turned on, you will find that the
display is not very useful because we have returned the data to
original recorded time but the display is set for the first 700 msec
only.
3. Change the dataset name in Disk Data Output to save the downgoing
energy for later processing.
4. In this case, also make sure that the Header Statics process is
toggled inactive.
Why do we leave the statics applied to the downgoing data?
Delete
Execute
View
Exit
Trace Display
Number of ENSEMBLES per screen -------------------------- 3
1. In the Disk Data Input and Disk Data Insert processes, get three input
files: the original input, the separated upgoing with statics removed,
and the separated downgoing with the statics still applied.
2. In Trace Display, select to plot three ensembles.
3. Plot the first break picks on the traces.
They should plot at about the start of the reflection data on the
upgoing.
Note: This is meaningless on the downgoing.
Chapter 13
VSP Deconvolution
Deconvolution of VSP data involves generating an inverse filter
designed to compress an input wavelet to a zero-phase wavelet. The
input wavelet is commonly extracted from the separated downgoing
energy. A filter is designed to compress this energy into a zero-phase
wavelet centered on the first arrival time. This filter is then applied to
the upgoing data to remove the source signature from the reflection
energy and output a zero-phase wavelet at the actual time of the
reflecting interface.
A design gate is commonly determined to isolate the wavelet from which
the inverse filter is designed. This design gate generally starts at zero
time, envelops the first arrivals, and extends in time for a couple of
hundred milliseconds. The maximum time of the gate typically comes
immediately after the last consistent reverberation of the first arrival.
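The following is a generic Wiener-Levinson sketch of that idea in Python/NumPy, not the exact ProMAX Filter Generation / VSP Deconvolution algorithm: build the autocorrelation of the (muted) downgoing wavelet, add a few percent white noise to the zero lag, solve the normal equations for an inverse filter that shapes the wavelet toward a spike at the chosen reference time, and convolve that filter with the upgoing traces. The filter length, spike lag, and noise level below are illustrative.

import numpy as np

def design_inverse_filter(wavelet, filt_len, white_noise_pct=3.0, spike_lag=0):
    # least-squares inverse that shapes `wavelet` toward a spike at sample `spike_lag`
    # (assumes filt_len <= len(wavelet))
    ac = np.correlate(wavelet, wavelet, mode="full")[len(wavelet) - 1:][:filt_len]
    ac[0] *= 1.0 + white_noise_pct / 100.0                       # prewhitening
    R = np.array([[ac[abs(i - j)] for j in range(filt_len)] for i in range(filt_len)])
    g = np.array([wavelet[spike_lag - i] if 0 <= spike_lag - i < len(wavelet) else 0.0
                  for i in range(filt_len)])                     # desired-output crosscorrelation
    return np.linalg.solve(R, g)

def apply_filter(trace, filt):
    # convolve the designed inverse filter with an upgoing trace
    return np.convolve(trace, filt, mode="full")[:len(trace)]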
Delete
Execute
View
Exit
Trace Display
2. Input the separated, flattened downgoing data.
3. All of the Trace Display parameters may be defaulted.
4. Using the Pick pulldown menu, select to pick a Bottom Mute to be
applied prior to inverse filter design.
When prompted for a header entry to use for the mute function,
select FFID as the header entry over which to vary the mute start
times. Set the bottom mute to start at about 400 ms.
5. Exit the program to save the mute parameter table.
Delete
Execute
View
Exit
Trace Muting
Reapply previous mutes --------------------------------------- NO
Mute time reference ---------------------------------------- Time 0
Type of mute -------------------------------------------------- bottom
ending ramp --------------------------------------------------------- 30
EXTRAPOLATE mute times --------------------------------- YES
get mute file from the database ---------------------------- Yes
Select mute parameter file -- decon design bottom mute
Trace Display
2. Apply the mute that was just picked as a Bottom Mute.
3. Display the result.
Delete
Execute
View
Exit
Trace Display
2. Input the separated, flattened downgoing data and apply the bottom
mute to limit the design gate.
3. Select Filter Generation parameters.
After applying a Hanning window taper over 100% of the input
wavelets (zero percent flat), design 1000 ms inverse filters with time
zero on the input trace at 100 ms and 3% white noise, and output the
filters to disk.
4. Plot the output from Filter Generation.
The plotted traces are the actual filters to be applied.
Deconvolution Filter QC
Exercise
1. Expand the previous flow to apply the filters and QC the results on
the down going data.
Editing Flow: 12 - VSP decon
Add
Delete
Execute
View
Exit
Trace Display
2. Input the separated, flattened downgoing data.
3. Select VSP Deconvolution parameters.
Apply filters that have been mixed over 5 FFIDs and exclude 1 filter
trace on each end. Make sure that the zero reference time of the filter
is correct. This should be set to 500 ms, which is the center time of
the filter traces.
4. Add a label for display
Is the peak of the zero phase wavelet at the correct time?
Exercise
1. Build a flow to apply the decon filters to the upgoing data.
Editing Flow: 12 - VSP decon
Add
Delete
Execute
View
Exit
>Trace Muting<
>Filter Generation<
VSP Deconvolution
Trace Display Label
Trace Label --------------------------------- upgoing with decon
Trace Display
Number of ENSEMBLES per screen -------------------------- 2
2. Input the separated upgoing data at original recorded time.
Exercise
1. Expand the previous flow to read two files from disk and then do a
spectral analysis on each.
Editing Flow: 13 - spectral analysis
Add
Delete
Execute
View
Exit
Chapter 14
Exercise
1. Build a flow to pick the top and bottom mute to define the corridor
to stack.
Editing Flow: corridor stack
Add
Delete
Execute
View
Exit
Trace Display
2. In Disk Data Input, input the deconvolved upgoing data file.
3. Use Trace Display to plot the trace.
You may find that adjusting the minimum and maximum display
time will help you position your mutes.
4. From the picking pulldown menu, select to define a top mute.
Define the mute to set the Top of the corridor.
When prompted, select FFID as the header entry over which to vary
the mute start times.
Note: This mute should be about the same time as the first arrivals.
5. From the picking pulldown menu, select to define a bottom mute.
Define the mute to set the Bottom of the corridor. It is normal to
make the corridor about 100 ms wide.
Exercise
1. Expand the existing flow to add in two Trace Muting processes.
Editing Flow: corridor stack
Add
Delete
Execute
View
Exit
Trace Muting
Re-apply previous mutes-----------------------------------------No
Mute time reference------------------------------------------Time 0
TYPE of mute---------------------------------------------------Bottom
Starting ramp--------------------------------------------30.
EXTRAPOLATE mute times?-----------------------Yes
Get mute file from the DATABASE?-------------------------Yes
SELECT mute parameter file--------------------------------------------------------------------corridor stack bottom mute
--------Trace Display
2. In Disk Data Input, input the deconvolved upgoing data file.
3. In Trace Muting, apply the Top and Bottom mutes.
Do not forget that one is a Top mute and the other is a Bottom mute.
4. Display the result with Trace Display.
Exercise
1. Expand the existing flow to add in the processes associated with
VSP Corridor Stack and optional enhancement programs.
Editing Flow: 14 -corridor stack
Add
Delete
Execute
View
Exit
Editing Flow: 14 -corridor stack
Add
Delete
Execute
View
Exit
3. Apply the One Way NMO correction using the RMS velocity
function that was generated earlier for the Spherical Divergence
Correction.
Use the resampled RMS from the smoothed average.
4. In VSP Corridor Stack, apply the Top and Bottom mutes and add
the first arrival times from the header as a static.
Make 5 copies of a mean stack trace. For display purposes, apply a
bulk shift static correction of -900 ms.
5. Write the Corridor Stack traces to a disk dataset.
6. If desired, add in the AGC and/or Bandpass Filter before and/or
after stack to help with the cosmetic appearance of the stack traces.
7. Add a new Trace Display to plot the corridor stack.
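Conceptually, the corridor stack built in steps 4 and 5 above amounts to the sketch below (Python/NumPy, heavily simplified: the real mutes are picked interactively and the static comes from the trace headers). Each upgoing trace is shifted by its first-break time so reflections line up in two-way time, only a short corridor below the shifted first arrival is kept, and the corridor zone is stacked into a single trace that is then replicated for display.

import numpy as np

def corridor_stack(upgoing, fb_times_ms, dt, corridor_ms=100.0, n_copies=5):
    # upgoing: (ntraces, nsamples) deconvolved upgoing data at recorded time
    # fb_times_ms: first-break time per trace, in milliseconds
    ntr, ns = upgoing.shape
    t_ms = np.arange(ns) * dt * 1e3
    shifted = np.zeros_like(upgoing)
    mask = np.zeros_like(upgoing)
    for i in range(ntr):
        n_shift = int(round(fb_times_ms[i] / (dt * 1e3)))        # static = first-break time
        shifted[i, n_shift:] = upgoing[i, :ns - n_shift]
        top = 2.0 * fb_times_ms[i]                               # shifted first arrival
        mask[i] = (t_ms >= top) & (t_ms <= top + corridor_ms)    # the corridor window
    fold = np.maximum(mask.sum(axis=0), 1)
    stack = (shifted * mask).sum(axis=0) / fold                  # mean stack of the corridor
    return np.tile(stack, (n_copies, 1))                         # e.g. 5 copies for display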
Exercise
1. Build the following flow:
Editing Flow: splice corr stk into stack
Add
Delete
Execute
View
Exit
Bandpass Filter
Default all parameters
Trace Display
2. In Disk Data Input, input the Final Stack file.
3. In Trace Label, add a label called Stack.
4. In Splice Datasets, splice in the Corridor Stack at CDP Bin Number
820 and pad with 3 dead traces.
5. Apply a bandpass filter and amplitude scaler (AGC) for cosmetic
purposes.
Chapter 15
Delete
Execute
View
Exit
Velocity Manipulation*
Type of velocity table to input ----- Average Vel in Depth
Get velocity table from database entry ------------------ Yes
Select input velocity database entry --------------------------------------------------from raw first break pick times
Combine a second velocity table ---------------------------- No
Resample the input velocity table? ------------------------- No
Shift or stretch the input velocity table -------------------- No
Type of parameter table to output --------------------------------------------------------------------- Interval Vel in Depth
Select output velocity database entry ------------------------------------------------------------------- from raw average
Spatially resample the velocity table ---------------------- No
Output a single average velocity table -------------------- No
Smooth velocity field --------------------------------------------- No
Vertically resample the output velocity table ----------- No
Adjust Output velocity by percentage --------------------- No
We will not do any editing, so you can output to the same table as
you are reading from.
Are there any problems with this interval velocity function?
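To see why the raw function is problematic, recall how an interval velocity in depth is obtained from an average velocity in depth: each interval velocity is the layer thickness divided by the difference in vertical one-way times. A minimal sketch (Python, illustrative variable names):

def interval_from_average(depths, v_avg):
    # depths (ft) and average velocities (ft/s) from the first-break picks
    v_int = []
    for (z1, va1), (z2, va2) in zip(zip(depths, v_avg), zip(depths[1:], v_avg[1:])):
        t1, t2 = z1 / va1, z2 / va2          # one-way vertical times
        v_int.append((z2 - z1) / (t2 - t1))  # interval velocity for the layer
    return v_int

With raw, unsmoothed picks, closely spaced levels make (t2 - t1) very small, so the interval velocities oscillate wildly and can even go negative. That is why the next exercise rebuilds the function from the smoothed average velocity instead.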
Exercise
1. Expand the flow to generate a new interval velocity vs. depth
function from the smoothed average velocity vs. depth function.
Editing Flow: 16- generate intv-depth function
Add
Delete
Execute
View
Exit
>Velocity Manipulation*<
>Velocity Viewer/Point Editor*<
Velocity Manipulation*
Select input velocity database entry ---------------------------------------------------------------------smoothed version
Select output velocity database entry ----------------------------------------------------------- from smoothed average
Vertically resample the output table ----------------- Yes
Time step sizes for the output table ------------------- 48
[Velocity comparison display: the interval velocity function from the raw average versus the function from the smoothed average.]
Note: There are two points very close together on both functions, so you
can elect to resample the function in Velocity Manipulation prior to
output.
Exercise
One of the requirements for the VSP migration is that the velocity field
span the entire range of the output image area. Since we may want to
image events recorded below the bottom of the well, we must expand the
velocity field in depth to cover the proposed image area. We will also
resample the output interval velocity vs. depth function to the original
sample interval of 50 ft.
1. Edit the existing flow.
Editing Flow: 16- generate intv-depth function
Add
Delete
Execute
View
Exit
>Velocity Manipulation*<
>Velocity Viewer/Point Editor*<
Velocity Manipulation*
Select input velocity database entry ---------------------------------------------------------------------smoothed version
Select output velocity database entry ----------------------------------------------------------- from smoothed average
Vertically resample the output table ----------------- Yes
Time step sizes for the output table ------------------- 50
5. Remember to go into edit mode; you may elect to edit the velocity
function in preparation for migration.
Edit the smoothed version and output a velocity function for the
VSP-CDP transform and migration.
Chapter 16
Delete
Execute
View
Exit
VSP/CDP Transform
Horizontal binning interval -------------------------------------- 5
CDP at which to extract vel function --------------------- 100
Specify trace length of output trace in msec -------- 3000
Select how velocity is to be specified ------------ Database
Select a velocity file ---------------- from smoothed average
Ray trace interval ------------------------------------------------- 20
Datum elevation ----------------------------------------------------- 0
Allowable percentage of moveout stretch ---------------- 50
Trace Display
Primary trace LABELING header ----------------------- NONE
Secondary trace LABELING header ---------------- RBIN_X
2. In Disk Data Input, input the upgoing data with decon applied.
3. Select the VSP/CDP Transform parameters.
Use the interval velocity function that was created from the
smoothed average function and edited. Build a trace every 5 ft. to 3
sec, and ray trace every 20 ft.
4. Use Trace Label to label the traces as the VSP-CDP transform. In
Disk Data Output, output the file.
5. Plot the output traces using Trace Display.
Plot 1 ensemble.
You will probably want to make the display window smaller in order
to see the traces more clearly.
6. Look at the headers of the traces and find the new header word that
you can use to best annotate above the traces.
Exercise
1. Expand the existing flow to redisplay the VSP-CDP transform.
Editing Flow: VSP-CDP transform
Add
Delete
Execute
View
Exit
Bandpass Filter
Default all parameters
Trace Display
2. In Disk Data Input, input the VSP-CDP transform.
3. Apply a bandpass filter and AGC for cosmetic appearance.
4. Display the traces using Trace Display.
Plot the traces by annotating the RBIN_X header word above the
traces. This will plot a value representing the distance from the
borehole above the traces.
Note: This is a user-defined attribute.
You may want to enhance the appearance of the transform by
applying a trace mix and/or adjusting the scaling and/or bandpass
filter parameters.
Chapter 17
VSP Migration
For VSP surveys where the source is offset from the well location, it is
possible to migrate the recorded data. The migration produces a high
spatial resolution seismic section that allows you to image reflection
events in the vicinity of the borehole, looking in the plane defined by the
well bore and the shot location. Unlike the VSP-CDP transform, the
migration can look on the opposite side of the borehole. This may help
identify faults and/or the attitude of dipping reflection events.
The migration differs from the VSP-CDP transform in that the transform
is a simple mapping function that takes a point on a shot-to-receiver trace
and maps that point to a single reflection point in the subsurface. The
migration operation is similar to that for surface seismic data, in that it
attempts to place a data point at all locations from which it could have
originated. The migration can be a time-consuming process depending
on the size of the output image area, the selected algorithm, and the size
of the dataset.
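To make the "simple mapping function" concrete, here is a constant-velocity, straight-ray sketch of the VSP-CDP mapping in Python (the ProMAX process ray traces through the interval velocity model instead; the formula below is only a flat-layer, single-velocity illustration). A sample recorded at time t on a trace with receiver depth zr and source offset xs is assigned to one reflection point:

import numpy as np

def vsp_cdp_map(t, zr, xs, v):
    # t: recorded reflection time (s); zr: receiver depth (ft)
    # xs: source offset from the well (ft); v: assumed constant velocity (ft/s)
    r2 = (v * t) ** 2 - xs ** 2
    if r2 <= 0.0:
        return None                            # earlier than any possible reflection
    d = 0.5 * (zr + np.sqrt(r2))               # reflector depth (image-receiver construction)
    if d < zr:
        return None                            # sample lies above the direct arrival
    x_p = xs * (d - zr) / (2.0 * d - zr)       # lateral position of the reflection point
    return x_p, d

Binning x_p at the chosen horizontal interval and outputting the amplitude at depth d (or at the two-way vertical time 2*d/v) builds the transform panel; the migration, by contrast, spreads each sample over all points it could have come from.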
Exercise
1. Build the following flow to migrate the VSP data:
Editing Flow: VSP migration
Add
Delete
Execute
View
Exit
Chapter 18
Exercise
1. Build a flow to Assign VSP Geometry.
Editing Flow: Spreadsheet / Geometry
Add
Delete
Execute
View
Exit
[Survey geometry diagram (layout not recoverable from this copy): source and receiver X and Y coordinates for channel groups 1-3, 4-6, 7-9, 10-12, and 13-15, with a shot at X = 100 on the surface and annotations for the log depth difference and the true vertical depth difference between levels; values shown include 50, 929.92, 995, 1000, 1050, 1070.71, and 1100.]
16. Open the Bin menu and select to Assign trace geometry by pattern
information.
Chapter 19
Chapter 20
What are the best primary and secondary sort orders for picking
analysis time gates?
How many traces and recording levels do I have per shot record?
Exercise
1. Build a flow to plot the input data.
Editing Flow: level statics - vertical stack
Add
Delete
Execute
View
Exit
Receiver Elevation
SHT_GRP
If these header words did not already exist, how could you build
them?
Exercise
In this exercise, we will pick the level statics correlation time gate.
1. Edit the flow to toggle the VSP Level Statics process inactive.
Editing Flow: level statics- vertical stack
Add
Delete
Execute
View
Exit
We could combine all of the traces into one ensemble and then
pick the times as a function of receiver elevation
2. In Disk Data Input, sort the input with a primary sort key of CHAN
and a secondary of REC_ELEV.
This will combine all traces into one ensemble with the traces
ordered as a function of the receiver elevation.
3. Pick a miscellaneous time gate with a secondary key of rec_elev
and select times on the first trace and last trace about 50 ms before
the first arrivals.
4. Using MB3, Project the pick times to all of the other traces.
You should see that all traces recorded at the same receiver elevation
have the same time.
5. Add a New Layer using MB3 to this table.
Pick the bottom time of the correlation gate about 100 ms below the
top time.
6. Use MB3 to Project the times to the other traces.
Exit the Trace Display program and save the table to disk.
Exercise
1. Expand the flow to add the VSP level Statics process:
Editing Flow: level statics - vertical stack
Add
Delete
Execute
View
Exit
In our case we have two header words to choose from, the Receiver
Elevation and the SHT_GRP. We will use the SHT_GRP header
word for this exercise.
There are a maximum of 5 shots in a group.
3. The maximum separation between groups of SHT_GRP must be set
to a value less than 1.
4. Analyze the vertical Recording Channel Number from each shot
[channel 1].
5. You can expect a maximum static shift of about 5 ms.
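The essence of this kind of level static is a small cross-correlation search. The sketch below (Python/NumPy) shows only the basic idea, not the VSP Level Statics algorithm itself: trace and pilot are assumed to be the gated vertical-channel data inside the correlation window picked earlier, with the pilot, for example, a stack of the shots at the same level, and the search is limited to the roughly 5 ms expected above.

import numpy as np

def level_static(trace, pilot, dt, max_shift_ms=5.0):
    # trace, pilot: equal-length windows from the correlation gate
    max_lag = int(round(max_shift_ms / (dt * 1e3)))
    best_lag, best_cc = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            cc = np.dot(trace[lag:], pilot[:len(pilot) - lag])
        else:
            cc = np.dot(trace[:lag], pilot[-lag:])
        if cc > best_cc:
            best_lag, best_cc = lag, cc
    return best_lag * dt * 1e3   # positive value: the trace arrives late relative to the pilot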
Exercise
1. Expand the flow to add the Header Statics, Apply Fractional Statics
and the Trace Display:
Editing Flow: level statics - vertical stack
Add
Delete
Execute
View
Exit
Exercise
With a little rearranging we can produce a comparison plot to look at the
data before and after the level statics application.
1. Expand the flow to compare the traces before and after level statics
application.
Editing Flow: level statics - vertical stack
Add
Delete
Execute
View
Exit
Chapter 21
Chapter 22
Level Statics and Vertical Stack for Multi Component / Multi Level
Exercise
1. Build the following flow to plot the input data:
Editing Flow: level statics
Add
Delete
Execute
View
Exit
Trace Display
Specify display END time-------------------------------400
Number of ensembles(line segments)/screen------------10
2. In Disk Data Input, input the synthetic shot record dataset.
This dataset can be found in the VSP tutorials area.
3. In Trace Display, plot 10 ensembles.
4. Estimate the time of the first arrivals for each set of shots.
In the next exercise we will need some time gate information.
At approximately what time are the first arrivals on this dataset for
each set of 5 shots?
Exercise
1. Expand the flow to compute and apply level_statics.
Editing Flow: level statics
Add
Delete
Execute
View
Exit
Header Statics
First header word to apply:--------------------------LVL_SHFT
of the first arrivals. This analysis window will be constant for the
first 5 FFIDs and change to a new constant for the second 5.
1:100-200/5:100-200/6:50-150/10:50-150
3. Read the VSP Level Statics helpfile to determine the name of the
Header Attribute to apply as a static in Header Statics.
4. After applying the LVL_SHFT statics using the Headers, apply the
fractional remainder with Apply Fractional Statics.
Exercise
With a little rearranging we can produce a comparison plot to look at the
data before and after the level statics application.
1. Modify the flow to compare the traces before and after level statics
application.
Editing Flow: level statics
Add
Delete
Execute
View
Exit
Reproduce Traces
Total number of datasets------------------------------------------2
IF
SELECT Primary trace header word:------------REPEATED
SPECIFY trace list:----------------------------------------------------1
ELSEIF
Trace selection MODE:-------------------------------------Include
SELECT Primary trace header word:-----------REPEATED
SPECIFY trace list:----------------------------------------------------2
ENDIF
In-line Sort
Select new PRIMARY sort key:----------------------- REPEAT
Select new SECONDARY sort key:------------------------FFID
Max. traces per output ensemble:----------------------------60
Number of traces in buffer:------------------------------------120
Ensemble Redefine
Mode of application:-------------------------------------Sequence
Max traces per output ensemble:-------------------------------6
Trace Display
Number of ensembles(line segments)/screen------------10
2. Add Reproduce Traces and IF-ELSEIF-ENDIF.
3. In Inline Sort, resort the data by Repeat number and FFID for
display.
We have a total of 60 traces per ensemble and a total of 120 traces in
the sort buffer.
4. Split the Repeat ensembles back into individual shot ensembles
using Ensemble Redefine.
We will take each sequence of 6 consecutive traces as one output
ensemble.
5. In Trace Display, plot 10 ensembles per screen and use the screen
swap functionality to compare the data before and after level static
adjustment.
Exercise
1. Modify the previous flow to vertically stack shots using hand-input
shot groups for common receiver depth levels.
Editing Flow: vertical stack
Add
Delete
Execute
View
Exit
Exercise
1. Rearrange the flow to input the data and plot it via Trace Display.
Editing Flow: vertical stack
Add
Delete
Execute
View
Exit
Chapter 23
Exercise
1. Build a flow to construct an RMS trace and display the results.
Editing Flow: three component transform
Add
Delete
Execute
View
Exit
3-Component Transforms
Header word for selecting replacement trace:----Geophone component (x,y,z)
Value of replacement trace header ---------------------------2
Select 3-component transform to apply:-----------------Sum Squares Stack
Maximum time to calculate transform (ms):----------1500
In Line Sort
Select new PRIMARY sort key:-----------------------Geophone component (x,y,z)
Select new SECONDARY sort key:------------------------FFID
Maximum traces per output ensemble:--------------------80
Number of traces in buffer:----------------------------240
Trace Display
Number of ENSEMBLES(line segments)/screen:---------1
Number of display panels:--------------------------------3
Trace Orientation:---------------------------------------Horizontal
2. In Disk Data Input, read the real data with the correct geometry in
the headers.
This file still has 3 traces per shot and has a primary sort order of
FFID.
3. Select 3-Component Transform parameters.
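For reference, the Sum Squares Stack transform selected above is essentially an energy trace built from the three components. One common definition is sketched here (whether the actual process takes the final square root is an implementation detail, so treat this as illustrative):

import numpy as np

def sum_squares_stack(x, y, z):
    # x, y, z: the three component traces of one level, as float arrays
    return np.sqrt(x ** 2 + y ** 2 + z ** 2)   # polarity-free "RMS" trace, convenient for picking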
Exercise
1. Build a flow to copy the time pick from 1 component to the other
components.
Editing Flow: copy first break picks
Add
Delete
Execute
View
Exit
4. In Assign Common Ensemble Value, copy the first break pick time
from channel 1 to the other 2 channels of each shot.
5. Transfer the copied first break times from the trace header back to
the database.
Each trace has a first arrival time in the trace header, but there is no
database attribute that holds a correct first break time for all traces.
For future reference, it is advisable to save a copy of the copied
arrival times in the database.
6. Write the output data to a new file.
Exercise
1. Display the picks in the database.
Exit from the flow. Click on the global Database button.
Use the Database display tool to graph the various picks and
compare the results.
2. Expand the flow to reread the new data file and plot the first breaks.
Editing Flow: copy first break picks
Add
Delete
Execute
View
Exit
Use the Picking pulldown menu to select the first breaks from the
trace headers, or the database.
All three traces per FFID should have the same pick time. Check the
values by using the header dump facility.
Chapter 24
Plot the first arrival times, and use the arrival times as a basis for the
analysis window. Do not output the analysis window to a time gate
file. Write the orientation values to the trace headers.
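The orientation values come from a hodogram analysis of the horizontal components over the first-arrival window. As a sketch of the principle only (not the ProMAX algorithm), the dominant particle-motion direction can be taken from the covariance of the two horizontals and used to rotate them into radial and transverse traces; the 180-degree polarity ambiguity is resolved against the vertical component, as the display comparison below illustrates. Function and variable names are illustrative.

import numpy as np

def orient_horizontals(h1, h2, window):
    # h1, h2: horizontal component traces; window: slice covering the first arrivals
    a, b = h1[window], h2[window]
    cov = np.cov(np.vstack([a, b]))             # 2x2 covariance of the particle motion
    w, v = np.linalg.eigh(cov)
    angle = np.arctan2(v[1, -1], v[0, -1])      # direction of the largest eigenvalue
    c, s = np.cos(angle), np.sin(angle)
    radial = c * h1 + s * h2                    # toward the source (up to a 180-degree flip)
    transverse = -s * h1 + c * h2
    return angle, radial, transverse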
[Display panels: (1) original vertical and two horizontal traces; (2) original vertical with radial and transverse horizontal traces; (3) oriented vertical, transverse vertical, and transverse horizontal traces; (4) hodogram of the oriented horizontal and original vertical.]
2) maximize the first trace on the third panel of traces at the same
polarity as the original vertical trace.
Delete
Execute
View
Exit
Trace Display
9. Read the file that was just created.
You will want to sort the input with a primary ensemble sort order of
hodo_typ and sort the traces within these ensembles to increase by
FFID.
10. You may want to experiment with different display options.
A best first guess would be to use Trace Display and plot 5
ensembles.
You may also want to try 1 ensemble per screen and 5 horizontal
panels.
Chapter 25
Exercise
1. Build a flow to look at the trace headers.
Editing Flow: prepare input data
Add
Delete
Execute
View
Exit
3. In Trace Display, plot the data and view the trace headers to identify
the header values that may need to be altered prior to the start of
processing.
Since we have no idea how this data is organized, use all defaults for
Trace Display except specify to plot 100 ensembles. This will help
you identify what an ensemble is and then how to deal with the data.
4. Derive an equation to use to assign the FFIDs from 1 to 80. Also
note that the Geophone (x,y,z) header word does not exist and must
be set equal to the channel number.
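As a purely hypothetical illustration of such an equation (the actual channel count per shot must be confirmed from the display in step 3), if the traces arrive in sequence with three channels per shot, the arithmetic is simply:

def rebuild_headers(seq_no, chan, chans_per_shot=3):
    # seq_no: 1-based sequential trace number; chan: recorded channel number
    ffid = (seq_no - 1) // chans_per_shot + 1   # 1..80 for 240 traces, 3 per shot (assumed)
    geo_comp = chan                             # Geophone component (x,y,z) = channel number
    return ffid, geo_comp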
Exercise
1. Expand the previous flow to rebuild the trace headers and write the
file to your own line directory.
Editing Flow:
Add
Delete
Execute
View
Exit
Chapter 26
Archival Methods
Archiving your data protects your work from system failure and may
allow you to bring data into other software packages. The archiving
methods can be run from both inside and outside the ProMAX User
Interface. In this chapter, we will discuss options for archiving your
data.
SEG-Y Output
ProMAX offers a variety of industry-standard and individual company
output formats. Of these, SEG-Y is the most common. This process can
write industry-standard SEG-Y tapes as well as frequently requested
non-standard variations of SEG-Y and IEEE format. SEG-Y Output is a
good choice for archiving a dataset that will later be loaded into a third-party
software package. This process will successfully archive data
spanning multiple disks. One drawback of this archival method is
that it will not automatically map all the ProMAX trace headers.
However, SEG-Y Output gives you the ability to map these
non-standard trace headers yourself.
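As a rough illustration of what header remapping means at the byte level (a hedged sketch, not a ProMAX tool): a SEG-Y file has a 3200-byte EBCDIC header and a 400-byte binary header, followed by a fixed 240-byte header in front of every trace, and bytes 181-240 of that trace header are reserved for optional words. The byte positions used below for the remapped values are placeholders; use whatever positions you actually choose in the exercise.

import struct

def read_remapped_words(path, ns, sample_bytes=4, n_traces=2):
    # ns: samples per trace; assumes 4-byte samples and two remapped 4-byte integers
    values = []
    with open(path, "rb") as f:
        f.seek(3600)                                           # skip the EBCDIC + binary file headers
        for _ in range(n_traces):
            hdr = f.read(240)                                  # fixed-length trace header
            sou_sloc = struct.unpack(">i", hdr[180:184])[0]    # bytes 181-184 (assumed position)
            rec_sloc = struct.unpack(">i", hdr[184:188])[0]    # bytes 185-188 (assumed position)
            values.append((sou_sloc, rec_sloc))
            f.seek(ns * sample_bytes, 1)                       # skip the trace samples
    return values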
Exercise
In this exercise, you will write a SEG-Y formatted tape, mapping some
non-standard SEG-Y headers. We will check to make sure the headers
were mapped correctly by using SEG-Y Input and Screen Display.
Depending on the availability of a tape drive on the system, this exercise
may be modified to write a SEG-Y disk image.
1. Build the following flow:
Editing Flow: SEG-Y Output
Add
Delete
Execute
View
Exit
>SEG-Y Input<
>Trace Display<
2. Select Disk Data Input parameters. Select two shots from your Raw
Shots with Geometry dataset.
Limit the dataset size for efficiency.
3. Select SEG-Y Output parameters.
Enter the tape drive device name. Select Yes to Remap SEG-Y
headers. Map the defaulted header values, sou_sloc, rec_sloc, and
cdp_sloc.
The SEG-Y format reserves bytes 181-240 for optional use. The
*_sloc trace headers are important to ProMAX so we typically write
them to the extended headers. These header values must be present
in order to automatically rebuild the database files with the Extract
Database Files process.
4. Put tape in tape drive.
5. Execute the flow.
6. Once the job is completed, build the following flow to QC the
headers.
Editing Flow: SEG-Y Out
Add
Delete
Execute
View
Exit
Trace Display
7. Select SEG-Y Input parameters.
Make sure the formats are consistent with those specified in SEG-Y
output.
8. Select Yes to Remap SEGY headers. This loads the extended
headers that you mapped with SEG-Y output.
9. Execute the flow.
10. Click on the Header icon in Trace Display to QC the headers.
The extended header values should be preserved (rec_sloc and
sou_sloc).
Exercise
In this exercise, you will view trace headers in the dataset, write a
ProMAX formatted tape and read the tape back in to make sure the
headers are preserved.
1. Exit out of ProMAX by selecting the Exit at the bottom of the User
Interface.
2. Set the environment variable BYPASS_CATALOG = t in your
ProMAX start-up script or your .cshrc file, by including the line
setenv BYPASS_CATALOG t (for the c shell).
This will deactivate the tape cataloging system. Information about
this system is located in the helpfile index under seismic datasets and
tape datasets.
3. If you set the environment variable in your .cshrc file, type source
.cshrc.
This will reinitialize your .cshrc file.
4. Type promax.
5. Build the following flow:
Editing Flow: Tape Data Output
Add
Delete
Execute
View
Exit
6. Select Disk Data Input parameters. Select two shots from your Raw
Shots with Geometry dataset.
Limit the dataset size for efficiency.
7. Execute the flow.
8. Click on the Header icon in Trace Display to view the trace headers.
9. Exit out of Trace Display
10. Toggle off Trace Display and toggle on Tape Data Output using
MB3.
Editing Flow: Tape Data Output
Add
Delete
Execute
View
Exit
UNIX tar
The UNIX tar command is handy for archiving files such as datasets,
flows, and OPFs that reside on a single disk, such as your primary disk
data storage.
Exercise
1. Put a tape in the tape drive.
2. In an X-window, change directories to your line directory using the
cd command.
3. Type ls.
This lists all the files in your line directory
4. Select the flow that you want to archive.
5. Type tar -cvf /dev/(tape drive device name, e.g. rmt0) ./(flowname).
This command copies your flow directory and the files contained
underneath that directory to tape.
6. When files are copied, type tar -tvf /dev/ (tape drive device
name) at the prompt.
This command lists the files contained on your tape to the screen. This
step should always be done when you are using tar to archive files, to
make sure the archive worked. You can also redirect the output to a
file by typing:
tar -tvf /dev/(tape drive device name) > (file name
with tape list)
If you wanted to place archived files back to disk, you would type
the following command:
tar -xvf /dev/(tape drive device name) ./(flowname).
Archive to Tape
The UNIX tar command was discussed in the previous section.
Although this works fine in many situations, ProMAX also includes an
inline archive program, Archive to Tape (sometimes referred to as ctar),
designed specifically for seismic datasets. The program ctar has some
advantages over the UNIX tar command, such as the ability to span tape
volumes on all platforms, flexible use of ProMAX's secondary storage
for seismic trace datasets, and checking for available disk space before
writing files during restore operations. Also, you may use this
functionality in conjunction with the Advance Tape Catalog.
The related process, List/Restore from Tape reads ProMAX archive
tapes and restores the data to disk.
Exercise
In this exercise, you will archive your ProMAX Area to tape, list the
tape contents and restore your Area back to disk.
1. Add an Area/Line called archive/archive with permissions of 775 or
777.
You may not need to do this in the classroom or, for that matter, at
your workplace if this Area/Line has already been created.
The purpose of creating this new Area/Line is to prevent you from
archiving a line by executing a flow from within the line to be
archived.
2. Build the following flow:
Editing Flow: ARCHIVE
Add
Delete
Execute
View
Exit
Archive to Tape
>List/Restore from Tape<
3. Select Archive to Tape parameters.
4. Click on Invalid to select an Area.
Delete
Execute
View
Exit
>Archive to Tape<
List/Restore from Tape
8. Select List/Restore from Tape parameters.
Select Simple List for Type of operation.
9. Select Catalog is Bypassed for Select Archive.
10. Click on Invalid to select a tape drive device path.
11. Execute the flow.
Choose to continue when the popup menu appears. Verify that your
Area exists on the archive tape by looking at your job.output file.
12. From the ProMAX user interface, delete the Area you just archived.
You can remove the files from within the process after archiving.
13. Change the Type of operation to Restore.
14. Execute the flow.
Choose to continue when the popup menu appears. If you view your
job.output file, you will see that the files were written to disk.
15. Exit out of ProMAX using the Exit button at the bottom and then
get back into ProMAX by typing promax.
16. Verify that your Area is restored.
Chapter 27
Emacs Editor
Cursor movement:
Use the 4 cursor arrow keys
Point the mouse cursor and click button 1
Ctrl-A Move the cursor to the beginning of the current line
Ctrl-E Move the cursor to the end of the current line
Ctrl-V Scroll the screen forward (down) one screen
Meta-V Scroll the screen backward (up) one screen
Meta-Shift-< Jump to the beginning of the file
Meta-Shift-> Jump to the end of the file
Ctrl-S Search forward for a string; (start entering string)
Ctrl-R Search backward for a string; (start entering string)
Editing:
All keyboard entry is in insert mode
Delete key Delete one character to the left of the cursor (Backspace
for DEC)
Ctrl-D Delete one character to the right of the cursor
Ctrl-K Kill to the end of the line (from the cursor)
Ctrl-Y Yank back the contents of the kill buffer (created by Ctrl-K or
Ctrl-W); cut and paste; (can move the cursor first)
Meta-X, then type repl s Search and replace; (follow prompts)
Ctrl-X, Ctrl-W Write new Emacs file; (enter path & filename)
Ctrl-X, Ctrl-S Save current Emacs file
Ctrl-X, Ctrl-F Find another Emacs file
Ctrl-X, I Insert a file at current cursor location
Ctrl-X, Ctrl-C Exit Emacs
Exiting Emacs:
Ctrl-X, Ctrl-C; (then respond Y or N to saving)
UNIX Commands
Alphabetical summary of general purpose UNIX commands used in
conjunction with ProMAX.
cat
Concatenate and Display Files
$ cat [options] [files]
Option:
-n print output line numbers with each line
cd
Change Directory
$ cd [directory]
chmod
Change Access Modes
$ chmod [options] mode names
Option:
-R recursively change directory tree
Mode can be numeric or symbolic
The symbolic case is of the form [agou][+-=][rstwx] where:
a all (user, group, and other) access permissions
g group access permissions
o other access permissions
u user access permissions
+ add the permission to current status of files
- remove the permission from status of files
cp
copy files
$ cp [options] file 1 file 2
df
Report Free Block Count
$ df [options][filesys][file]
Options:
-i print number of inodes free and in use
files df reports on file system containing files
filesys is a list of device names or mounted directory names to report
(default = all mounted)
du
Summarize Disk Usage
$ du [options][names]
Options:
-a generate entry for each file
-s only display a grand total summary (default is entry for each
directory)
names directory names or filenames
grep
Search File for Pattern
$ grep [options]expr [files]
kill
Terminate Process
$ kill [-signal] process-id
Options:
signal send signal instead of terminate
0 for process-id implies all processes resulting from current login
ln
Make Links to File
$ ln [option] file1 file2
login
Sign On to System
$ login [option][user]
ls
List Contents of Directories
$ ls [options][names]
man
Print Manual Entries
$ man -k keywords
mkdir
Create Specified Directories
$ mkdir directories
more
View file by Screenful or by Line
$ more [options][files]
Options:
-c redraw page one line at a time
-d prompt after each screenful
-f count by newlines instead of screen lines
-l treat formfeed (^L) as ordinary character
-n window size (default set with stty)
+n start viewing file at line n
-s reduce multiple blank lines to one
-u suppress terminal underlining or enhancing
+/pat start two lines before line containing pat
Enter h when more pauses for interactive options
mv
Move Files (See CP)
$ mv [options] file1 file2
Options:
- following arguments are filenames
-f force overwriting of existing files
-i interactive mode
ps
Report Process Status
$ ps [keys][-t list][process-id]
Keys:
a print all processes involving terminals
c print internally stored command name
e print both environment and arguments
g print all processes
k use /vmcore in place of /dev/kmem and /dev/mem for debugging
l long listing
n process number (must be last key)
s add size of kernel stack of process to output
tn list processes associated with terminals; n is terminal number
(must be last key)
u include fields of interest to user
U update namelist database (for speed)
v print virtual memory statistics
w 132 column output format
ww arbitrarily wide output
x include processes with no terminal
pwd
Print Working Directory Name
$ pwd
rcp
Copy Files Between Machines
$ rcp [option] file1 file2
rlogin
Login on Remote Terminal
$ [rlogin] remote [options]
Options:
-8 allow 8 bit data path
-ec specify new escape character c
-l user user is login name on remote system
-L run remote session in litout mode
remote remote host system
rlogin is optional if /usr/hosts in search path
rm
Remove Files
$ rm [options] files
Options:
- treat all following arguments as filenames
-i ask for confirmation before each delete
-r recursively delete directories
rmdir
Remove Empty Directories (See RM)
$ rmdir directories
su
Become Another User (Set User)
$ su [options][user]
tar
Tape file Archiver
$ tar [key][option][files]
Function Modifiers:
0...9 specify which tape drive to use (0 default)
b next arg is blocking factor (20 default, 20 max)
B force I/O blocking at 20 blocks per record
f arch arch is the file to be used for input/output to archives (if '-',
then stdin is read)
h follow symbolic links
l complain if all file links not found
m update file modification times
v verbose mode
w wait for confirmation after reporting filename (y causes action to
be performed)
Option:
-C dir change directory to dir
who
Who is on the System
$ who [file][am i]
Arguments:
file read instead of /etc/utmp for login information
am i output who you are logged in as
whoami
Print Effective User-Id
$ whoami
cp -r /advance/data/offshore .
cp is the copy command. The -r tells the system that you want to copy
recursively (useful for copying directory trees). The directory from
which you are copying in this case is /advance/data/offshore. Note the
final ., which denotes the target directory. The single . means the current
directory. Be careful about how you specify the target directory. If you
told the system to copy the files to a directory offshore and this directory
already exists, then the files will end up in offshore/offshore.
df
df shows the amount of free space on all the currently mounted file
systems, including remotely mounted file systems. The listing will show
you which of the file systems are remotely mounted. It is possible to
specify one file system and see the amount of free space in only that file
system. If you do not specify a file system, then df will default to
showing all the mounted file systems. There are many other options for
df which you may find useful.
du -s offshore
The du command summarizes disk usage. It can show disk usage file by
file. When the -s option is given, only a grand total summary of disk
usage is reported.
kill -9 2367
The kill command will stop a current process by sending a signal. The
process number in this case is number 2367, which was found by using
the ps command. There are many modifiers for this command, but one
which you should know is the -9. This makes it impossible for the
process to ignore the signal.
You might use this when a process is locked up and there is no other way
to stop it.
ln -s /advance/data2/oswork offshore
The ln command means link. The -s denotes a symbolic link. This can
be used to link files on different file systems. A normal link, sometimes
known as a hard link, specified as ln without the -s, cannot link between
file systems.
This symbolic link will cause the directory /advance/data2/oswork to
appear in the current directory under the name offshore. It is not a new
directory, or a copy of the oswork directory in /advance/data2. When
you access a file in your directory called offshore, you are actually
accessing the original file in the directory /advance/data2/oswork.
ps -ax
The ps (process status) command shows all of the processes currently
running on the system. The -a tells the system to display all processes
except process group leaders and processes not started from terminals.
The x shows processes without control terminals. If you do not specify
the x, then you may not see the process for which you are looking. The
-ax on Berkeley UNIX changes to -elf on System V UNIX. The l
provides a long form of the listing, -f provides a full listing of the
processes, and -e asks for every process on the system.
rcp -r neptune:/usr/disk2/offshore .
rcp is the remote copy command. The -r, as with the cp command, is the
recursive form of copy. It will copy the /usr/disk2/offshore directory and
its subdirectories from the named server. The destination directory is .,
the current working directory.
rmdir offshore
The rmdir command removes directories. In this example the rmdir
command will remove the directory offshore. rmdir will only remove an
empty directory. If you still have entries in the directory, this command
will fail. You can check the contents of the directory, to see if it contains
files you meant to keep. Or you can use the rm -r command, at your own
risk.
tar c /advance/data/offshore
The tar command (tape archive) is used for moving files to or from tape.
The c means create, so a new tape will be created. The directory to be
copied to tape is /advance/data/offshore. To copy more directories to
tape, just list them after the first directory, separated by spaces. x in
place of the c will extract files from the tape and copy them to the disk.
tar x with no files listed will read everything off the tape. tar x followed
by a file name, directory or path will only read the data if it exists on the
tape. This is a safe way to get back a specific dataset from the tape. The
v option is verbose, so that you can see what the process is doing.
Otherwise, like most UNIX processes, it is silent. You may wish to
investigate cpio as a more versatile alternative to tar.
tar c ./offshore
This tar command copies to tape the directory offshore and the files
which belong to the directory offshore. The ./ preceding offshore
indicates that offshore is a subdirectory of the current working directory.
It is generally best to use relative path names (rather than full path
names) when you are using tar.