The document discusses how contour maps summarize three-dimensional subsurface structures based on seismic or other geophysical data. Good contouring fills in reflection times and depths between data lines in a geologically plausible way. Closely spaced data lines allow more confident contouring, while widely spaced lines introduce ambiguities that could miss prospects or misinterpret structures. Specific geological processes produce characteristic contour patterns that reveal features like dips, folds, faults, and depositional landforms.
The interrelation between a three-dimensional surface and its contour map is
illustrated in Figure 1.
Figure 1
We see a fence diagram of seismic sections, cut off at an important horizon to be mapped. The surface represented by that horizon is three-dimensional, and is specified only along the seismic lines. The contour map (superimposed above the fence sections) honors the reflection-time information on each section, but also fills in the reflection times between the sections; this it does in a geologically plausible manner. There, then, is the importance of good contouring: to the extent that the contouring embodies the constraints of geological plausibility, it saves us seismic lines. Good contouring, performed in the interpretation stage, is cheap; additional seismic lines are expensive.

Today much contouring is done by machine. However, the expert interpreter must understand what the machine does, and where it is likely to fall short. Where it does fall short, he must be able to do better. And he must be skilled not only in making contour maps but in reading contour maps made by others, so that from the two-dimensional contour map he can instantly visualize the three-dimensional surface. Within a minute or two, the expert interpreter knows the structural style, the location of the trap, the location of the spill point, and the structural risk in the prospect.

SHAPES AND CONTOURS

Specific geological processes yield characteristic shapes. There is therefore a two-way interaction between the contouring of a surface and the processes that gave rise to that surface: understanding the processes helps us to do the contouring, and the message of the completed contours helps us to identify the geographical and geological setting.

We start by exploring the relation between the shape of a surface and the contour map of that surface. The contour map of a plane horizontal surface (in the jargon, a "flat" surface) is blank. If a plane surface dips, the contours are parallel and equally spaced. For a defined contour interval, widely spaced contours imply gentle dip, and closely spaced contours imply steep dip. The contours represent structural strike; locally, on each contour, the direction of steepest descent is perpendicular to the contour. Smooth surfaces are characterized by contours that are gently curved and near parallel; rough surfaces are characterized by tortuous contours and a "busy" appearance. This last distinction acquires immediate significance when we remember that the sea floor is usually dominated by depositional processes, and is generally fairly smooth; the surface of the land is often dominated by erosional processes, and can be very rough indeed. Erosional processes and depositional processes yield distinctive contour maps.

Let us consider the sea floor first. A contour map of the sea floor is often called a bathymetric chart. Let us consider such bathymetric maps at several levels of detail. At the grossest level, the oceanic fringes of the continents are characterized (as shown in Figure 1 (a)) by the coastal plain, the continental shelf, the continental slope, the continental rise, and the abyssal plain.
Figure 1
We expect generally parallel contours--widely spaced on the shelf, closely spaced on the slope (Figure 1 (b)). If we look more closely, we probably find that the outer part of the shelf, and the slope, are cut by submarine canyons (Figure 2). We note the characteristic contour pattern, and how the contour spacing tells us that the canyons are V-shaped.
Figure 2
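As a quantitative aside, the rule that contour spacing encodes dip reduces to a one-line calculation. Here is a minimal sketch in Python, assuming a depth-contoured map (on a reflection-time map the contour interval must first be converted to depth with a velocity); the function name and values are illustrative, not from the text:

```python
import math

def dip_degrees(contour_interval_m, contour_spacing_m):
    # Local dip implied by two adjacent depth contours:
    # rise over run, converted to an angle.
    return math.degrees(math.atan(contour_interval_m / contour_spacing_m))

print(dip_degrees(100.0, 2000.0))  # 100 m contours 2 km apart: ~2.9 degrees (gentle)
print(dip_degrees(100.0, 200.0))   # same contours 200 m apart: ~27 degrees (steep)
```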
More closely again, we can see that the shelf itself is not uniform. Perhaps it is dominated by carbonates, formed in banks (Figure 3) with their characteristic contour pattern: low relief, elongated, aligned along depositional strike.
Figure 3
Yet more closely, we see (in Figure 4) the formation of submarine fans at the mouths of the submarine canyons.
Figure 4
The piles of sediment, and their contours, have the characteristic fan shape. In this example the contours are used to represent the thickness of the fan sediments. All of these marine contour patterns are fairly smooth.

On land, in contrast, the contour patterns reflect the nature of the erosion (which is a function of the climate, the characteristics of the rock, and the rate of uplift) and the time for which erosion has been active. Thus the Alps and the Himalayas, uplifted comparatively recently, are characterized by highly complex contours. The interior of Australia, on the other hand, is a peneplain representing a very mature erosional surface; this message is conveyed by its sparse contours. The contours of a land surface may indicate alluvial fans (rather like submarine fans), young river valleys (V-shaped, rather like submarine canyons), mature river valleys (broad, with gentle bounding slopes), glacial valleys (U-shaped) and ancient lakes.

In any case, the hallmark of an expert contourer is this natural appearance in his contours. The frustration of the novice contourer is that the experts react to his maps with a "yuk," while not being able to say exactly what is wrong--other than that the contours are just not natural.

We now return to the subsurface, to remind ourselves of the folding of concordant layers and of its expression by contours. In Figure 5 (the block diagram)
Figure 5
and Figure 6 (a contour map on the base of the unit) we see a simple symmetrical anticline and its representation by contours.
Figure 6
If the fold is not symmetrical the contours are more closely spaced on one side than on the other. In Figure 7 and Figure 8 we see the corresponding simple syncline.
Figure 7
Figure 8
Figure 9 and Figure 10 depict the contour expression of a plunging fold.
Figure 9
Figure 10
Figure 11 illustrates a classical petroleum trap--a closed anticlinal dome, with two culminations, over the Keystone field in West Texas; the anticlinal axis is shown by a bold line, with arrows to highlight the dips.
Figure 11
We notice that almost everywhere on this map the contours appear natural. Figure 12, Figure 13 and Figure 14 show what happens to the contours when a closed anticline, developed on southeast regional dip as in Figure 12, is subjected to subsequent southeastward tilting, represented by Figure 13. In Figure 14 the anticline has become a nose; in a structural sense, we have lost a prospect.
Figure 12
Figure 13
Figure 14
(Always conscious of velocity problems, we recall that a closed anticline can similarly be turned into a nose, or a nose into a closed anticline, by a lateral variation of velocity.)

The delightful simplicity of the Keystone anticline in Figure 11 occurs only with gentle structures. When the folding is more intense, the stretching of the upper layers inevitably leads to faulting; this may take many forms, depending on the totality of the stresses involved. Most anticlinal traps, therefore, tend to look more like Figure 15 (the contours and faults of the Conroe field in Texas)--basically a dome, but cut and displaced by many faults.
Figure 15
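To make the parenthetical point about velocity concrete, here is a minimal sketch with illustrative numbers (not from the text): a pick that is flat in reflection time is not flat in depth if the average velocity changes laterally, which is how a velocity gradient can create or destroy apparent closure.

```python
def depth_m(twt_s, v_avg_mps):
    # Depth from two-way reflection time and average velocity.
    return v_avg_mps * twt_s / 2.0

# A reflection picked flat at 2.0 s across the map:
print(depth_m(2.0, 2500.0))  # 2500 m where the overburden is slow
print(depth_m(2.0, 2700.0))  # 2700 m where it is fast: 200 m of velocity-made "structure"
```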
Indeed, in most exploration mapping it is not possible to separate the operation of contouring from the decisions of fault interpretation.

THE RELEVANCE OF LINE SPACING

In some forms of geophysical exploration, the measurements are usually made on the basis of a regular grid. Thus, a reconnaissance gravity survey on land might make one gravity measurement for each square kilometer (Figure 1). Three-dimensional seismic surveys allow a similar regularity, but on a much finer grid. Data of this type are ideally suited to contouring because the data points are regularly distributed.
Figure 1
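Regularly gridded data of this kind can indeed be contoured almost mechanically. A minimal sketch using numpy and matplotlib; the anomaly here is synthetic, for illustration only:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical reconnaissance gravity grid: one value per square kilometer.
x = np.arange(0.0, 30.0)                                # km
y = np.arange(0.0, 20.0)                                # km
X, Y = np.meshgrid(x, y)
Z = 5.0 * np.exp(-((X - 15)**2 + (Y - 10)**2) / 40.0)   # smooth anomaly, mGal

cs = plt.contour(X, Y, Z, levels=np.arange(0.5, 5.5, 0.5))
plt.clabel(cs, fmt="%.1f")
plt.xlabel("km"); plt.ylabel("km")
plt.show()
```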
Two-dimensional seismic surveys, aeromagnetic surveys, and marine surveys, on the other hand, are ordinarily conducted along lines. The data points are very closely spaced along the lines, but totally absent between the lines (Figure 2). It is the function of the contouring to fill in the blanks. There is a balance to be sought between line spacing and contouring confidence.
Figure 2
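One crude way to quantify this balance, sketched below under a simplifying assumption: for parallel lines with spacing S, a roughly circular feature of diameter D, randomly placed, is crossed by at least one line with probability D/S (capped at 1). The function and numbers are illustrative only:

```python
def p_feature_crossed(diameter_km, line_spacing_km):
    # Chance that at least one of a set of parallel lines, spacing S,
    # crosses a circular feature of diameter D at a random position.
    return min(1.0, diameter_km / line_spacing_km)

# A 2 km reef under 5 km reconnaissance lines is missed 60% of the time:
print(1.0 - p_feature_crossed(2.0, 5.0))  # 0.6
```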
In choosing line spacing, there is the risk that widely spaced lines may totally miss a prospective feature, such as a reef. Now we add an extra consideration. Closely spaced lines are expensive, but allow confidence in the contours; widely spaced lines introduce ambiguity into the contouring between them. Even if this ambiguity does not involve missing a prospective feature altogether, it may mean that a critical spill point is not recognized, or that closure is drawn where none exists. Figure 3,
Figure 3
Figure 4 and Figure 5 show four lines,
Figure 4
on each of which the picked reflection is flat.
Figure 5
We have no way of knowing, from these data alone, whether the area between the lines is flat (Figure 3), or anticlinal (Figure 4), or synclinal (Figure 5). If this ambiguity is immaterial to the interpretation, then we merely note the ambiguity on the map, and pass on. But if the area is on trend with a structural high, and is large enough to be of economic interest, then we record the possibility of a prospect, and call for an infill line. Figure 6,
Figure 6
Figure 7 and Figure 8 illustrate a different ambiguity.
Figure 7
Two lines show a high, and two a low.
Figure 8
We have no way of knowing, from Figure 6 alone, whether the area between the lines is an east-west syncline (Figure 7) or a north-south anticline (Figure 8). However, in this example (unlike the last) we must make a decision; our contours must cross the area one way or the other. If the region has an east-west structural grain we accept the syncline. Otherwise, we record the likelihood of a prospect, and again call for an infill line. In Figure 9 (an east-west line confirming some kind of closure) we assume that the infill (Line 5) confirms the anticline.
Figure 9
Now the ambiguity is between the optimistic contouring of Figure 10 and the less optimistic version of Figure 11.
Figure 10
Perhaps the economics are such that either is attractive; then we need do no more.
Figure 11
But if Figure 11 would be uneconomic, we need more infill lines. Thus in Figure 12, after another infill line (Line 6, which confirms closure in the north),
Figure 12
Line 6 removes all risk of the northerly saddle, but leaves the possibility of the southerly saddle; another line between Lines 4 and 5 would be necessary to confirm that closure. So the recognition of trends is important; it can save us infill lines, or it can increase our confidence if we must proceed to a map without the benefit of infill lines. The trends (of folds, or flexures, or faults) may be established by surface geology, or the study of satellite images, or from gravity or magnetic data, or from reasoning based on knowledge of the tectonic history or the depositional history. A good interpreter searches all these sources for evidence of trends before he starts to contour. And he contours the unambiguous parts of his map first, to extract all possible trend information, before he makes a decision in areas of ambiguity. Figure 13 and Figure 14,
Figure 13
however, remind us that the definition of trends may itself be subject to ambiguity.
Figure 14
Both maps honor the same data on the two lines. Both maps show the contour expression of a series of folds. But the trend of the fold axes is quite different, and from these two lines alone we cannot choose between them. This is one more example of the phenomenon of aliasing, which we have encountered many times and in many contexts throughout this series of modules.

What should we do in the situation of Figure 13 and Figure 14? If we can resolve the ambiguity by some other indication of trend, well and good. If not, we should mark on the individual lines the highs and the lows, and either leave the area blank or suggest one of the possible interpretations by lightly dashed contours. In either case we should annotate the map with a brief description of the uncertainty. We remember that it is an important duty of the interpreter to preserve some measure of the risk--in this case the contouring uncertainty--on his map. Management must be told clearly whether our contours represent an interpretation or merely a hypothesis.

CONTOURING BY HAND

In present practice, picking is usually done (or at least guided) by hand, because the interpretive decisions made during picking have not yet been formalized to the extent that they can be programmed. The contouring also incorporates interpretive decisions--such as those based on geological likelihoods of shape or on dimly perceived trends. Although the contouring process can be mechanized, many interpreters therefore prefer to contour by hand in any situation where these interpretive decisions are critical. The operations of timing and posting, however, are not interpretive; they can (and should) be mechanized.

Picking: Basic Concepts

Picking a Geologically Meaningful Surface

Picking is the operation of marking the seismic section to identify the position of a reflecting surface. In practice, it may appear to the novice merely the marking of a zone of reflection continuity--the trace-to-trace alignment of reflection pulses. But it is far more than this. Indeed, the distinction is the reason that picking is still done (or at least guided) by hand; the machine can easily identify zones of reflection continuity, or the trace-to-trace alignment of reflection pulses, but that is not enough. What the human interpreter brings is a visualization of the pick, in the mind's eye, as a geological surface. This is most important, and it cannot be overemphasized; as we pick, we are continually making interpretive decisions based on our vision of the pick as a real geological surface. In most cases, this surface is either an unconformity surface (a one-time surface of erosion or nondeposition) or a bedding-plane surface (a one-time surface of deposition). Figure 1,
Figure 1
Figure 2,
Figure 2
Figure 3,
Figure 3
Figure 4,
Figure 4
Figure 5 and Figure 6 remind us that the seismic expression of these surfaces may not be simple.
Figure 5
In Figure 1 we see ancient rocks uplifted and subjected to subaerial erosion.
Figure 6
As the erosion proceeds, a pod of sediments (Figure 2) is formed, including much coarse material to the right. Perhaps while this is forming, a rise of relative sea level allows marine sediments to onlap the pod, as at the left; these are probably fine-grained to the extreme left, but coarser near the coastline at the point of onlap. The marine transgression continues, as does the erosion of the land mass to the right (Figure 3). Thereafter, a period in which few coarse-grained sediments are available yields a rather uniform layer of clays (Figure 4). Thereafter, perhaps, a blanket of carbonate rocks forms overall (Figure 5). The seismic response to the present-day succession might appear as in Figure 6.

The reflection from the top of the carbonate layer (aa) is probably positive, strong and continuous; it comes from a boundary of rock type, and may well be a bedding-plane surface also. The reflection may be picked almost mechanically, and it represents an excellent surface for contouring.

The reflections from the steeply dipping rocks (such as gg) below the major unconformity (bb) may be of varying polarity and strength, but they are truncated by the unconformity. It would be a crass picking error not to recognize this, and to continue the positive pick gg up the unconformity surface bb to the right.

The unconformity surface itself (bb-bb-bb) may be rough, but it is probably regionally continuous. However, the reflection that corresponds to it is discontinuous--fragmented, of changing strength, and perhaps of changing polarity. The pick we make is defined more by the termination of the subcropping reflections than by forcing the continuity of a particular peak or trough (Figure 7, for example). However, when the pick has been made, the regional nature of the unconformity makes it a good surface for contouring.
Figure 7
At each point on the unconformity surface, the reflection pick represents the surface of the solid earth at the moment of burial; it certainly does not represent a bedding plane.

The reflection from the base cc of the carbonate layer is probably negative, strong, and continuous until it onlaps the unconformity bb. It would be an error of picking to fail to notice the onlap, and to pick cc continuous with the extreme right of bb. Easy to do, but wrong.

The pod of sediments between the unconformity bb-bb-bb and the unconformity dd-dd-dd is a purely local feature; the reflection dd-dd-dd must be recognized as such. At its low end it terminates by downlap on to bb. Near its high end it terminates by onlap on to bb. But at the high end itself the unconformity dd becomes continuous with the unconformity bb; the lower part of the unconformity bb is truncated at the junction. It would be a picking error not to recognize the double nature of the apparently continuous unconformity dd-bb. The reflection dd-dd-dd itself is a function of the materials overlying and underlying it, both of which change spatially. Therefore it is possible for this unconformity to change strength dramatically--and even change polarity--as suggested in the sketch. In this case the pick we make is defined more by the termination of the onlapping reflections than by forcing the continuity of a particular peak or trough (Figure 8, for example).
Figure 8
There are bedding-plane surfaces within the pod of sediments between dd and bb. They probably have the same sigmoid form; they may or may not be seen as seismic reflections.

Finally, the interfaces ee and ff must be recognized to terminate by onlap against dd, and must not be forced further. Because of their marine nature they are probably continuous to the left, and probably bedding planes; however, the tendency for coarse-grained sediments to be concentrated near the coastline means that the reflections may change their strength progressively away from the onlap. The reflections may be strong near the onlap and weaken to nothing leftward (as suggested by the sketch), or the presence of carbonate material out to sea could cause the reverse. In either case, the fact that the reflection disappears does not mean that the bedding plane terminates--merely that there is no contrast of rock properties across it.

From all of this, we see that the object of picking is not to force continuity relentlessly across the section. Sometimes, it is true, we do force the continuity through, where we can see that the continuity is geologically plausible and that the problem is a local area of noise. But in general we do not demand continuity; in our picking we accept that reflections terminate, and we search for the geologic meaning in these terminations. Specifically, reflections terminate because of truncation by an unconformity, because of onlap, downlap and toplap, and because of a progressive reduction in the contrast of rock properties across an interface. The first two of these signify the termination of a bedding-plane surface; the third does not.

These considerations lead to a general rule: we prefer to start our picking deep in the basin, and work toward the basin margin, which is ordinarily updip. If the reflection we are following splits into two, thus creating an ambiguity, we keep low. Looking again at Figure 6, we see first that this gives the right answer for the deep unconformity pick bb-bb-bb, working from the left. Next, coming higher in the section, we start a new unconformity pick dd at the point of downlap on to bb, and trace it to its termination against bb updip. Then we progressively pick ff, ee, and cc to their points of onlap. However, the rule of keeping low, while a good one, is no substitute for geological thinking. For example, the pick gg must never be regarded as continuous with the pick bb; we must recognize the termination of gg by truncation.

To aid in the recognition of terminations, many interpreters start the picking exercise by marking on the section all important points of toplap and truncation (by the arrow shown in Figure 9 (a)) and onlap and downlap (by the arrow shown in Figure 9 (b)).
Figure 9
Figure 10 is a section annotated in this way before picking.
Figure 10
The practice has several benefits: it gives a general idea of the geological history represented by the section; it identifies the unconformities necessary for seismic-stratigraphic purposes; it identifies the places where mappable surfaces terminate; and it identifies the places where the character of the reflection below the lap-out may be expected to change (since different materials are now brought into contact across this interface).

The last point reminds us of the sensitivity of our picks to character changes caused by interference between closely spaced reflections. In Figure 11 we see a single flat continuous surface aa; the fan cc downlaps on to it, and subsequent marine deposition bb onlaps the fan.
Figure 11
Inevitably, both the amplitude and the character of the reflection aa must change across the point of downlap; in turn, the change of character introduces a change of time in the peak or trough we are following, and so causes spurious deflections of the contours on the contour map. Therefore, in any case where a few milliseconds can be significant, it is important to mark the onlap and downlap arrows on the contour map as well as on the section--as a warning that a spurious step has been introduced in the map at this point.

Picking: General Rules

For contouring to be valid, the picking must be done with the contouring in mind, and this leads to the following rules.

1. The pick must represent a geologically meaningful surface. This means that we must be able to see a pick as one of the following.

A bedding plane, across which a contrast of acoustic impedance has been formed by a change of sediment type, a change of sediment supply, or a change of relative sea level. All points on such a surface were receiving sediment at the same time; we remember that the surface is called time-stratigraphic (or chronostratigraphic).

A rock-stratigraphic (or lithostratigraphic) surface, in which a highly particular set of circumstances has formed a rock contrast that cuts across the bedding planes. Usually this contrast is actually a transition, generating a recognizably low-frequency reflection above an unconformity surface. However, diagenetic processes (leading to different cementation in different zones of the rock) can lead to abrupt contrasts and discrete reflections.

A hiatus or gap in deposition, caused by a break in sediment supply or a change in relative sea level or currents. A hiatus increases the chances of a significant rock contrast; today there is increasing evidence that most strong reflections, even in conformable sequences, represent such gaps in the deposition of sediments.

An erosional unconformity as it exists at present, within the earth. It is important to remember that the constraints on the reasonableness of this surface are not those constraining an exposed surface undergoing erosion today; within the earth an unconformity was buried in one place, while still being eroded in another. At no time did the complete unconformity surface seen on a seismic section ever exist as a continuous surface.

Thus, in picking, we are not just marking a pulse-to-pulse alignment; we are choosing a mappable surface.

2. Each of these surfaces, according to its type, must terminate along some line. If we fail to recognize this termination, or force a pick across it, the surface we contour is no longer geologically meaningful, and considerations of reasonableness are no longer available to guide the contouring.

3. Wherever possible, we start the picking deep in the basin, and work updip; we are picking in the direction of onlap. Of course, the surface we are following may rise to a high and then reverse; then we are picking against the direction of onlap, and the general picking rule is to stay low. This is illustrated in Figure 1,
Figure 1
Figure 2,
Figure 2
Figure 3 and Figure 4.
Figure 3
However, we must remember at all times that such rules are no more than convenient simplifications; the real rule is that the surface we follow must be a geologically meaningful surface.
Figure 4
Thus, in Figure 5,
Figure 5
Figure 6,
Figure 6
Figure 7
Figure 7
and Figure 8 the geologically meaningful surface is the unconformity; it would be quite wrong to follow the pick down a truncated interface merely because someone had told us to stay low.
Figure 8
4. On sections known to have been brought to zero phase, reflections known to be positive must be picked on a white trough and reflections known to be negative must be picked on a black peak (for the SEG 1975 convention). In less clear situations, we may be driven to picking any peak, trough, or zero crossing that (a) is near the envelope maximum and (b) shows geologically plausible continuity.

5. Whenever changes of character are observed along a picked reflection, those changes are probably due to interference; all we can do is to stay on the "same" peak or trough, but we must recognize that we are not staying on the "same" time-stratigraphic surface. Therefore we are careful to make some annotation on the map (even "CC" will do) to indicate this change of character. If a closed loop indicates a mistie, this is the first place to look for the explanation.

6. The tying of loops is an essential discipline, of course. Also essential is the meticulous tying of the seismic picks to the well control. For all the reasons given above, the continuous seismic picks between wells usually tie levels of the same geological age, except where unconformities intervene; if this is not so we must either understand why not, or be suspicious of the picks. Wherever possible, we refine the seismic ties to the well by constructing synthetic seismograms; if the seismic grid ties more than one well, we are careful to use the same variables in constructing the corresponding synthetics (unless, of course, the relevant seismic lines used different variables). Tying all the wells is critically important if we are to have confidence in the final contour map.

7. We must recognize that sometimes the amplitude of a reflection falls to zero. The same time-stratigraphic surface is still there, but the type or condition of the rocks has changed, and there is no acoustic contrast. Thus, in Figure 9 (an example in which we have to choose between phantoming the pick and inserting a fault) the temptation is to stay on reflection aa until it stops, and then to fault it up to bb.
Figure 9
This could be correct if there is other evidence for faulting above or below, but in the absence of such evidence the reflection must be "phantomed," following the generally conformable grain of reflections.

8. We also recognize that seismic artifacts (for example, poor statics, incorrect mutes, loss of stack fold, wrong stacking velocities, changes of source pulse, changes of surface) can cause loss of continuity in reflections. With the interactive workstation, we have some ability to determine whether an observed loss of continuity is caused by such artifacts or is real. But often some doubt remains. This leads us, again, to the same ultimate rule: if there is doubt on the section, note it on the map. When we come to the contouring, it may be of enormous significance to see part of a line marked "phantomed," or "forced through," or "could be a leg higher."

9. The picking operation can sometimes be confused by sideswipe from anomalous features (faults, steep structure, salt domes) off the side of the line. Often, sideswipe events identify themselves by being geologically improbable, and we do our best to pick through them. In other instances the events are more difficult to identify, and inevitably lead to picking errors; then we hope that the errors are revealed and located by tying around loops. In general, if conflicting dips lead us to suspect sideswipe, we defer the picking in that part of the line until all the lines have been worked, and the major features seen on other lines have been marked on the map; then the source of the sideswipe can usually be identified. With the location (and, possibly, the orientation) of the sideswipe source established, it is easier to recognize and discard its effects on nearby sections.

The Location Map

Figure 1 shows a location map in which all the lines were shot during one survey, with line numbers less than 100 reserved for dip lines and line numbers greater than 100 reserved for strike lines.
Figure 1
From the scale, we can see that the line spacing is appropriate to a large feature. More typically, the location map would include lines shot in several different surveys, with the line numbers coded to show the year. This is likely to reveal the history of the prospect: early reconnaissance lines (perhaps at 5-10 km spacing on a regular north-and-east grid), followed by semidetail lines (perhaps at 0.25-1 km spacing in the same directions) and a line to tie a well, followed by individual short lines oriented to resolve specific zones of uncertainty in the previous interpretation.

For preference, the source-points from different surveys should be plotted with different symbols; this aids the reading of the map where lines cross. If different symbols are not used, or lines of the same survey intersect at a very acute angle, or a line is caused to deviate across others by a fishing boat, the symbols may be supplemented to show the line direction. Source-points are usually shown every 10 or 20, and numbered every 50 or 100. Posting errors are minimized if the system of marking and numbering is the same on the map and on the section.

The scale of the map is typically 1:100,000 (that is, 1 cm to 1 km) for reconnaissance work, 1:50,000 (2 cm to 1 km) for semidetail work, and 1:25,000 (4 cm to 1 km) for prospect delineation. It is an advantage if sections and maps are displayed at the same scale. Thus, detail sections are often made at a vertical scale of 10 cm/s and a horizontal scale of 1:25,000; the section may be folded horizontally along the time origin, and the fold laid with exact correspondence along the line on the map. (In the U.S. onshore, where topographic maps are usually plotted at 1 in. to 2000 ft, the scales for both location map and section become 1:24,000.)

Transferring the Features

The first thing we must do in making a structure map is to transfer from the sections to the map all the highs and all the lows (more strictly, all the dip reversals), and all the significant faults. This is not an option but a necessity; it is our only protection if the frequency of dip reversals or faults causes us to violate the sampling theorem (a numerical check is sketched at the end of this subsection). Then we transfer the arrows signifying truncation, toplap, and baselap. We also annotate zones of near-constant dip, zero dip, loss of the reflection, change of character, phantom picks, and any recognizable characteristic that is likely to aid correlation from line to line. Figure 1 is a simple example of this sort; a typical prospect would have much more annotation.
Figure 1
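The sampling-theorem point can be stated numerically: a fold system of structural wavelength L is aliased by any line grid coarser than L/2 across trend. A minimal sketch, with illustrative names and values:

```python
def is_aliased(line_spacing_km, structure_wavelength_km):
    # Two samples per wavelength are the bare minimum (Nyquist);
    # anything coarser can masquerade as a different trend entirely.
    return line_spacing_km > structure_wavelength_km / 2.0

print(is_aliased(4.0, 10.0))  # False: 10 km folds survive a 4 km grid
print(is_aliased(8.0, 10.0))  # True: the same folds are aliased at 8 km
```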
Misties

The operation of picking around a closed loop involves folding one section vertically at the line intersection, overlaying it on the next section at the right place, and continuing the pick from one section to the other. For this purpose, we are not fazed by a mistie if we can see that we are indeed on the "same" reflection. However, we can no longer be so tolerant when we are preparing to make a contour map. Obviously, errors in the contours must arise if we ignore the mistie, or average it near the line intersections. The only truly sound course is to solve the mistie. To do this we must learn to recognize the types of mistie.

While we can hope to eliminate misties from unmigrated sections, a dip line and a strike line must always mistie on migrated sections. Therefore we look first at the sources of mistie on unmigrated sections.

On unmigrated sections from the same survey, the source, the field settings, and the processing are likely to be uniform. In this case there should be no systematic misties. The sources of unsystematic misties can then be identified as follows.

At sea, positioning remains the most likely culprit. Before we blame the navigation as such, however, we must check that the setback has been properly computed and applied. In tidal waters, where significant feathering is likely on some lines, we also need to check whether the antenna, the near group, or the common midpoint has been kept on the programmed line. If these are the source of the trouble, the maps or the setback must be corrected before we go any further. If they are not, the standard procedure is to test empirically whether variations in the line intersection yield a better tie. At the interactive workstation it is easy to move the line intersection on both lines simultaneously (manually, it would be done by trying several folds on one section, sliding the fold backward and forward across the nominal point of intersection on the other section). Because the degree of mistie introduced by a navigation error is a function of the dip, what we are searching for is a revised line intersection that eliminates the mistie on all reflections--particularly the dipping ones. Where the evidence for a revised line intersection is clear, the map must be changed (and annotated to show that it has been changed). Where it is less clear, we usually make some compromise adjustment, acknowledging the physical realities of boat speed and feathering, and the significance of the change on the final contours.

Misties appearing to be unsystematic can also be caused by what is actually a systematic error between the positioning of two or more surveys. This can occur, for example, where two base stations of an early survey were on the mainland (perhaps in different countries, with different geodetic datums) and the third was on an island that, at the time of the survey, did not have a good geodetic tie to the mainland. Or a subsequent survey may use different positions for the base stations, with different path lengths over land; again the two surveys may be systematically offset from each other. The best approach to this problem is to go through all the lines of the most recent survey, marking on the map every recognizable feature of the geology (highs, lows, flexures, and particularly faults), and then to do the same, on transparent paper, for all the lines of the older survey; then we can move one map over the other until we get the best fit of the geological features.
Normally we would then replot the older survey to accord with the positioning of the more recent survey.

On land, the most likely culprit for unsystematic misties is static corrections. The mistie is then the same at all reflection times. Of course, datum corrections calculated by deterministic means should tie precisely at the line intersections; however, those calculated from the reflection data themselves (including all forms of autostatics) may not do so. This is because of certain mathematical indeterminacies in the statics solution (often called "long-wavelength" statics). A good autostatics program yields a section after autostatics that is a best fit (in a least-squares sense) to the section before autostatics, but this in itself does not guarantee that the fit is actually the same at the line intersections. The interpreter is then left to "fudge" the mistie as best he can. Typically, he adopts the average reflection-time value at the line intersection, and then distributes half the mistie over a reasonable distance each side of the intersection, on each line (a sketch of such a taper follows this passage).

Can we improve on this rough-and-ready approach? First, we can use our understanding of the phenomenon to tell us over what distance we should do the distribution; this is half a spread length (or, more properly, half the length of that portion of the spread within the mute line over the window used in the statics solution). Second, the processors can constrain the statics solution to tie at all the line intersections. Although there is still no guarantee that the statics solution is actually correct, this does eliminate the misties as an obstacle to contouring. In any situation where we eliminate the misties introduced by an autostatics program--whether by machine or by hand--we must remember that the resulting contours, although smooth, are not right. This tells us that, whenever we "fudge" a mistie, we should add a note "M" at that intersection; this will remind us, when we come to read meaning into the contour map, to have less trust in the map near that intersection. In working on a mistie problem caused by statics, we sometimes find that it helps to plot the misties on a map; this may reveal an areal pattern, suggesting that the assumptions of the datum-correction technique are not appropriate everywhere.

Another culprit causing unsystematic misties is obviously noise. If the tying reflection on one or both sections is seen to be noisy, we just do whatever reasonable smoothing is necessary to remove the mistie. If one of the sections is noisier than the other, we concentrate the mistie correction on the noisy section.

Although we are here concerned with unsystematic misties, and although processing variations generally introduce systematic misties, we should note that processing variations from line to line, within a survey, can yield misties that appear unsystematic. The most common culprits are changes in the mute pattern from line to line and discordant stacking velocities. Time-variant statistical deconvolution programs can also cause apparently unsystematic misties.
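A minimal sketch of the statics "fudge" described above: each line takes half the mistie, tapered linearly to zero over roughly half a spread length each side of the intersection. The function and argument names are assumptions for illustration:

```python
import numpy as np

def taper_mistie(times_ms, x_m, x_int_m, mistie_ms, taper_len_m):
    # Remove this line's half-share of the mistie at x_int_m, with a
    # linear taper to zero at distance taper_len_m (about half a spread).
    x = np.asarray(x_m, dtype=float)
    w = np.clip(1.0 - np.abs(x - x_int_m) / taper_len_m, 0.0, 1.0)
    return np.asarray(times_ms, dtype=float) - 0.5 * mistie_ms * w
```

The intersecting line receives the same call with the sign of mistie_ms reversed, so that the two lines meet at the average reflection time.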
This leads us to a discussion of systematic misties--those where all reflections of one survey have a constant mistie with the corresponding reflections on another survey; the mistie may be of time, or character, or both. A constant mistie of time (for example, where one survey is uniformly 10 ms earlier than another) suggests a simple difference of datum correction. For example, one marine survey may be properly corrected to a sea-level datum, whereas another (particularly an old one) may not be corrected for source and streamer depth. Or one land survey may use one elevation velocity above one datum, while another may use a different elevation velocity or a different datum, or both. All these differences should be apparent from the section labels.

Another simple source of constant mistie, which should also be apparent from the section labels, is the display polarity. The 1975 SEG polarity standard requires that a compression in the water (or an upward motion of the geophone case on land) should produce a white trough on the section. This standard may need modification for water guns and for Vibroseis, depending on the processing applied. Of course, not everybody agrees with the SEG convention; some hold, with good reason, that the dominant lobe of a positive reflection should be a black peak. All these considerations often mean that sections of different vintage, or from different companies, have opposite polarity. The correction, of course, is that a reflection picked as a peak on one set of sections must be picked as a trough on the others. It is not correct to keep the "same" picking point and to add or subtract a fixed time.

We turn now to apparent misties caused by differences of reflection character, from one survey to another. This is a very common phenomenon; different surveys use different sources or different source depths, different geophones or different hydrophone depths, different instruments or different recording filters, and different processing. Figure 1 reminds us of the effect of these agencies.
Figure 1
The figure represents successive stages in the life of the reflection pulse, excluding those effects which occur in the body of the earth. The source, in this marine illustration, is a small air-gun array. After the command to shoot, at nominal zero time, there is a delay during which the air-gun valve is accelerated. Then we observe the outgoing pulse as it would be recorded with a near-field hydrophone (pulse 1). As this signal is transmitted downward, it is followed by the free-surface ghost (2); in the illustration the source is about 6 m deep, so the ghost (which is negative, of course) comes after 8 ms. At this stage we imagine that the pulse is turned back by a perfect reflector, and enters the hydrophone array. It is followed by a new free-surface ghost, in this case corresponding to a streamer depth of 15 m. Pulse 3 is the notional reflection signal at the input to the recording instruments. The antialias filter in the instruments removes the highest frequencies, and has a marked effect on the pulse (4). Thereafter the minimum-phase deconvolution in the processing reshapes this pulse, and removes the vestigial bubble pulse (5). Finally, time-variant zero-phase cosmetic filtering is applied; the illustration shows the resultant pulse at representative shallow and deep levels ([6] and [7], respectively). Clearly, the dependence of the final reflection pulse on the intermediate stages is major--even without the inevitable contributions from the earth.

Figure 2 shows the same sequence for a different source. Even without a change in the source depth (or anything else), the final reflection pulse is quite different.
Figure 2
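The ghost delays quoted above follow directly from the two-way path between the source (or streamer) and the sea surface. A minimal sketch, assuming a water velocity of 1500 m/s:

```python
def ghost_delay_ms(depth_m, v_water_mps=1500.0):
    # Two-way travel from the source or receiver up to the free
    # surface and back: the delay of the (negative) ghost.
    return 2.0 * depth_m / v_water_mps * 1000.0

print(ghost_delay_ms(6.0))   # 8 ms, as for the 6 m gun array above
print(ghost_delay_ms(15.0))  # 20 ms for the 15 m streamer
```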
Obviously, surveys shot with these two sources must yield a major mistie of reflection character; indeed, in this case, the change is so large that it could almost be taken as a reversal of polarity. Figure 3
Figure 3
abstracts the recorded and processed pulse shapes from Figure 1 and Figure 2, and adds two more from other sources. If we were to add the effects of changing operating depths, different instruments and different processing, we would obtain an even wider range of resulting pulse shapes. The potential for misties is enormous.

There are several approaches in dealing with this type of mistie. The first approach merely accepts the sections as they are, and attempts to find a simple time correction for each vintage of data, relative to one vintage (normally the most recent) taken as the reference. This can be reasonably effective, provided that the picks we make on all data are close to the envelope maximum. Indeed, this is one of the reasons for always picking near the envelope maximum. We can arrive at a figure for the time correction by looking at many line intersections at the level of reflections known or believed to be discrete (that is, for a single major contrast of acoustic impedance, without the complications of thin beds causing interference above or below). Then, on our chosen pick near the envelope maximum, we estimate an average figure for the mistie, and use that as the correction. As always, we use the convention that a positive correction increases the reflection time.

The second approach involves reprocessing the older data to zero phase. In so doing, we may choose to check whether the bandwidth can also be improved by modern deconvolution, but this is not a necessary part of the approach. Our object is merely to measure whatever reflection spectrum exists on the stacked data, and thereby to derive and apply a phase-correction operator to bring the section to zero phase. This is straightforward and inexpensive if we have kept the stacked but unfiltered data, and all the processing applied up to this stage has been minimum phase, or close to minimum phase. With this done, we pick all reflections on the peak or trough actually at the envelope maximum, and the misties become much smaller. If they are still significant (for the type of structure being delineated), we again estimate a mistie correction as an average of the misties observed on discrete reflections. It is still important to work on discrete reflections; because the bandwidths on the different surveys remain different, the complexes produced by reflection interference have different character, and cannot tie leg for leg. Figure 4
Figure 4
and Figure 5 remind us of the effect: in Figure 4 only the substantially discrete reflection (the last one)
Figure 5
yields no mistie between different zero-phase reflection pulses; in Figure 5 we can see from the time variations that there are no effectively discrete reflections if the bandwidth is small and low-frequency.

The third approach involves reprocessing of all the sections to have the same bandwidth, again with zero phase. The advantage of this is that the mistie corrections now become very small, and yield a full correction of all picks (including those on reflection complexes); only with this approach can we really talk about the "same" pick on different sections. The disadvantage, of course, is that the final bandwidth of all sections can be no larger than that of the worst; all other sections are degraded in resolution, to this smallest common bandwidth. This approach is therefore a technique of last resort, applicable only to structures whose relief is so small as to be jeopardized by the misties remaining with any other approach.

In practice, mistie problems often have several causes. If the structures are large, we tend to do what can be done with a little thought, and to "fudge" the rest. Perhaps we get a discovery. Then we need to understand more of the detail, and the statics must be given more attention. In this situation it is not unusual to reprocess all of the surveys over the feature with a common (or at least harmonious) statics technique. The expenditure is justified by the discovery, and the interpreters can now map detail with much more confidence. Indeed, the number and severity of misties can be used as a quantitative measure of the confidence to be placed in the data.

We turn now to the accommodation of misties on migrated sections. Our first note, obviously, has to be that all the sources of mistie present on unmigrated sections are there also on migrated sections. But now we add a new and entirely systematic type of mistie, characterized by a degree of mistie that increases with the difference of dip on the two sections. This type of mistie, then, is greatest between strike lines and steeply dipping dip lines. Provided that the migration has been properly done, with appropriate velocities, it is the dip line that is correct, and the strike line that misties. Unless we have reason to doubt the migration, therefore, we accept the picks on the dip lines; the question is what we do to the picks on the strike lines.

There are several options for picking migrated and unmigrated sections, depending on the geological complexity. We can summarize these options as follows. First, if the geology is uncomplicated, we reaffirm that we prefer to pick the unmigrated sections. This avoids the problem of migration misties. Then we migrate the contour map derived from the unmigrated sections. Second, at an intermediate level of geological complexity, we continue to pick the unmigrated sections, but we use the migrated sections to guide the picking; in particular, we use the migrated sections to clarify the existence of the faults, and to verify their locations after the migration of the contour map. Third, in very complex or steep-dip areas, we are forced to work with the migrated sections alone. We use the unmigrated sections only to check the line ties, and so to verify that the misties on the migrated sections are solely the consequence of the migration. We pick the migrated sections in the normal manner on the dip lines.
We also pick the migrated sections normally on those strike lines where the dip is small (for example, on the crestal strike line); in these cases, of course, there are no migration misties. Then we have to tackle the strike lines on the flanks, where the migration misties are material. If the character ties are good, these strike lines at least confirm the picking thus far, and so allow the contouring to proceed on the dip lines. But to go beyond this, and actually to use the time values on the strike lines, we have two problems: to decide what times to plot on the map, and to decide where to plot them.

Perhaps it helps to have an example before us. Figure 6 and Figure 7 illustrate how migration can make the difference between a dry hole and a producer.
Figure 6
Note that a well at about SP57 on the unmigrated section would be far downdip from the fault trap.
Figure 7
The migration moves the trap some 20 source-points updip; this is the distance h in the dip-line diagram of Figure 8.
Figure 8
For a strike line to show the updip trap, the line intersection can be no farther updip than about SP57 in Figure 6 and Figure 7. The actual reflection "points" on this strike line are approximately under SP37. This is illustrated graphically in Figure 9 (the inability of 2-D migration to correct the displacement on a strike line).
Figure 9
These illustrations give us one approximate solution to the problems posed above. We do not use the actual reflection times on the strike line, but only their variation; in effect, we force a tie on one dip line, plot the variations in time along the strike line, and hope (as we may) that this yields at least an approximate tie to the next dip line. Then these values are plotted, not on the actual location of the strike line, but updip by the amount h; in Figure 6 and Figure 7, this would be at about SP37.

To do this, one must have a measure of the distance h. In Figure 6 and Figure 7 this is in no doubt because the fault can be seen clearly on both migrated and unmigrated sections. In other cases some recognizable feature (a small fault, a thinning, a local loss of amplitude) can be seen on both sections, and similarly used to give a direct measure of the distance h. To do better than this, we must calculate the distance h, and calculate the sideways-migrated times on the strike line; this we can do, from either the migrated or unmigrated sections, using the equations derived in the appendix.

Of course, care is necessary if there is geological complexity (splay faults, for example) along the strike line, or if the dip changes markedly on the dip line, or if the lines do not correspond fairly closely to the dip and strike directions. In a severe situation we must adopt a completely general approach, calculating a new position and new times for every line (or at least for those not substantially on dip). Figure 10 illustrates the process of constructing a track line, and the vertical projection of the reflection "point" to the surface.
Figure 10
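The module's appendix equations are not reproduced here, but for a constant velocity the standard relations are simple: an event picked at two-way time t with time-dip dt/dx on the unmigrated section migrates updip by h = (v^2 t / 4)(dt/dx), and its time shortens to t sqrt(1 - (v (dt/dx) / 2)^2). A hedged sketch, with illustrative values:

```python
import math

def migrate_pick(x_m, t_s, dtdx_s_per_m, v_mps):
    # Constant-velocity migration of one picked event:
    # h is the updip displacement; tm the migrated two-way time.
    h = (v_mps ** 2) * t_s * dtdx_s_per_m / 4.0
    tm = t_s * math.sqrt(max(0.0, 1.0 - (v_mps * dtdx_s_per_m / 2.0) ** 2))
    return x_m - h, tm

# A 2.0 s event dipping 0.1 s per km at 3000 m/s moves 450 m updip:
print(migrate_pick(10000.0, 2.0, 0.0001, 3000.0))  # (9550.0, ~1.977)
```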
Figure 11 gives the unmigrated time structure map for a uniformly dipping surface; the five lines show no misties.
Figure 11
Figure 12 shows what happens if we map picks from the migrated sections (and convert to depth); mistie "bumps" appear on the map at the line intersections.
Figure 12
Figure 13 displays the calculated vectors to move the actual line locations on the map to the track-line locations; all lines are moved except the two dip lines.
Figure 13
Each vector is accompanied by a revised time value (or in this case a revised depth value). Then Figure 14 shows that the misties have been removed; the dipping surface is now properly represented by the contours.
Figure 14
Let us see if we can summarize all the above conclusions about misties.

1. It is impossible to contour a mistie. In principle, misties must be solved before we attempt to contour. This conclusion needs a sense of proportion, of course. We would generally adopt a contour interval that would provide at least two contours (and preferably three or four) on the minimum closed structure of commercial significance; if the mistie is small relative to this contour interval we are at no risk of materially distorting the structure by smoothing over a mistie. But many of us work in areas of such low relief that the misties may be much larger than the structural closure; in this case we must solve the misties or get out of the game.

2. Misties are a pain, and solving them is often tedious and difficult. In this effort, it is important to be able to recognize the source or type of the mistie, using the following criteria.

Misties due to mispicks must be identified first. This is done by calculating the net dip (in milliseconds) around each loop of the grid; on unmigrated sections the net dip should be zero (a sketch of this check follows the summary). A nonzero value (a misclosure), clockwise in one loop and anticlockwise in the next, points to a picking error in the common side of the loop. Ideally, having found such a misclosure, we do not merely "distribute" the error along the common side, but search for (and try to understand) its cause; this may be a leg jump in a poor-data zone, or miscorrelation across a fault, or the use of phantoming in an unconformity situation where it was not legitimate, or a changing relation between the source pulse and the reflector complex. On migrated sections the net dip around a loop may not be zero, but is still worth checking; it should either be very small (if the dips are small) or explicable by the differences in dip at the line intersections.

A mistie due to a positioning error at sea is greatest on the reflections with greatest dip. A mistie due to long-wavelength statics is constant down the section (except perhaps in the mute zone). A mistie due to noise is likely to be different from reflection to reflection, and the noise is likely to be evident as such. A mistie due to frequency-selective effects (different sources, arrays, instrument settings, deconvolution, filtering) is associated with a difference of reflection character; a simple time shift is unlikely to be a perfect correction, but it may be acceptable if the structure is large, or if all the reflections picked are substantially discrete. A mistie due to migration is most likely between a strike line and a dip line showing a local steepening of the dip. Although this sometimes makes it difficult to distinguish between migration misties and navigation misties, there are characteristics of difference; for example, the reflection of a migration mistie is always early on the strike line, and the migration mistie disappears if both lines are at the same angle to the dip direction.

3. From the above, we conclude the following. Navigation misties are corrected by a local change of line location on the map, or possibly a systematic change of one survey relative to another. Long-wavelength-statics misties are corrected by a simple static time shift, usually shared between the two lines and tapered to zero over at least half a spread length from the intersection. Datum misties are corrected by a simple static time shift applied to a whole survey. Polarity misties are corrected by changing the pick from a peak to a trough (or the reverse).
Character misties should really be tackled by reprocessing all sections to zero phase, and in extreme cases to the same bandwidth; however, a simple static time correction applied to the whole survey is sometimes an acceptable substitute. Migration misties are corrected (to a first approximation) by forcing the strike lines to tie the dip lines, as discussed above, and by changing the mapped location of the strike lines to the track-line position; for a closer approximation, actual corrected times may be calculated for the strike line.

In cases where there are many vintages of data, and many types of source, the misties may be so numerous and so large that individual solution of each mistie becomes a very daunting task. We may be driven to work solely with the time differences along each piece of line within a loop. If there is one place (in a good-data area, or at a well) where we trust the actual reflection time, we can then compute the corresponding time at every other place, using these time-difference segments. This makes us nervous, of course, as we get farther from control. However, the approach may still be the best solution--provided (and only provided) that there are no misclosures of net dip around any of the loops.
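A minimal sketch of the net-dip check recommended in item 2 of the summary; the leg values are hypothetical:

```python
def loop_misclosure_ms(leg_dips_ms):
    # Signed time differences accumulated clockwise around one closed
    # loop of lines; on unmigrated sections the sum should be ~0.
    return sum(leg_dips_ms)

# Four legs of one grid loop, picked clockwise:
print(loop_misclosure_ms([+42.0, -13.0, -40.0, +9.0]))  # -2 ms misclosure
```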
We choose the contour interval to give us at least two contours (and preferably three or four) on that minimum structural relief. For contouring by hand, we digitize the reflection picks at a sampling interval of time, at half the chosen contour interval.

In our later visual analysis of the final contour maps, there is some advantage to having the same contour interval at all the levels mapped. In this case the instructions to the digitizer are constant. However, it is a frequent observation that the deeper levels have greater relief, and less clarity of detail; this may lead us to use a coarser contour interval (and hence a coarser digitizing interval) at deeper levels. We should acknowledge that this practice does not suit everybody. The important things are to understand the relevance of the sampling theorem, to understand the exploration problem, to be sure to transfer all the significant geological features to the map, and to sprinkle all this with common sense.

Practical matters are important too, of course. As structures become smaller and smaller, we must take a corresponding degree of care with the digitization. In particular, we must be sure to check the calibration of the digitizer before and after each series of measurements; paper sections brought out of storage can change dimensions significantly as they lose or absorb moisture.

Posting

Posting is the process of annotating the location map with the picked reflection times, in preparation for contouring. The operation is entirely mechanical (except to the degree that we have to do any final adjustment of the line locations as a part of the mistie corrections). Our only concern is that the plot program gives us an easy distinction between SP numbers and time values, and is smart enough not to overplot.

Contouring Procedures

Ordinarily we pick several or many reflections; we have to decide which reflection should be contoured first. Factors entering the decision are as follows. In general the shallowest reflectors are the least complicated, showing faults and structures that are comparatively young. It is a marked advantage to contour these first, even if they are of no commercial interest; we can establish the structural trends and fault directions for the youngest deformation, and be able to separate these out when we come to the deeper and more complex geology. At the other extreme, an interpretation must sometimes start with the basement; only when the fault and other trends in the basement are clear can we fully understand the later structure as adjustments and rejuvenation of patterns in the basement. In studying the intermediate stages, the key horizons are often the major unconformities. Unconformities also often represent the break between two diverse systems of faulting.

All of this must be tempered with considerations of reflection clarity, quality, and continuity. If there is a clear "marker" reflection from a major shale-limestone interface, widespread over the whole basin, we surely contour this at an early stage; confidence in the data and confidence in the pick are always very important. If we thereby obtain one contour map in which we have full confidence, we can use it--and our understanding of the regional tectonics--to bias the contouring of another geologically related horizon.

So, with the contour interval decided as above, and the decision made for the first reflection to be contoured, we start to contour.
In so doing, we accept from the beginning that almost everything we do is likely to be modified later. As always, we start where the reflection is clear, the pick beyond doubt, and the geological complexity a minimum. As with the picking, this often means that we start in the lows, or way down on the flanks. Some interpreters start with the coarse contours (perhaps every 100 ms) , and trace these around to get the gross picture. Others start by drawing in just the easy and obvious segments of contours that are clear, whatever their value; this is probably safer if the regional trends are not yet known. If the digitizing has been done at regular intervals of distance, the first step is to interpolate the positions where the contours cross the lines ( Figure 1 (a)).
Figure 1
Strictly speaking, this requires reconstitution before interpolation; however, the traditional practice is merely to interpolate linearly, as in the figure. If the digitizing has been done at the contour interval (or, better, at half the contour interval), this step is not necessary ( Figure 1 (b)). The second step is to sketch in the contours across the line intersections, where the control points are closely spaced ( Figure 2 ).
Figure 2
As the control points become more distant, away from the intersection, we are less confident; perhaps, for the moment, we dash these contours. We pay particular attention to lines that cross acutely ( Figure 3 ), since these define some local contours with more certainty.
Figure 3
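Before moving to the third step, it may help to make the first-step arithmetic explicit. The following is a minimal sketch, in Python, of the linear interpolation that locates the contour crossings along a line; the posted values and all names are purely illustrative, and (as noted above) strict practice would reconstitute the pick before interpolating.

    import numpy as np

    def contour_crossings(distances, times, contour_value):
        """Positions along a line where a picked reflection crosses a
        contour value, by linear interpolation between posted samples.
        (A sketch only; exact hits at a sample are ignored here.)"""
        positions = []
        for i in range(len(times) - 1):
            t0, t1 = times[i], times[i + 1]
            if (t0 - contour_value) * (t1 - contour_value) < 0:
                frac = (contour_value - t0) / (t1 - t0)
                positions.append(distances[i] + frac * (distances[i + 1] - distances[i]))
        return positions

    # picked times (ms) posted every 10th trace; distances in metres
    x = np.arange(0.0, 1000.0, 100.0)
    t = np.array([812.0, 818.0, 826.0, 831.0, 829.0, 824.0, 833.0, 841.0, 838.0, 830.0])
    print(contour_crossings(x, t, 830.0))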
The third step is to search for any contours that might justifiably be closed around the highs or lows previously marked on the map. These we sketch in very tentatively indeed. We are particularly interested, of course, in any suggestion that these highs or lows align in a trend. The fourth step involves our first real attempt to fill in the contours between spaced lines. On a flank this is straightforward; we are basically interpolating linearly within the space between the lines ( Figure 4 ).
Figure 4
Thus the value we ascribe at the point x is some reasonable interpolation between all the lettered points of control around it . . . between a and h, b and i, d and k, f and l, and so on. In particular, we note that wherever we regard a previously drawn contour as safe, we use that in the overall interpolation; thus x is also an interpolation between m and g. The rule, clearly, is that the value ascribed at any point between the seismic lines takes into account the control points and the established contours in all directions, with the greatest weight given to the nearest control and the safest control. In situations more complex than a flank, we follow the same rule, but with allowance for the highs and lows. Thus, in Figure 5 and Figure 6 we start by noticing the high to the southeast; clearly the high closes (through control points g,
Figure 5
h, i, j), and has its apex within the grid of lines.
Figure 6
Now we note the weaker high to the northwest, and hope that the two highs define a trend. Then we have to decide whether f connects to k and a to g (as in Figure 5 ), or f to a and k to q (as in Figure 6 ). Everything hinges on the control points r, s, and t (in particular on t). We would certainly go back and check the picks at these points if the extent of the closed area is critical to the prospect; in Figure 6 the point x is probably well within a large closed area, but in Figure 5 it could be a spill point. With the critical pick checked, we make our decision between these two choices. If the choice is Figure 6 , we note that contours nm, pl, be, and cd can be left straight, as we probably drew them originally. But if the choice is Figure 5 , we think for a moment about geologically meaningful surfaces, and reach for the eraser. Having accepted a saddle between two highs, we now have to bow the contours pl and be in toward the saddle. It is also likely that the outer contours nm and cd bow in somewhat, although possibly not as much as pl and be. In Figure 5 , then, we have the contour signature of a saddle, as a high crossed by a low; two pairs of contour lines approach the saddle, and turn away again. In Figure 7 we add the actual contour values, and note a useful check: at the edges of any contour map, successive contours must either define a slope (so that they differ by one interval) or define a high or low (so that two are the same).
Figure 7
Adjacent contours at the edge of the map cannot be 720 and 740, and there cannot be just one 720 between two 730s. We resolve always to make this check before we fair-draw any contour map. In fact, of course, we typically have many more lines than the four shown in Figure 7 . So we continue the contouring--perhaps doing one clear part completely, perhaps sketching only a few contours in another part, until the overall picture emerges. Then, as we focus on the areas of doubt or difficulty, we check whether the half-interval values (which we digitized but have not yet posted or used) can resolve any of these local problems. Often they can; when they have clarified the difficulty, we erase the half-interval contours and leave the map looking normal. As the map takes shape, we start to tidy the contours. For example, we check that the contours are perpendicular to any anticlinal or synclinal axis that they cross, and generally parallel to such axes over the broadest part of any high or low. We check whether the emerging contours shed new light on a mistie problem, or indicate that a zone of doubtful picking needs review. If this is not the first horizon to be contoured, we overlay the map on the maps of other geologically related horizons, and check the relation of the two maps for geological plausibility. And we start to smooth the contours, wherever the sections allow it, to remove effects like the serpentine contours of Figure 8 ; although it is true that there can be detail in the contours near the seismic lines, and much less detail between the lines, the inclusion of all the detail along the lines gives an unnatural appearance.
Figure 8
However, this is a judgment; we preserve carefully all indications of significant reentrants or minor flexures that guide the correlations from line to line. We also review the contour spacings. A contour map does not look natural if the majority of the contours are uniformly spaced, as may happen if the interpolations do not take into account control points in all directions. Thus, the even circular contours of Figure 9 (a) are rare in nature,
Figure 9
being confined to features such as salt domes; as soon as there is any elongation (as with all systems of folding) the contours are no longer "parallel" ( Figure 9 (b)).

With these checks and modifications made, and the final contours roughed in, we can start the fair-drawing. Every interpreter finds his or her own way. If any advice can be given, it is to draw the curves in segments, with the wrist or elbow positioned beforehand to provide the appropriate radius of curvature, and with the fingers used only to provide small modifications of the radius. A well-drawn contour map can be a delight to the eye.

Usually we accentuate every fourth or fifth contour, as appropriate, to facilitate the following of trends across the map. Then we label each contour, or each bold contour as appropriate, at suitable places across the map and at the edges of the map. On a hand-drawn map it is easy to place these labels exactly where they are most needed in the assessment of the map, and to ensure that all labels are easily read when the map is viewed from the south. The assessment of the map by others can also be facilitated by adding symbols to distinguish highs, lows, ridges, valleys, and noses, according to some accepted scheme. Color is also very helpful. Even more important in the assessment of the map is the preservation on the map of indications of where it is certain, and where (and in what way) it is in doubt.

Some Finer Points of Contouring

Wherever it is possible, we prefer to map all the faults before we start contouring. This requires that we establish all the fault correlations from line to line before we start to detail the structural picture. The checks that aid us in this include the following: the angle (or hade) of the fault, the direction of the throw, the amount of the throw, the presence of structures associated with the fault (for example, rollover structures on the downthrown side of a growth fault [ Figure 1 ], or structures created above a rejuvenated fault),
Figure 1
the timing of the initiation of faulting (indicated by the shallowest reflection to show the maximum throw), and the timing of the end of fault movement (indicated by the deepest reflection that does not show abrupt displacement). In correlating faults from line to line, therefore, we demand that any variations in these indicators should be geologically reasonable. The "same" fault is unlikely to have wild variations in throw along its length, and cannot have wildly different ages along its length. Therefore, when we transfer the fault indications to the map, it often helps to annotate them with the age of initiation, the age of termination, and the throw in milliseconds. Such means usually allow us to sort out a confusion of options in fault correlation, such as those indicated in Figure 2 .
Figure 2
The throw behavior of a fault is of particular importance in the relation between faults and contours. Thus, by asking the digitizer to record the reflection time on both sides of each fault we can ensure that the contours honor the fault throw on the lines themselves; then we must ensure that the contours also reflect a reasonable variation in throw between the lines. We can also invoke reasonable variations in the throw when a fault splays into two; in Figure 3 the sum of the throws bb and cc must approximate that of aa at the point of splay (if the faults are of the same age), and is likely to preserve this relation, approximately, for some distance.
Figure 3
Considerations such as this would influence our acceptance (or rejection) of the second interpretation of Figure 2 . The relation between faults and contours must also take into account the age of the faulting and the age of the folding. Thus, in the sectional display of Figure 4
Figure 4
it is clear that the deep folding is older than the faulting; therefore the contours on the deepest reflection must be broadly harmonious across the fault ( Figure 5 ). Obviously this would not be generally true where the structure has developed since the faulting, or as a consequence of the faulting.
Figure 5
Most of the foregoing discussion has been concerned with contouring the type of surface produced by folding. We must also consider the type of surface produced by erosion. Figure 6 prepares us.
Figure 6
This is a combination contour map, displaying to the left the tilted time-stratigraphic surface of the top reservoir, and to the right the result of the erosion of the reservoir. The two systems of contours, although harmonious where they meet, are causally unrelated; the total change of character is clear from the figure. Both systems of contours are necessary to define a drilling location. Clearly, the criterion of the reasonableness of a system of contours is different for time-stratigraphic and erosional surfaces. Nowhere could this be more important than where the erosional surface involves channeling, and where the channeling provides either the reservoir or the lateral seal. In Figure 7 we visualize a square grid of lines, on which the reflection times to the eroded surface have been posted at equal increments of horizontal distance.
Figure 7
The surface is generally rough, but is particularly noteworthy for local depressions at c, u and t. It is of the utmost importance that we recognize the possibility of a channel when we are doing the picking, and consider its exploration potential, and that we treat the channel as a feature to be contoured separately from the rest of the erosional surface. Thus the point x is emphatically not to be obtained by a general interpolation in all directions, as we did with time-stratigraphic surfaces. If it can be interpolated at all, it is between points c, u and t; all other values around the grid are substantially irrelevant.

Rules of Contouring

Hand Contouring: Procedure Summary

We may summarize the rules of hand contouring as follows:

1. The rules are different for different types of geological surfaces. Therefore the first essential is that the type of surface should be recognized before picking, and picked accordingly. We must be able to assume, as we contour, that the surface we are contouring is a real geological surface.

2. Before posting the picked values, we correct the misties as far as we are able.

3. We select the first surface to be contoured; this may be on considerations of geological simplicity, or interpretational significance, or reflection quality.

4. For the selected surface, we establish whether it is a time-stratigraphic surface, a rock-stratigraphic surface, a subaerial erosion surface, or a submarine erosion surface; if it is a time-stratigraphic surface, whether it occurs within a generally conformable sequence, and whether this sequence contains other mappable surfaces that may be useful in the contouring; the age of the folding; whether faulting is present in the surface; and the age of the faulting.

5. We transfer to the map, before attempting to contour, all recognizable features that may aid the recognition of trends and the correlations from line to line. Specifically these include the highs, the lows, the sustained monoclines, the breaks in slope, the faults (with indication of the direction and amount of throw), the reflection limits and truncations, any recognizable erosional features, and the toplap and baselap of confluent reflections.

6. In general, we prefer to work from dense data and simple geology toward the sparse data and the complications.

7. Where the surface is a folded but unfaulted time-stratigraphic surface, we are looking for regional dip interrupted by local reversals, and for any trends evinced by these local anomalies. Therefore we try to sketch in the contours of the simple regional dip first; then we tackle the highs and lows, and see whether we can find (but not force) structural axes.

8. On a folded time-stratigraphic or rock-stratigraphic surface, the position of a contour between seismic lines is found by a general interpolation in all directions, taking into account the certainty of the control, the distance to the control, and the existence of other established contours.

9. As the contours emerge, we review any remaining or enigmatic misties; we also check the possibility of mispicking due to sideswipe from off-line features revealed by the contouring.

10. We mark on the map any zones of doubt or ambiguity, with a note to remind us of the nature of the uncertainty.

11. In building the contour map, we apply the constraints appropriate to the type of geological surface.
Thus for a folded time-stratigraphic or rock-stratigraphic surface, we check for alternate and preferred contouring if we see any of the following: a closed high within a low (except for peripheral sinks around piercements); a closed low on top of a high (unless it is seen to be a collapse feature); a low trend directly aligned with a high; or like contours running parallel over a considerable distance.

12. For other reflections within the same generally conformable sequence, or subject to the same folding, we use the contour map established on one surface to aid the contouring of the others. This, of course, may require modifications to the first mapping.

13. As a contour map nears completion, we review it for the following: perpendicularity between contours and structural axes where they cross; "naturalness"; each contour must be capable of being traced across the map, from an "in" point to an "out" point; there must be no crossing contours, no loose ends; at the edge of the map, adjacent contours must be either successive (defining a slope) or the same (defining a high or low); at every high or low there must be a pair of identical contours; the contours must depart from simple uniform spacing where they are required to do so by the data, or by considerations of "naturalness," but they should not do so merely as a caprice.

14. We ensure that the contour labeling is sufficient, and easy to read from the south.

15. We add supplementary symbols (highs, lows, noses) to highlight the exploration message of the map.

16. If the surface is a time-stratigraphic or rock-stratigraphic surface affected by both folding and faulting, we try to correlate the faults from line to line before attempting any contouring; in this we use the fault type, the direction and amount of throw, and the timing of the initiation and completion of faulting.

17. Where reliable fault correlation by these means is not possible, we try to do the fault correlations and the contouring together, using each to narrow the possibilities for the other; in this we take account of likely drag and rebound effects near a fault.

18. If the faulting is seen to have occurred after the folding, we harmonize the pattern of the structural contours across the fault.

19. We may choose to contour a fault as a very steep dip, running each contour along the fault plane.

20. We check that every contour entering a fault also leaves it; if many contour lines enter and leave the fault they must always be in pairs.

21. We check that the spacing of contours entering and leaving a fault implies a geologically reasonable variation of the throw.

22. We check that the contours beyond the limits of a fault show the expected degree of flexure.

23. We make whatever use we can of the fact that faults do not just appear randomly in the earth, but are physically related to coherent stress patterns existing in past time. Definable stress patterns produce defined directions of primary and relief faulting, and the recognition of these directions can be used to indicate that what appears as a simple structural axis must actually be a series of highs en echelon.

24. Where the surface to be mapped is an erosional surface, the constraints of geological plausibility are different, and this must be reflected in the contouring. Where the erosion is recognized as subaerial, the contoured surface can be as rough as the present surface of the earth--mountainous for short periods of erosion, peneplaned for long periods.
Where the erosion is recognized as submarine, the erosional features are generally broader and smoother.

25. If the line spacing for the seismic grid has been chosen to give adequate definition of structural features (as it often is), it may be too coarse to allow contouring of erosional features. This situation must be handled with great care; clearly the sampling theorem demands that we smooth out (that is, antialias-filter) any irregularity whose "wavelength" is less than double the line spacing, but this may cause us to miss interesting features whose orientations we might be able to predict from our knowledge of the geological history, or as continuations of features established by the closer control near line intersections.

26. The most important illustration of this, in an exploration sense, is in the recognition of channeling into an erosional surface. We must recognize that the contouring of a channeled surface cannot be done with the same interpolations used for a folded surface. In effect, the channel and the surface into which it is cut must be contoured independently, within the constraint that at the time of the erosion the channel must have had a gradient appropriate to its type.

27. The constraints of geological plausibility are different again for certain types of mounding. Thus the contouring applicable to the upper surface of an ancient dune complex should take into account the known angle of repose of the sand dunes, while the contouring of a barrier reef complex should be that appropriate to the known behavior of reef-building colonies. In more general terms, the conclusions from our seismic-stratigraphic analysis of the sections (depositional mechanism, depositional environment, change of relative sea level, and so on) must be used to define the appropriate criteria of geological plausibility that we use in the contouring.

28. The final check that we make on every contour map is that it accords with the original sections. (This may seem superfluous to the novice, but the expert knows better.) If we have constructed the map at an interactive terminal, we call for a vertical profile across the map at the position of each seismic line, and superimpose the profile on the seismic section. If we are working with paper maps and sections (at the same scale), we fold each section along the time origin and align the fold along the appropriate line on the map (just as we did when transferring the features initially), and check that the final map honors the section.

29. Then it remains only to sign and date the map.

The Ethics of Contouring

There are no sacred tablets, no commandments graven in stone. But there are expectations--expectations from our colleagues the geologists, expectations from the exploration manager, expectations from our own company, and expectations from the company to whom we are selling a field or making a farmout. So let us look at the reasons for these expectations.

Perhaps our company has total ownership of a discovered field, on which we now wish to make a final map as input to the design of a production system. Then the effort is clearly to make the map "right," and the contouring must be totally objective. But in exploration the situation may be different. The company may operate by the dictum, "It is no great sin to drill a dry hole, but it is a sin to miss a field." If this is so, our chief concern in contouring is that we do not miss any feature that might be a field.
We are led to contour optimistically; if there is no direct evidence for a high between lines, but such a high is possible, we are tempted to put it in. And if our company is interested in farming out or selling the prospect, the temptation is increased. Clearly, there must be some constraints on this; we need to formulate the ethics of contouring.

Any simple test of objectivity in contouring would probably be derived from Ockham's Razor: It is vain to do with more what can be done with less. In other words, the contouring should represent the simplest interpretation in accord with the facts. Thus, if there is a large void between lines, but no indication of a high on any of the sections bounding the void, we cannot show a high in the void. But if there is even a subtle indication of a high on one of the lines, we should show a high--of a scale and extent reasonably compatible with the subtlety of the high on the one section. We should not show on our basic map a high of the largest scale and extent possible in that void. However, we may prepare on a second map (or show as an inset on the basic map) this most optimistic interpretation allowed by the data. Thereby we establish a range of possibilities, from which the exploration manager can make a judgment of the balance between risk and reward.

In more complex situations the application of Ockham's Razor may not yield a single clear answer, because which interpretation is "simplest," in geological terms, may be disputable. But the rule remains a good generalization: make the simplest interpretation that is in accord with the facts.

Exercise

1/ Figure 1 offends which rule of contouring?
Figure 1
1. Closed high within a low
2. High broken by a low
3. Low trend aligned with a high
4. Not simplest interpretation

Solution

In Figure 2 , notice that the contour value 821 on the north-south line is given too much weight, pulling in contours from both east and west. This forces an unnatural appearance on the contours, one that is hard to explain geologically.
Figure 2
Figure 3 shows that a much simpler interpretation exists.
Figure 3
The correct answer is choice 4: Not simplest interpretation.

2/ Figure 1 offends which rule of contouring?
Figure 1
1. Perpendicularity between contours and axes
2. Closed high within a low
3. High broken by a low
4. Contours not the same or successive at edge of map

Solution

In Figure 2 , the northernmost lines show highs, but the interpreter has made no attempt to connect them on an axis.
Figure 2
Instead, the contouring shows low areas between these highs, with no real support from the data. Figure 3 establishes a high axis, which in turn helps to establish a regional strike.
Figure 3
The correct answer is choice 3: High broken by a low.

3/ Figure 1 offends which rule of contouring?
Figure 1
1. Loose ends
2. Like contours running parallel
3. Closed high within a low
4. Naturalness

Solution

In Figure 2 , the eastern portion of the map is difficult to interpret because of parallel 1430 contours.
Figure 2
Figure 3 , on the other hand, yields a more logical geologic pattern.
Figure 3
The correct answer is choice 2: Like contours running parallel.

4/ Figure 1 offends which rule of contouring?
Figure 1
1. Closed high within a low
2. High broken by a low
3. Contours traceable from "in" point to "out" point
4. Sideswipe unrecognized

Solution

In Figure 2, we see that the closed 810 contour in the northeast and the closed 830 contour in the southwest are in line with a local low, represented by the closed 840 contour between them.
Figure 2
The interpreter was probably forced to this conclusion by the closed 830 contour in the central portion of the map, a high feature in the middle of a broad low. Figure 3 aligns all the highs.
Figure 3
The correct answer is choice 1: Closed high within a low.

5/ Figure 1 offends which rule of contouring?
Figure 1
1. Like contours running parallel
2. Does not reflect trend
3. Loose ends
4. Contours not the same or successive at edge of map

Solution

Refer to Figure 2 .
Figure 2
One of our contouring guidelines is that highs should align with highs, and lows with lows. In other words, contouring should show these geologic trends wherever possible. Figure 3 shows such trends.
Figure 3
The correct answer is choice 2: Does not reflect trend.
CONTOURING BY MACHINE

The actual drawing of the contours is only one part of the overall task of arriving at a valid contour map. So the first point to make, in introducing contouring by machine, is that the machine is not going to remove all our problems, all our agonies of decision. The need to solve the mistie problems remains, as does the need to determine and use all the evidence for trends; further, we still need to determine and use all information on the type of geological surface that we have picked. The machine, however, will do the interpolations, the arithmetic, and the construction of the contours.

The second point to make is a general one that applies whenever we computerize an operation previously done by humans--we have to say exactly what it is that the human has been doing. We can no longer say that we "just eyeball it," or "whatever looks right," or "well, more or less." We have to suffer the irritation of having to be precise, and formal, and unambiguous. However, when we have done this, we have the solace of knowing that the machine will always apply the resulting rules consistently and objectively.

The consequence of all this is that whenever we can define exactly what we mean by the oft-used term "geological plausibility," we can use the machine to relieve us of some of the drudgery. For example, we may be mapping a time-stratigraphic surface contained within a package of generally conformable strata, and be quite confident that there is a physical limit on how tightly such a sequence may be folded without cracking. Seeing no evidence of faulting on the sections at this level, we may be prepared to state a limit on the "tightness of fold" (in terms of a minimum radius of curvature--as computed from rock mechanics, or just as seen on the sections). We may be prepared to give different values for the dip and strike directions. Then we can tell the computer program to contour the picked values within these constraints. Nowhere on the resulting map will there be a tighter fold than we have allowed; this imposes a justifiable measure of smoothing on the map, while the difference between the dip and strike directions imposes some sense of trend.

This example is a realistic one; if there is any surface in the earth for which we can hope to define "geological plausibility" in mathematical terms, it is a time-stratigraphic depositional surface within a conformable sequence, subjected to mild folding. As we allow more complexity into the geological processes (fracturing, faulting, inversion, overpressure, and the relief of overpressure), it becomes more difficult to define geological plausibility; often the best we can do is to establish limits on the behavior of a surface from our previous observations over the general area. Consequently the greatest historical use of machine contouring has been in the mapping of fairly simple depositional surfaces. Meanwhile, research continues to build the constraints of rock mechanics and other emerging technologies into the contouring programs, so that they can handle a wider range of situations.

For the present, it is clear that common complexities (such as tear behavior near faults, or the presence of two or more superposed trend systems) require that the contouring program be amenable to guidance and editing from a human interpreter; this is the rationale for a second stage of machine contouring at an interactive workstation.
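As an illustration of how such a "tightness of fold" constraint might be stated numerically, the following Python sketch flags the places on a single profile where the local radius of curvature is less than a stated minimum. It is a sketch only, under loud assumptions: the profile is taken as depth-converted (so that the units of curvature are meaningful), the threshold is invented, and a real program would apply the test in two dimensions, with separate limits in the dip and strike directions.

    import numpy as np

    def tighter_than_allowed(x, z, min_radius):
        """Flag samples where a depth profile z(x) is folded more
        tightly than a stated minimum radius of curvature (metres).
        Radius of a curve z(x) is (1 + z'^2)^(3/2) / |z''|."""
        dz = np.gradient(z, x)
        d2z = np.gradient(dz, x)
        with np.errstate(divide="ignore"):
            radius = (1.0 + dz**2) ** 1.5 / np.abs(d2z)
        return radius < min_radius

    x = np.linspace(0.0, 5000.0, 201)                            # metres
    z = 2000.0 + 150.0 * np.exp(-((x - 2500.0) / 300.0) ** 2)    # a sharp local high
    print(np.any(tighter_than_allowed(x, z, min_radius=400.0)))  # True: too tight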
It is also generally acknowledged that there are some contouring problems for which no machine solution currently exists; one case in point would be the contouring of permafrost corrections, and many interpreters would also include the contouring of interval velocities in any area subject to local fracturing. The general conclusion from this, of course, is an exact counterpart to one of our conclusions about contouring by hand; just as we adjust our style of manual contouring to the type of geological surface we are mapping, it is imperative that the program we use in machine contouring must be designed for the type of surface. In contouring by machine, as in contouring by hand, it would be a crass mistake to contour an erosional surface incised by channels with the same approach that we would use for a folded depositional surface.

Before we immerse ourselves in the technology of contouring by machine, we might note that there are still some interpreters who are not interested. One very human reason is that, being good at it, they just like contouring. Another reason, perhaps, is more cogent: in contouring a prospect by hand, at several levels, the interpreter acquires a very good feeling for the prospect. He sees the alternatives, he knows which picks are critical, he sees the geological processes at work, and he understands the risks. Therefore, as in so many other modern activities, our search is to find the best interaction of mind and machine; to use the machine to extend the power of the mind, and to feed back the insights of the mind to enhance the expertise of the machine.

Possible Approaches

To the human interpreter, data to be contoured consist of many time (or depth) values, posted along the seismic lines. To the machine, the data become sets of three-dimensional coordinates x, y, and z, where x and y are the map coordinates and z is the time (or depth) value. The problem for the machine, then, is to fit a surface to these coordinates.

One possible way to pose the problem is suggested by Fourier analysis. We can view our picked horizon on the section as a waveform in space--a waveform which (although perhaps geologically continuous over the whole basin) is made transient by its truncation at the ends of the line. Then Fourier tells us that this waveform can be regarded as composed of sinusoidal components whose spatial frequencies are harmonics of the "fundamental" represented by the length of the line. If the structure is smooth and simple, few harmonics are required; if it is faulted, a large "bandwidth" becomes essential to handle the sharp step. Since we could do the same on both dip and strike lines, it is not difficult to accept that any geological surface can be mathematically represented as a two-dimensional transform having a number of terms that relates to the tightness of the folding (or to the faulting). Then one approach to machine contouring is to start with a simple surface having few terms (a "low order"), to compute the error between that surface and the given values along the seismic lines, to add more terms and reduce the error, and so to iterate toward a mathematically defined surface that fits all the data points. Over a large area, anything but the simplest geology requires a large number of terms, each with its "amplitude" and its "phase"; the number of calculations is formidable.
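A minimal sketch of this iterate-on-order idea, in Python: a truncated two-dimensional cosine series is fitted to scattered picks by least squares, and the misfit is watched as terms are added. The basis, the synthetic horizon, and all names are illustrative assumptions, not any particular commercial algorithm.

    import numpy as np

    def fit_surface(x, y, z, order):
        """Least-squares fit of a low-order 2-D cosine series to
        scattered picks; x and y are fractions of the survey extent,
        z is reflection time in ms. Returns the fitted values."""
        cols = [np.cos(np.pi * p * x) * np.cos(np.pi * q * y)
                for p in range(order + 1) for q in range(order + 1)]
        basis = np.column_stack(cols)
        coeffs, *_ = np.linalg.lstsq(basis, z, rcond=None)
        return basis @ coeffs

    rng = np.random.default_rng(0)
    x, y = rng.random(200), rng.random(200)                   # scattered picks
    z = 900.0 + 40.0 * np.cos(np.pi * x) * np.cos(np.pi * y)  # smooth synthetic horizon

    for order in (0, 1, 2, 3):                                # add terms, watch the misfit
        zfit = fit_surface(x, y, z, order)
        print(order, np.sqrt(np.mean((z - zfit) ** 2)))

The printed misfit collapses as soon as the basis contains the terms actually present in the surface; for a faulted surface it would fall only slowly with order, which is the large-"bandwidth" problem just described.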
In some cases, it may be more efficient and economical to apply this surface-fitting technique only on a local basis; for example, a less demanding approach may be used for most of the data, and the full surface-fitting technique used only to "patch" particular local areas. Of the available methods for less expensive contouring, two have proved particularly popular: the grid method, and the triangle method. Both these basic methods have recently been augmented by many proprietary sophistications. Because our concern is to deal with fundamentals rather than to endorse a particular product, we shall keep the discussion at a general level.

Before we turn to the two main methods, we should note that the problem of machine contouring is much more difficult for seismic data than for most other types of data. We recall that gravity and magnetic surveys on land usually take their measurements on a regular grid, or at worst (in difficult terrain) on a random basis; in general the data are fairly evenly spaced. Three-dimensional seismic surveys have the same feature. Two-dimensional seismic surveys, however, yield very closely spaced data along the lines, but no data at all between the lines; much of the complexity of modern contouring programs is made necessary by this complicating factor.

Gridding Approach: Grid Sizing

Figure 1 illustrates the type of data obtained from 2-D seismic lines.
Figure 1
In contouring programs that use the gridding approach, the key operation is to replace the in-line data of Figure 1 (a) by the rectangular grid of data values in Figure 1 (b). It is very important that the size of the grid be appropriate to the data, and appropriate to the contour interval finally required. Figure 2 illustrates the contours obtained from gridding a particular data set with a grid of 11 x 7,
Figure 2
while Figure 3 and Figure 4 illustrate the effect of gridding more closely, at 21 x 16 and 41 x 61, respectively.
Figure 3
Clearly much detail is lost by the coarse gridding.
Figure 4
This detail is lost forever; it cannot be recovered, later in the program, by calling for a finer contour interval. Figure 5 repeats the left half of Figure 4 , but with an even finer grid (61 101).
Figure 5
Now we can see, as intuition would suggest, that there is a point beyond which no further detail is added. Comparing the two figures, and taking note of the original data points in Figure 2 , we can see that this "adequate" grid density depends on the original data density (again as we would expect). So there is the problem in gridding seismic data: in some places (for example, where several lines come together) the data density warrants a small grid, while in others (for example, between lines) a small grid means a lot more computation without any benefit.

The ideal, therefore, is that the computer should first analyze the data density, and then assign an appropriate (and therefore variable) grid size in different parts of the map. Thus, in each area of the map, characterized by its data density, we might expect that the number of grid points would be broadly comparable with the number of data points (so that, in a gross sense, the gridding operation would keep the information content constant). Modern programs running on fast machines allow this to be done; the interpreter must ask whether his program is of this type. If it is not, the interpreter must expect a compromise grid size.

Is the final map at risk by reason of that compromise? It could be. Where we know that there is some critically important feature in one part of the reservoir, and we have deliberately shot lines closely spaced in that area, any such compromise of grid selection must defeat our purpose, and waste money. But in general, we have already observed that the honoring of all detail along all seismic lines imparts an "unnatural" appearance to the map, and therefore a compromise may be acceptable if it is a good one. For present purposes, the point to stress is that the appropriate choice of grid size is very important, and that an error made in this choice cannot be recovered by subsequent manipulations. When the novice first uses a contouring program, therefore, he is happy to see that the machine makes so many choices for him; as he becomes more expert, he starts to worry about those choices. So, now and then, on a particularly important map, he overrides the machine's choice of grid size, and halves it; this gives him a good feeling for any additional detail that is available from the data, and the additional cost incurred to obtain it.

Calculating Grid Values

When a grid has been chosen, the contouring program proceeds to calculate new values (grid values) at each intersection (node) of the grid. These values subsequently replace the original data values, so we must think rather carefully about how to compute the grid values from the data values. If a grid node happens to coincide with a data point, the grid value should clearly honor the data value. If a grid node happens to fall on a seismic line, between two data points, then we would expect to get the grid value by a simple interpolation. But for a grid node between seismic lines we have more of a problem; we must look in all directions from the grid node to find data points, and then we must average the data values in some way that ascribes the greatest weight to the nearest values. Figure 1 attempts to formalize our thinking.
Figure 1
Along the horizontal axis we plot the distance from the grid node to a data value, and along the vertical axis we plot what we might loosely call the "relevance" of that data value to the calculation of the grid value. If the distance is very small the data value substantially determines the grid value, but as the distance increases it becomes less and less relevant. Indeed at some distance (here called the "range") we may judge that the data value is no longer material to the calculation. Now we can apply this to Figure 2 , in which the small triangles represent the grid nodes and the small squares represent the data points; the exercise is to compute the grid value at the center of the circle.
Figure 2
The radius of the circle represents the range as defined in Figure 1 ; we shall take into the calculation only those data points within the circle. We wish to give the greatest weight to the closest data point (C), and less weight to the more distant data points (B and D). Because the data points and the grid nodes are all kept in the machine as pairs of rectangular coordinates, the calculation of the distances from grid node to data points is done by Pythagoras. Consequently, a weighting system based on the inverse square allows us to avoid taking a square root. The most common weighting functions are therefore the inverse square and the inverse fourth power; the inverse sixth power has also been used. The value adopted as the grid value, according to this scheme, is a weighted average of the data points used, with the weights computed as inverse powers of the distances and normalized so that they sum to unity.

We need to realize two major consequences of this. The first is that the computed grid values may no longer honor the data values; where a grid node coincides with a data point, the influence of other data points may cause the two values to differ. The second major consequence is that, by the choice of grid size, range, and weighting function, we have effectively limited the geology that the program can handle. This second consequence is worth stressing, because the mathematical selections made within the program may not be revealed to us, and because--even if they are--it is not easy to interpret them directly in terms of the tightest fold that we shall be able to see on the map. Again we see the wisdom of an occasional test, on critical data; if the program allows us to change the weighting function (giving us several displays as typified in Figure 3 ), we can check that we are not losing anything important by accepting the program's inbuilt choices.
Figure 3
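To make the scheme concrete, here is a minimal Python sketch of the calculation of one grid value by distance-weighted averaging within a range; the range, the power, and all names are illustrative assumptions, not any particular commercial algorithm. Note that because the weights are computed from the squared distances, the inverse-square choice (power = 2) indeed requires no square root.

    import numpy as np

    def grid_value(node_xy, data_xy, data_z, search_range=1000.0, power=2):
        """Distance-weighted average at one grid node, using only the
        data points within the range; weights are inverse distance to
        the chosen power, normalized to sum to unity."""
        d2 = np.sum((data_xy - node_xy) ** 2, axis=1)   # squared distances (Pythagoras)
        inside = d2 < search_range**2
        if not np.any(inside):
            return np.nan                               # no control within range
        w = 1.0 / np.maximum(d2[inside], 1e-9) ** (power / 2)
        w /= w.sum()
        return float(np.dot(w, data_z[inside]))

    pts = np.array([[0.0, 0.0], [400.0, 100.0], [900.0, -200.0]])
    z = np.array([820.0, 835.0, 841.0])
    print(grid_value(np.array([300.0, 0.0]), pts, z))    # dominated by the nearest point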
Even as the programs become smarter and smarter (automatically adjusting the grid size, the range, and the weighting function to the data), there is still some wisdom in checking that the program's inbuilt decisions are appropriate to our prospect. That is one of the reasons that we should preferably regard machine contouring as a two-stage operation: an initial hands-off operation controlled by the program's inbuilt wisdom, and a later interactive stage where the interpreter at the workstation tests the sensitivity of the map (over critical areas) to the variables chosen by the program. The next problem that the program must tackle is that the averaging technique described above is not appropriate for computing the grid values near highs and lows. The problem is illustrated in Figure 4 (a); here we are computing the grid value at the node position shown dashed, and for simplicity we assume that the only data points (D1 to D4) within the range are along a line through the node.
Figure 4
Then the effect of the distance-weighting is that the new grid value is an average of a large contribution from D2 (shown projected to the grid node), a smaller contribution from D3, and yet smaller ones from D1 and D4. Overall, this yields the distance-weighted average ZN, and suggests the surface shown in Figure 4 (b). The weakness of the technique is now obvious: the node value, being an average, can never be higher or lower than the contributing data values, and any true high or true low at the grid node is suppressed. The contour map becomes full of small flat areas. This can be overcome with the technique of dip projection. In the simple two- dimensional view of Figure 4 (b), we could compute the dip between data points D1 and D2, and project it to the dashed line at the grid node; then we could do the same for data points D3 and D4, and average the two resulting intersections at the node. In the full three-dimensional world the task is more onerous, but the principle is the same; each grouping of three data points defines a dip plane, which can then be projected to the grid node (usually within limits set to constrain "wild" dips). A two-dimensional view of this is suggested in Figure 4 (c); the new distance-weighted average ZN reveals the high. Alternatively, we can use the surface-fitting techniques discussed at the beginning of this chapter. We can consider using these, we remember, over local areas; in the present application, over the small extent of one circular range, the number of data values is manageable, and the surface can be assumed simple enough to require few terms. When a surface has been fitted acceptably well to the data points within the range, the value to be ascribed to the grid node follows immediately ( Figure 4 (d)). The program must also tackle another problem consequent on the highly variable density and distribution of seismic data points. Figure 5 ,
Figure 5
Figure 6 ,
Figure 6
Figure 7 and Figure 8 illustrate that this problem is not a trivial one.
Figure 7
In Figure 5 a contour map representing "truth" has been drawn from 100 data points.
Figure 8
The other three figures show contour maps derived from three different groupings of about 20 of these data points, as indicated; clearly they are very poor representations. Figure 9 shows how some of these harmful effects arise.
Figure 9
Grid node A has many data points within its range, and they are distributed fairly well in direction. Grid node B, on the other hand, has only two data points within the same range, and they are both in a northerly direction. A grid node only slightly to the southwest of B would have only two points in that different direction, as would a grid node only slightly to the southeast. If the data values in these different directions are themselves different, wild swings must occur between closely adjacent grid values; this is the unrealistic effect we are seeing to the northeast and northwest of Figure 7 . One solution to this problem is that of quadrant or octant searching. In Figure 10 we take grid node B from Figure 9 and divide it into four sectors.
Figure 10
In the original range, the two data points lie in only the northwest quarter; the other three quarters are empty. To the southwest and southeast, the two nearest data points may be found in each quadrant only slightly outside the range; to the northeast two may be found by going a little farther. A suitable course is clear: insist on two data points in each quadrant--the nearest two--and rely on the distance weighting (within a much-extended range) to reduce the contribution of distant points. Figure 11 extends the same technique, demanding two points in each octant.
Figure 11
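A minimal Python sketch of the quadrant search (octant searching is the same idea with eight sectors); the sector assignment and the choice of the nearest two points per sector are the illustrative assumptions here. The selected points would then be combined by the distance weighting described earlier, within a much-extended range.

    import numpy as np

    def quadrant_select(node_xy, data_xy, per_quadrant=2):
        """Indices of the nearest few data points in each quadrant
        around a grid node, so that no single direction dominates
        the average (a sketch of quadrant searching)."""
        dx = data_xy[:, 0] - node_xy[0]
        dy = data_xy[:, 1] - node_xy[1]
        d2 = dx**2 + dy**2
        quadrant = (dx >= 0).astype(int) + 2 * (dy >= 0).astype(int)  # 0..3
        chosen = []
        for q in range(4):
            idx = np.flatnonzero(quadrant == q)
            if idx.size:
                chosen.extend(idx[np.argsort(d2[idx])][:per_quadrant])
        return np.array(chosen)

    pts = np.array([[-50.0, 300.0], [-80.0, 350.0], [600.0, -40.0],
                    [-400.0, -500.0], [900.0, 800.0], [120.0, 700.0]])
    print(quadrant_select(np.array([0.0, 0.0]), pts))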
These techniques do much to overcome the unrealistic effects seen in Figure 5 , Figure 6 , Figure 7 and Figure 8 . To do so, however, they may have to consider more data points and greater distances than before; this increases both the cost of the contour map and its degree of smoothing. It is not easy for the interpreter to assess what appear to be rather arbitrary choices--the choice of octants, the choice of the number of points in each octant, and the choice of the distance weighting--in terms of the limit they impose on the maximum dip that the map can finally show. One way to get some feeling for this, at the interactive stage, is to call for a color-coded plot of the failure of the contour map to honor the original data values.

Constructing Contours

Once all the calculations are made at the grid nodes, the original and irregular data values are replaced by the regular grid values. The operation of drawing the contours themselves then becomes comparatively simple. Figure 1 illustrates one grid cell, having the grid values 39, 55, 70 and 45 at its nodes.
Figure 1
The "entry" points for the contours, on the west and south sides of the cell, have already been found; we can quickly verify that these entry points do represent linear interpolation between the values at the nodes. Since any contour entering a cell must also leave it, we can now calculate the "exit" points along the east and north sides of the cell, again by linear interpolation. As a first approximation, the desired contours are given by straight lines between corresponding entry and exit points. The operation then proceeds to the next cell, for which the previous exit points become entry points. This builds up the contours as successions of straight segments; each segment represents one grid cell, and each may be defined by the coordinates of entry and exit points, or as a vector. Figure 2
Figure 2
and Figure 3 remind us that the problem of a saddle, which arises when contouring by hand, is not removed merely because the work is being done by a machine.
Figure 3
In contouring the grid values of Figure 2 by hand, we would immediately recognize the ambiguity in the center cell. So we would sketch in the intermediate contours at 7, 6, 4, 3 and 2 (whether or not we were actually planning to display these contours on the final map), and check whether they favored one or another of the alternative contourings. If not, we would probably check whether the southeast high was in fact a nose open to the southeast (or the northwest high a nose open to the northwest); then considerations of permissible optimism would allow us to adopt the full contours within the center cell, and so close one of the highs. If, however, both highs were closed, permissible optimism would allow us to adopt the dashed contours, and so increase the closed area. Thus, while recognizing (and flagging) the ambiguity, we would be able to find other input to resolve it. Unless and until our programs have the same degree of sophistication, the machine looks at the center cell and cannot choose between the straight-line segments of Figure 3 . Some arbitrary rule (such as "high ground to the right") must therefore be written into the program. Again the map should be flagged to indicate the ambiguity; then the human interpreter can address the options at the interactive review stage. Figure 4 illustrates that the straight-line contour segments obtained by the above techniques are not acceptable (unless perhaps the grid size is extremely small).
Figure 4
The contours must be smoothed. This is not as simple as it sounds; the smoothing of individual contours is not generally satisfactory, because it does not guarantee proper contour spacing, and may even cause the contours to cross. Instead, the program must find some way to smooth the surface represented by the contour segments, and then call out the contours from that smoothed surface. Figure 5 shows the map in which the contours have been derived from local surface-fitting over a moving and overlapping array of grid cells; this may be done, for example, by finding the best fit of a paraboloid to the array of grid values.
Figure 5
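For concreteness, here is a minimal Python sketch of the basic cell-crossing arithmetic, applied to the cell of Figure 1 ; the assumption (purely illustrative) is that its node values run anticlockwise from the southwest corner.

    def cell_crossings(corners, level):
        """Points where a contour level crosses the edges of one unit
        grid cell, by linear interpolation; corners are the node
        values in the order SW, SE, NE, NW. Joining the crossings in
        pairs gives the straight contour segments; a saddle cell
        yields four crossings, and the program's arbitrary pairing
        rule (such as "high ground to the right") must then decide."""
        xy = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
        crossings = []
        for i in range(4):
            a, b = corners[i], corners[(i + 1) % 4]
            if (a - level) * (b - level) < 0:
                f = (level - a) / (b - a)
                (x0, y0), (x1, y1) = xy[i], xy[(i + 1) % 4]
                crossings.append((x0 + f * (x1 - x0), y0 + f * (y1 - y0)))
        return crossings

    print(cell_crossings([39.0, 55.0, 70.0, 45.0], 50.0))  # entry and exit of the 50 contour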
Gridding Approach: Advantages and Disadvantages

Most of the gridding approach's advantages flow from the use of rectangular coordinates. It is easy to compute closed areas and closed volumes; it is easy to apply two-dimensional smoothing techniques; and it is easy to subtract one structure map from another structure map to obtain a valid interval map. It is also easy to construct isometric (or perspective) displays of the contoured surface; Figure 6 (the spikes represent wells and the distance scale is only for the near portion of the display) shows such a construction.
Figure 6
Further, the gridding approach appears to remove the bias associated with clustered data points (although, as we have seen, this may be an advantage or a disadvantage) . One disadvantage of gridding is the total dependence of the result on the choice of grid size, which is made right at the beginning; this becomes less serious as faster machines allow smaller or variable grid sizes. A second disadvantage is that the method may not honor the data values. A third is that the method is weak near the edges of the map; independent maps of contiguous areas require a harmonization step before they will tie. (Of course, this is true to some extent with any system, but it tends to be worse with a gridding method.) A related disadvantage is that the gridding approach does not really lend itself to the treatment of faults. In computing the grid values from the data values, the quadrant or octant search must be switched off when it reaches a fault; this inevitably means that the grid values near the fault are less soundly based than usual. Commercial gridding programs apply considerable ingenuity to the minimization of this problem. For example, the data values on the far side of the fault may be allowed some small contribution, as in Figure 7 , by computing their weighting distance around the ends of the fault.
Figure 7
A constraint may be included on the rate-of-change of throw along the fault (if we know how to specify such a constraint). And, for those cases where the faulting occurred after the folding, the program can take out the fault, and harmonize the structure across the fault. Nevertheless, it probably remains true that the gridding approach is not a "natural" way to accommodate faults. The program is forced to take steps that appear contrived and artificial. Indeed, as we seek to build into the program more and more of the criteria that a human interpreter would use, we get the feeling that it might be better to use a different basic approach--one that would follow more closely the mental workings of the human interpreter. This leads us to the triangular approach to contouring.

Triangular Approach

Figure 1 attempts to analyze what goes on in the mind of a human interpreter as he contours.
Figure 1
He brings the contour to the point X as an interpolation between data values A and C, and then looks at the triangle ABC to see where the contour goes next. It may go out through AB or BC, or it may double back and go out through AC; if it is through AB, he takes the contour out as an interpolation between A and B. As a refinement, he may deviate from a linear interpolation to take account of marks he has made to indicate a high or a low, or to take account of evident changes of dip occurring just outside the triangle ABC, or to reflect less weight given to a distant data point. To computerize this operation, we start by linking all the given data values ( Figure 2 (a)) to form an array of triangles ( Figure 2 (b)).
Figure 2
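The interpolation at the heart of this step is simple enough to sketch directly. A minimal, hypothetical helper that finds where a given contour level crosses the edges of one triangle, assuming straight-line contours within the triangle:

def contour_crossings(level, triangle):
    """Where a contour of the given level crosses the edges of one triangle.
    Each vertex is (x, y, z); linear interpolation along each edge, exactly
    as in the hand method of interpolating between A and C, then A and B.
    Returns the (usually two) crossing points."""
    points = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = triangle[i], triangle[(i + 1) % 3]
        if (z1 - level) * (z2 - level) < 0:      # level lies between z1 and z2
            f = (level - z1) / (z2 - z1)         # fractional distance along edge
            points.append((x1 + f * (x2 - x1), y1 + f * (y2 - y1)))
    return points

# Triangle ABC with values at the corners; the 120 contour enters and leaves.
A, B, C = (0.0, 0.0, 100.0), (1.0, 0.0, 130.0), (0.0, 1.0, 110.0)
print(contour_crossings(120.0, (A, B, C)))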
Then, for each contour entering a triangle, we can find where it leaves that triangle and enters another; each individual contour is thereby traced across the map ( Figure 2 (c)) . As long as we are content with linear interpolation, this approach is delightfully simple, and very fast. To simulate the human interpreter, however, four additional concerns must be addressed: the answer must not depend on the way the triangles are chosen; the contours must be made smooth; the highs and lows must be added; and the interpolation must be modified to take some account of dips evident in the surrounding triangles. Figure 3 illustrates the first concern; clearly, in the quadrilateral PQRS, the linkage may be made from Q to S or from P to R.
Figure 3
A rule that makes the choice unique, and that also mimics what the human interpreter would do, is to prefer the linkages that make the triangles most nearly equilateral (that is, QS rather than PR). This decision, however, raises the equivalent of an earlier problem: with the fairly regular data of Figure 3 the triangles can be close to equilateral, but for seismic data along lines (Figure 4, where the nature of the data is not conducive to equilateral triangles) the triangles tend to be long and thin.
Figure 4
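The most-nearly-equilateral rule is the classical Delaunay criterion, and the thin-triangle problem is easy to demonstrate with standard libraries (the library choice and the angle measure are ours, not the text's):

import numpy as np
from scipy.spatial import Delaunay

def min_angle_deg(pts, tri):
    """Smallest interior angle of one triangle, as a quality measure."""
    a, b, c = pts[tri]
    angles = []
    for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
        u, v = q - p, r - p
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    return min(angles)

rng = np.random.default_rng(0)
# Evenly scattered data: triangles come out reasonably close to equilateral.
scattered = rng.uniform(0, 10, size=(60, 2))
# Seismic-line data: dense along a few widely spaced lines.
lines = np.array([(x, y) for y in (0.0, 5.0, 10.0) for x in np.arange(0, 10, 0.5)])

for name, pts in (("scattered", scattered), ("line data", lines)):
    tri = Delaunay(pts)
    quality = np.mean([min_angle_deg(pts, t) for t in tri.simplices])
    print(f"{name}: average minimum angle = {quality:.1f} degrees")

The line data produce a markedly smaller average minimum angle: long, thin triangles.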
The first task of the program, then, is to optimize this choice of triangles. If the best that can be done is not good enough, "fake" data points must be inserted to make the geometry acceptable, and the values to be assigned to these points must be computed by distance-weighted averaging techniques. Our second concern is to make the contours smooth; the straight-line contour segments must be made shorter. Figure 5 gives us the key to this; divide each triangle between data points into many similar sub-triangles, and trace the contour across each of these smaller triangles as before.
Figure 5
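This subdivision scheme is implemented in standard plotting libraries; a sketch (the library choice is ours, and each refinement pass splits every triangle into four similar sub-triangles, so subdiv=2 corresponds to dividing each side into four):

import numpy as np
import matplotlib.tri as mtri

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 10, 40), rng.uniform(0, 10, 40)
z = np.sin(x / 3.0) * np.cos(y / 3.0)          # values at the data points

tri = mtri.Triangulation(x, y)                 # link the data into triangles
refiner = mtri.UniformTriRefiner(tri)
# refine_field interpolates z onto the sub-triangle corners (by default with
# a smooth cubic interpolator, anticipating the dip-fitting discussed below).
fine_tri, fine_z = refiner.refine_field(z, subdiv=2)
print(len(tri.triangles), "triangles ->", len(fine_tri.triangles), "sub-triangles")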
Now we must confront the other concerns above; we have to put in the highs and the lows, and we have to do better than linear interpolation. At this stage the triangular approach faces some of the same problems encountered with the grid approach: in effect, dip projection for the highs and lows, and the proper balancing of near and distant data. It must do so, however, without the advantage of the simple coordinate system provided by a grid. As with the grid approach, the critical step is the fitting of a mathematically defined surface to some local grouping (or patch) of data values, and then the use of that smooth surface to impose local smoothness on the map. Let us take this concept a little further (though still at a simplified level) using Figure 6.
Figure 6
We propose to fit a "standard" curved surface to the three data values A, B and C, and so to define the contours within the shaded triangle. Immediately it is evident that if this surface within the shaded triangle is to be continuous with the surfaces fitted to the surrounding triangles, the surface must fit not only the data values but the dips at A, B and C. To estimate the dip at point A requires the calculation of the seven dip planes ALD, ADE, AEB, . . . , AKL. At point B it requires five, and at point C six; in all, the calculation of the three dips requires 18 dip planes, using 12 data values. Once we have the three data values and the three dip values at A, B, and C, we can nominate a suitable standard surface (for example, a paraboloid or a hyperboloid) and find which "size" of surface fits both the data values and the dips. Then we can compute the values of that surface at the corners of all the sub-triangles of Figure 5. That done, we can trace the contours across the sub-triangles as before; in general the contours will be curved, but formed of a number of straight-line segments. Figure 7, Figure 8, Figure 9, and Figure 10 illustrate results obtained with a commercial program.
Figure 7
Figure 8
Figure 9
Figure 10
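The fitting step itself can be sketched with a general quadratic surface and a least-squares solve (this is our simplification; the actual surface and method used by commercial programs are not specified here):

import numpy as np

def fit_quadratic_patch(corners, dips):
    """Least-squares fit of z = a + bx + cy + dx^2 + exy + fy^2 to three
    corner values and three estimated dip vectors (dz/dx, dz/dy).
    Nine equations, six unknowns; solved in the least-squares sense."""
    rows, rhs = [], []
    for (x, y, z), (gx, gy) in zip(corners, dips):
        rows.append([1, x, y, x*x, x*y, y*y]); rhs.append(z)    # value
        rows.append([0, 1, 0, 2*x, y, 0]);     rhs.append(gx)   # dz/dx
        rows.append([0, 0, 1, 0, x, 2*y]);     rhs.append(gy)   # dz/dy
    coeffs, *_ = np.linalg.lstsq(np.array(rows, float),
                                 np.array(rhs, float), rcond=None)
    return coeffs

def evaluate(coeffs, x, y):
    a, b, c, d, e, f = coeffs
    return a + b*x + c*y + d*x*x + e*x*y + f*y*y

corners = [(0, 0, 100.0), (1, 0, 130.0), (0, 1, 110.0)]  # A, B, C
dips = [(30.0, 10.0), (25.0, 12.0), (28.0, 8.0)]         # estimated at A, B, C
coeffs = fit_quadratic_patch(corners, dips)
print(evaluate(coeffs, 1/3, 1/3))   # surface value at the triangle centroid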
(This program, although conceptually related to the above description, actually uses 18 data points in the calculations for each triangle.) The original data values and the original triangles are the ones shown in Figure 3 . The result of dividing each side of each triangle into four is shown in Figure 7 and Figure 8 , and the result of dividing into 16 is shown in Figure 9 and Figure 10 . We have, then, a straight choice between contour smoothness and cost; this allows us to use a small "smoothness factor" of 4 for an initial review map, an intermediate value of 8 for our work maps, and a value of perhaps 16 for final maps. One of the advantages of the triangular approach is that the contours do not change appreciably with the choice of the smoothness factor. This is in contradistinction to the choice of grid size in the gridding approach, which changes the contour smoothness but may also change the contours themselves. Other advantages of the triangular approach are that it honors every data point, and that separate maps spliced together in adjacent areas are more likely to tie than maps made with a gridding approach. It is also true that the triangular approach provides a map resolution that automatically reflects the density of the data; detail is preserved where it exists, and does not appear to be present where it does not exist. There is no risk of losing the benefit of closely spaced lines deliberately shot to resolve some critical feature. However, there remains the caution--as with any mapping system--that contours drawn in areas of sparse data are highly uncertain; the interpreter must insist that the program identify (by dashed contours, or otherwise) where the control is too sparse for the contours to be trusted. We must also remember that the triangular method is still making hidden judgments about the geology, although perhaps these are not as artificial as the judgments made in the gridding approach. Another advantage of the triangular approach is that it offers a rather more "natural" treatment of faults. Thus, provided that the digitizing of the horizon on each line includes values on the immediate upthrown and downthrown sides of the fault, the upthrown edge can be made to lie along one side of a triangle, and the downthrown edge likewise. In effect, we can look along the upthrown edge, and along the downthrown edge; the triangular mesh is "unzipped" along the fault trace. Figure 11 shows how, at the interactive stage, we can insert the geological likelihoods of fault curvature and throw variation by inserting dummy points F1-F4 along the fault, and by using them to establish new triangles.
Figure 11
The programs then allow us to display an isometric view of the fault face, and so to check its plausibility. Although the calculation of closed areas and closed volumes can be done by maintaining the triangular approach, there are some situations where the gridding approach has the advantage. Then the contour map obtained by the triangular approach may itself be gridded, and the gridded data used in the calculations of area or volume.
Interactive Workstations
In years past, the choice between hand contouring and machine contouring was a matter of sometimes heated argument. Today, the grounds for such argument have disappeared. One contributing factor is the increased power of the hardware; this has made it possible to add many software sophistications, and so to improve the degree to which the machine mimics the human interpreter, while still preserving a substantial cost advantage. The other factor has been the rise of the interactive workstation, which provides the best of both worlds. The combining of the interpreter and the machine, at the interactive workstation, has an altogether new power. Some features of this new power are as follows. The contouring program is no longer obliged to come up with an answer--regardless of how far out it must reach for a data point, or how unsound the interpolation. It can identify and present the problem, and ask for help. The program can test different variables, present the resulting contours, and ask the interpreter whether the difference is material or not. If it is, the evidence for a judgment can be gathered, and the outcome of different judgments displayed on the screen by dissolving from one to the other, or in split-screen mode. To assist the accommodation of geological insights into the decisions, the machine can display sections across the contour map, fence diagrams, rotatable three-dimensional projections, interval maps, interval sections, interval fence diagrams, and three-dimensional views lit at different sun angles. A particularly satisfying presentation is the rotatable "floating contour" display of Figure 1.
Figure 1
The program can accommodate the geological biases of the interpreter, by inserting defined trends (of variable intensity) into the contouring calculations. This makes it possible to state and quantify the influence of a trend--something that has certainly defied the human interpreter. The program can perform many checks on the combined interpretation as it proceeds. Thus, faults can be restored, and the balance of volume before and after faulting can be checked. Different alignments of channels can be tested, with the base of the channel restored to a plausible gradient. And effects ascribed to sliding or subsidence or compaction can be tested for quantitative reasonableness. The mind can run on, visualizing the possibilities raised by the combination of man and machine. However, for the foreseeable future it seems likely that the final judgment will rest with the human interpreter. In the present, as in the past, the expert interpreter is an expert contourer. Now, however, he needs an additional skill: to understand how the machine contours. He must know what it does when the data are dense, what it does when the data are sparse, how it looks at faults, what it does to highs and lows, what it does at the edge of the map, what it does in the face of irresoluble ambiguity, and what it does when no contouring could possibly be honest. Now he needs to be a double expert.
INTERVAL MAPS
In several ways, the contouring of a time interval is easier than the contouring of time structure. First, we are vastly relieved to see the end of several types of mistie: statics, datum corrections, polarity, and reflection character. We do not see the end of processing misties that are time variant: differences of stacking velocity or mute from line to line, or time-variant filters that do not follow the geology. And where dips are significant, we do not see the end of navigation misties or migration misties. However, misties are so irritating, and therefore seem to consume so much of our time, that any reduction is very welcome. The other ways in which the contouring of an interval is easier are concerned with the geological constraints. Thus the agencies affecting structure include the total regime of compressional and extensional force throughout all time, plus the reaction of different rock materials in terms of flexing, fracturing, faulting, sliding, or flowing--in fact, all the complexities of rock mechanics. The agencies affecting an interval, however, are just deposition, erosion and compaction--agencies that are much better understood. Obviously the most direct way to make an interval map is to subtract the time (or depth) of one horizon from that of the other, at each data point, and to contour these intervals. Then, in the contouring, we are guided not by the geological constraints on structure but by the simpler (and stronger) constraints on thickness. We may also make an interval map by contouring the two horizons as structure maps, and then subtracting one map from the other. This is readily done, for example, with machine-made maps using either the gridding or the triangular approach. However, only for the simplest structures could this be as good as the contouring of the interval itself, because the benefit and strength of the simple constraints on thickness have been thrown away. What we must not do, of course, is contour the two structure maps and the interval map independently. The interval map may be viewed as contouring the interval between two structure maps, or (better) the lower structure map may be viewed as the upper structure map plus the interval. But we must not make a suite of three maps that agree only at the data points. The check that we can make on this is to overlay the structure maps. This overlay also gives us the method for constructing an interval map from two structure maps contoured by hand; we derive a supplementary data point each time a contour of one structure map crosses a contour of the other structure map, and we contour the interval using the combination of original and supplementary data points. There are two cautions we should keep in mind. One is that the picking or digitizing error on a structure map may be doubled on an interval map. This does not concern us too much where the interval is smoothly varying; we can smooth out the "jitter" before (or as) we contour. It can be a problem, however, where we are mapping an interval to find a river channel, or as an indicator of the possible proximity of reefs. Another caution arises from purely seismic considerations, in the presence of significant interval thinning and in the presence of dip. Thus, Figure 1 (a) reminds us that where the interval is thinning, the normal-incidence paths to top and base are not coincident, so that the time difference does not strictly measure the interval at all; this is true even if one of the reflectors has no dip.
Figure 1
It also reminds us that in the presence of dip the interval measurement on the unmigrated section is not made at the "right" place. However, the interval measurement on the unmigrated section is made (probably as we would wish) approximately normal to the layer, although of course it is presented on the section as a vertical interval. On the migrated section of Figure 1 (b) the measurement (which again is presented as a vertical interval) is indeed of the vertical interval; this is probably not as we would wish. The measurement is in the "right" place, but its accuracy depends on the validity of the migration (in particular, the aptness of the migration velocities). In Figure 1 and Figure 2, then, we can see the justification for the earlier comment that the contouring of intervals still leaves us exposed to migration misties.
Figure 2
Despite these cautions, the making of interval maps is an extremely useful part of exploration. There are four major uses of the interval map, as follows. 1. The interval map representing the thickness of certain geological units has a characteristic form, by which the unit may be recognized. Figure 2 (a schematic of a fan-delta system) and Figure 3 (an isopach of the late Pennsylvanian S3 sandstone in the Mobeetie field of the Texas panhandle) illustrate this for the case of a fan delta.
Figure 3
The interval map (in this case an isopach map) reveals the direction of sediment transport; when coupled with an understanding of the depositional mechanism, the isopach map indicates where the best reservoir quality is likely to be found. Another important example is the erosional channel. This may be revealed as a thickening in the unit above or a thinning in the unit below; in either case the critical diagnostic is the appearance on the map of a recognizable drainage pattern. A third important example is the identification of a reef (or other body resistant to compaction) by the differential compaction of an interval above it. 2. As suggested above, the preparation of successive interval maps is often the best way to contour structure. Thus we agree that the shallowest reflector is likely to be the least structurally contorted, and hence the safest to contour first. If we then map the interval between first and second reflections, the strong geological constraints on the thickness of an interval allow us to be confident of our interval map . . . which we then add to the first structure map to obtain the second structure map . . . and so on down. 3. This approach becomes very important where the direct contouring of the deep structure is made problematical by the sparseness or the quality of the data. Figure 4 makes the point for well data, but it is equally applicable to the case where deep reflections are poor, or lacking in continuity.
Figure 4
If the contouring of the deep surface uses only the deep data it will be as shown dashed, which is geologically unsound. Again, a better course is to map the shallow structure, with its abundant control; to map the interval, with its poorer control but strong geological constraints; and to map the deep structure as the sum of these two maps, obtaining the configuration shown by the full line. This technique is particularly important where a highly critical surface (such as the top reservoir) generates only a very weak or sporadic reflection, but where a surface just above or below generates a reliable and mappable reflection. The information on the interval may be obtained from the sporadic reflection where it appears, or from well control; in either case the strong geological constraints on the interval are available to guide the contouring. 4. Perhaps most important of all is the use of interval maps to unravel the geology and its history. Examples include updip thinning indicating uplift, updip thinning indicating a gradient change in the depositional surface, downdip thickening indicating subsidence, downdip thinning indicating sediment starvation, and spurious downdip thinning caused by a velocity increasing with depth. All of these are important, but the one that is often critical to a prospect is updip thinning indicating uplift. One situation we delight to discover is that of a structure that has been growing gently over a long period of time; sandstones tend to be coarser over the high, limestones tend to have local porosity, and the chances of a trap at the time of hydrocarbon migration are maximized. Indeed, there are several petroleum provinces in which barren structures exist side-by-side with prolific structures, and where the key is to time the structural uplift as predating (or coinciding with) the period of hydrocarbon migration; the barren structures are the ones formed after the hydrocarbons had passed through. We may also make good use of the technique of datuming, which involves constructing the attitude of deeper horizons at the time of deposition of a shallow reference horizon; clearly this is conceptually equivalent to the mapping of intervals. Not only is there a revelation of depositional history; often we begin to see subtle features that would be totally missed on a structural map. Figure 5, Figure 6, and Figure 7 show an example where late tilting has drastically changed the attitude of the older beds; any gentle structural variations on horizon A would not be seen on the structure map of Figure 6. The interval map of Figure 7 shows us the structural configuration of the surface A at the time represented by horizon B; the subtle variations are now clearly seen.
Figure 5
Figure 6
Figure 7
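The datuming arithmetic is the same as interval mapping, and a toy example shows why it reveals what the structure map hides (all values here are synthetic assumptions for illustration):

import numpy as np

# Structure profiles (depth) for a deep horizon A and a shallower reference
# horizon B, on the same mesh.
x = np.linspace(0.0, 10.0, 200)
late_tilt = 50.0 * x                               # tilting after deposition
horizon_B = 1000.0 + late_tilt
horizon_A = 1500.0 + late_tilt - 5.0 * np.exp(-(x - 5.0)**2)  # subtle feature

# Datuming: restore A to its attitude at the time represented by B.
# This is just the interval between the two horizons.
datumed_A = horizon_A - horizon_B

print(np.ptp(horizon_A))   # structural relief dominated by the late tilt (~500)
print(np.ptp(datumed_A))   # the subtle 5-unit feature now stands out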
Areas and Volumes
If our contour map reveals a prospect, and the geologists are comfortable about the likelihood of a source, a reservoir, and a seal, the next step is to form some idea of the economics of the prospect. At the simplest level, experience in the area may suggest that the recoverable reserves are so many tonnes of oil, or so many standard cubic meters of gas, per cubic meter of reservoir rock (or barrels of oil, or standard cubic feet of gas, per acre-foot of reservoir rock). Then all we have to do is to measure the closed area of the structure, and estimate (or guess) the reservoir thickness. Of course, in measuring the closed area we do not know (in general) that the structure is full; all we can do is to measure the area within the contour that passes through the spill point, and accept that this represents a maximum. On hand-contoured maps the area within the closing contour can be assessed with a planimeter. Because of the risk of slippage, most interpreters like to take two or three measurements, with different base positions; however, most would probably allow that they are normally surprised by the consistency of the device. On machine-contoured maps, of course, the calculation of closed area needs no more than the specification of the spill contour. The volume estimated from area and thickness is only an approximation, of course. In Figure 1 (a),
Figure 1
where the reservoir is thin, we see that the estimate is a little too small because the area along structure is actually more than the area on the map, but a little too large because the edges of the oil or gas are feathered by the horizontal contact at the spill point. However, these errors are usually minor compared to the general uncertainties of reserves estimation; they are often ignored for reservoirs in which the closure is large compared to the thickness. For thicker reservoirs, the shape of the hydrocarbon volume must be taken into account. In Figure 1 (b) and Figure 1 (c), the closed area is circular, the reservoir thickness is greater than the hydrocarbon column, and the structure is sinusoidal in section; then the volume of rock containing hydrocarbons can be calculated as a solid of revolution. It is only 30% (yes, thirty percent) of the product of closed area and closure. In general, the closed volume can be calculated as the sum of the horizontal slices represented by the contours. As is clear from Figure 1 (d), this improves in accuracy if the contour interval is small (or if we add intermediate contours, on structure, specifically for this purpose). Then the calculation of volume for each horizontal slice is just the contour interval times the area of the contour. Obviously we can be more sophisticated than this if we wish, but again we have to remember that the other uncertainties make it pointless to attempt great precision in reserves estimation. We remember that one of our obligations as interpreters is to communicate to exploration management everything we know about the balance between reward and risk.
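The slice-sum is easy to sketch, and it also lets us verify the 30% figure for a dome that is sinusoidal in section with circular contours (a synthetic check under those stated assumptions, not a reproduction of the text's figures):

import numpy as np

def polygon_area(xy):
    # Shoelace area of one closed contour, given as an (n, 2) array of points.
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Volume as a stack of horizontal slices: contour interval times the area
# within each contour, summed from spill plane to crest.
R, c, n = 1000.0, 100.0, 400        # dome radius (m), closure (m), slices
dz = c / n
t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
volume = 0.0
for h in np.arange(dz / 2, c, dz):  # height of each slice above the spill plane
    r = (R / np.pi) * np.arccos(2.0 * h / c - 1.0)   # radius of that contour
    contour = np.column_stack((r * np.cos(t), r * np.sin(t)))
    volume += polygon_area(contour) * dz
print(volume / (np.pi * R**2 * c))  # ~0.30 of closed area times closure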
A familiar situation in this context arises when there are two or more possibilities for the spill point. In Figure 2,
Figure 2
for example, the three formations upward from the reservoir represent a good seal, a doubtful seal, and another good seal--how does the reward vary with the risk represented by that doubtful seal? We compute the closed volumes given by both the dotted contacts shown; then we report the two levels of reward, each with its corresponding level of risk. More generally, we can plot the closed volume as a function of a range of assumptions about the spill point, or as a function of the degree to which the structure is full. Where the prospect is cut by a lease line or a block boundary, we need to know how the ownership changes with these variations. Perhaps the other fellow should drill first. The calculation of closed volumes may also be relevant to the judgments we make as we are contouring, or interpreting faults. Thus we have agreed that where we cannot choose between two possible contourings, or two fault linkages, we will generally give first place to the more optimistic interpretation. But the more optimistic interpretation is not necessarily the one that yields the greater closed area, because the different interpretations may lead to different spill points, and hence to different heights of hydrocarbon column. Therefore we have to make both maps, and compute both closed volumes, before we can decide which interpretation is the more optimistic.
Two-Dimensional Filtering: Trends and Residuals
We accept that it is usual to apply some judicious smoothing as we pick the reflections, and again as we contour the map. By "judicious" we mean that we consciously exclude any variations that we deem to be noise, or not geologically significant in the context of the exploration play. When we perform the picking or the contouring by machine, we hope that the program makes comparable judgments. In the sense that the picked horizon on the section represents a waveform, we may consider the smoothing of our picks as a low-pass filter. The very localized variations, with their high spatial frequencies, are suppressed. Similarly, the smoothing applied in the contouring is a two-dimensional low-pass filter--applied during the mapping, rather than along the lines. Sometimes, after all this filtering is complete and the map has been made, we wish to select or enhance one of the many messages that the map may contain. Thus, a further two-dimensional low-pass filter, applied to the map itself, could be expected to enhance the low-frequency trends. A two-dimensional high-pass filter, on the other hand, could be expected to suppress these trends, and to enhance the local variations. These filtering operations are most easily visualized on gridded data. Figure 1 illustrates a rectangular grid of data and the machine-contoured map derived from it.
Figure 1
We can make a low-pass (integrating) filter for a waveform by replacing each sample by the average of it and its neighbors; similarly we can make a low-pass filter for a surface by replacing each grid value by the average of it and its neighbors. In Figure 1, for example, we might replace each grid value by the average of the values falling within a defined circle centered on the grid node; we might use the circle at the upper left, embracing nine grid values. We can see by eye that this would preserve the indication of the high, but remove all the detail. The filtered version would no longer represent "truth," but it would show the trend. We have a trend map. If we were to use a larger circle, even less of the high-frequency content would be preserved; only the gross low-frequency trend would survive. If we were to use an ellipse (Figure 1, lower right) instead of a circle, the filter would selectively enhance trends in the direction of the major axis of the ellipse. We make a high-pass (differentiating) filter from a simple low-pass filter by subtracting the average of the neighboring samples from the central sample. Similarly we can construct the high-pass filter corresponding to the shaded circle of Figure 1 by subtracting the average of the other eight grid values from the central grid value. On a uniform dip, clearly, the output is zero. But highly local features, standing out from the background, are accentuated. Because the output of such a high-pass filter approximates the difference between the original contour map and the low-pass-filtered trend map, it is often called the residual map. The filter given by a simple running average of sample values is a very primitive filter; the filters of Figure 1 are likewise primitive. One simple variation is to run the filter twice; on a waveform this yields a triangular weighting of sample values, and in two dimensions it yields a conical weighting. As with waveform filters, there is an infinite choice of other filters. For example, Figure 2 reminds us of the so-called "ideal" filter and the "sinc" operator that provides it; we can obtain the two-dimensional equivalent by rotating this operator about its axis of symmetry.
Figure 2
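Whatever the operator, the trend/residual construction itself is simple. A sketch with a plain running-average filter (our library choice; the square window is a stand-in for the nine-value circle of Figure 1, and the grid values are synthetic):

import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(2)
nx = ny = 64
x, y = np.meshgrid(np.arange(nx), np.arange(ny))

# A gridded surface: regional trend plus a local high plus "jitter".
grid = (0.8 * x + 0.3 * y
        + 25.0 * np.exp(-((x - 32)**2 + (y - 32)**2) / 40.0)
        + rng.normal(0.0, 0.5, (ny, nx)))

# Low-pass (trend) map: replace each node by the average of a neighborhood.
trend = uniform_filter(grid, size=9, mode="nearest")

# High-pass (residual) map: original minus trend; near zero on the uniform
# dip, accentuated over the local high.
residual = grid - trend
print(np.abs(residual).max(),            # over the local high
      np.abs(residual[:10, :10]).max())  # on the dipping background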
Once again, however, we have to remember that the filter is "ideal" only in the sense of providing a very abrupt separation between passed and blocked frequencies; it is inconceivable that such a filter could be ideal on any real geological surface. Two-dimensional filters for map work are normally symmetrical (i.e., zero-phase). This is not mandatory, of course, but only a very specific geological problem would lead us to consider any other possibility.
Map Migration
The migration of a contour map is conceptually very simple; each point on a contour is displaced in a direction perpendicular to the contour, by an amount that depends on the dip (that is, on the separation between contours). Figure 1 repeats the standard migration diagram for a constant-velocity earth (in the equations, m is the time gradient on the unmigrated map).
Figure 1
In this instance, however, the horizontal datum line is not the line of section but a line perpendicular to a contour--the line of steepest ascent; the point O is not a shot-point but a contour, perpendicular to the paper. The equations given in the figure allow us to migrate the contour, using the time gradient on the unmigrated map. In Figure 2 we see the same thing in plan.
Figure 2
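In code, the constant-velocity relations of Figure 1 can be applied point by point; a minimal sketch using h = V²tm/4 and t' = t√(1 − V²m²/4), with illustrative names and values of our own:

import math

def migrate_point(x, y, t, m, azimuth, V):
    """Move one contour point updip and retime it, constant-velocity case.
    t is reflection time, m the time gradient on the unmigrated map
    (measured perpendicular to the contour), azimuth the updip direction."""
    h = V * V * t * m / 4.0
    t_new = t * math.sqrt(1.0 - (V * m / 2.0) ** 2)
    return (x + h * math.cos(azimuth), y + h * math.sin(azimuth), t_new)

# A point on the 2.000 s contour; gradient 0.2 s/km, updip to the east.
V = 3.0   # km/s
print(migrate_point(5.0, 5.0, 2.0, 0.2, 0.0, V))  # moved 0.9 km; retimed 1.908 s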
Any point on the t contour, therefore, requires updip displacement by an amount h, and replacement with a new time value t'. When this is done for a suitable number of points on each contour, we have a new set of time points, migrated; these can then be contoured in the usual way. The contour value t, formerly at O, is now moved to P. For detailed studies on important prospects, the assumption of constant velocity may not be sufficiently close. Then it becomes necessary to include not only the vertical and lateral variations of velocity, but also refraction at the major interfaces above the contoured horizon; it becomes necessary to trace the normal-incidence ray. This is a significant complication, which is addressed in the topics under the heading "Velocity Interpretation and Depth Conversion."
REFERENCES
Ahmed, H. and Johnstad, S. E. (1990). "Improved Reservoir Mapping by Integration of Surface and Borehole Seismic Data: A Field Case in a Complex Structural Area." Expanded Abstracts, 60th Annual International Meeting. Tulsa: Society of Exploration Geophysicists.
Beyer, L. R. (1994). "Seismic Interpretation 40: Contouring with Center-line Faults: Using Fault Geometry to your Advantage." The Leading Edge, 13, no. 1, 38-41. Tulsa: Society of Exploration Geophysicists.
Beyer, L. R. (1993). "Contouring with Center-line Faults: Using Fault Geometry to your Advantage." The Leading Edge, 12, no. 12, 1157-1160. Tulsa: Society of Exploration Geophysicists.
Bolondi, G., Rocca, F. and Zanoletti, S. (1977). "Methods for Contouring Irregularly Spaced Data." Geophysical Prospecting, 25, 96-119. Houten: European Association of Geoscientists & Engineers.
Bolondi, G., Rocca, F. and Zanoletti, S. (1976). "Automatic Contouring of Faulted Subsurfaces." Geophysics, 41, 1377-1393. Tulsa: Society of Exploration Geophysicists.
Britze, P. (1998). "Interpreter's Corner - Seismic Mapping Using Trendform Gridding." The Leading Edge, 17, no. 5, 606-608. Tulsa: Society of Exploration Geophysicists.
Brown, L. F. and Fisher, W. L. (1977). "Seismic-stratigraphic Interpretation of Depositional Systems: Examples from Brazilian Rift and Pull-apart Basins." In Seismic Stratigraphy--Applications to Hydrocarbon Exploration, ed. C. E. Payton, 213-248. Tulsa: American Association of Petroleum Geologists.
Chiaruitini, C., Galuppo, P., Pipan, M. and Roberto, V. (1991). "Computer Aided Fault Pattern Analysis (CAFPA): A Prototype of an Expert System for Seismic Data Contouring." Abstract, 53rd Annual Meeting. Houten: European Association of Geoscientists & Engineers.
Dutton, S. P. (1982). "Pennsylvanian Fan-delta and Carbonate Deposits, Mobeetie Field, Texas Panhandle." American Association of Petroleum Geologists Bulletin, 66(3), 389-407.
Gariepy, B. L. (1989). "Discussion on 'Seismic Contouring: A Unique Skill', by P. M. Tucker." Geophysics, 54, 130. Tulsa: Society of Exploration Geophysicists.
Gert, A., Melnikov, P. N. and Nemova, O. (2000). "Value Estimation of Oil Reserves and Resources in Russia (By the Example of West Siberia)." Abstract, 31st International Geological Congress. Rio de Janeiro: Brazilian Geological Society.
Goudswaard, W. (1986). Seismic Interpretation. EAEG Continuing Education Course.
Hansen, R. O. (1993). "Interpretive Gridding by Anisotropic Kriging." Geophysics, 58, 1491-1497. Tulsa: Society of Exploration Geophysicists.
Harding, T. P. and Lowell, S. D. (1979). "Structural Styles, Their Plate-tectonic Habitats, and Hydrocarbon Traps in Petroleum Provinces." American Association of Petroleum Geologists Bulletin, 66(7), 1016-1058.
Jackson, P., Mason, I. M., Tu, P. and Zhou, B. (1995). "Investigation of a Radical Approach to Seismic Mapping to Faults." Abstract, 57th Annual Meeting. Houten: European Association of Geoscientists & Engineers.
Jones, T. A. and Johnson, C. R. (1983). "Stratigraphic Relationships and Geologic History Depicted by Computer Mapping." American Association of Petroleum Geologists Bulletin, 67(9), 1415-1421.
Kirkham, D. J. and Muscatello, D. J. (1987). "Map Migration and 3D Modelling in a Complex Velocity Medium." 5th Geophysical Conference. Sydney: Australian Society of Exploration Geophysicists.
Lee, R. (2001). "Pitfalls in Seismic Data Flattening." The Leading Edge, 20, no. 12, 160-164. Tulsa: Society of Exploration Geophysicists.
Li, X. and Gotze, H. J. (1999). "Comparison of Some Gridding Methods." The Leading Edge, 18, no. 8, 898-900. Tulsa: Society of Exploration Geophysicists.
Maher, S. M., Thorson, J. R., Hadley, D. M. and Swanger, H. J. (1987). "Study of Comparative Interval Velocities for Map Migration." Abstract, 57th Annual International Meeting. Tulsa: Society of Exploration Geophysicists.
Maher, S. M. and Hadley, D. M. (1985). "Development of an Accurate, Stable, and Interactive Map Migration Algorithm." Abstract, 55th Annual International Meeting. Tulsa: Society of Exploration Geophysicists.
McConnell, J. R., Chen, F. Q. and Busby, N. L. (1988). "Seismic Mapping of Deltaic Sands." Abstract, 58th Annual International Meeting. Tulsa: Society of Exploration Geophysicists.
Mesko, C. A. (1966). "Two-dimensional Filtering and the Second Derivative Method." Geophysics, 31, 601-617. Tulsa: Society of Exploration Geophysicists.
Nagea, A. (1997). "Integrated Deterministic/Probabilistic Reserve Estimation." Journal of Petroleum Technology, 49, no. 12, 1365-1366. Richardson: Society of Petroleum Engineers.
Neff, D. B. (1990). "Estimated Pay Mapping Using Three-Dimensional Seismic Data and Incremental Pay Thickness Modeling." Geophysics, 53, 567-575. Tulsa: Society of Exploration Geophysicists.
O'Brien, C. F. (1989). "Discussion on 'Seismic Contouring: A Unique Skill', by P. M. Tucker." Geophysics, 54, 1651-1654. Tulsa: Society of Exploration Geophysicists.
Robertson, C. (1990). "Geophysical Uses and Algorithms for 3-D Gridding." Abstract, 60th Annual International Meeting. Tulsa: Society of Exploration Geophysicists.
Robinson, J. E., Charlesworth, H. A. K. and Ellis, M. J. (1969). "Structural Analysis Using Spatial Filtering in Interior Plains of South-central Alberta." American Association of Petroleum Geologists Bulletin, 53(11), 2341-2367.
Sattlegger, J. W. and Egbers, H. (1987). "Three-dimensional Mapping of Horizons Picked on Two-dimensionally Migrated Seismic Sections." Presented at the 5th Conference and Exhibition of the Australian SEG, Perth.
Sattlegger, J. (1985). "Map Migration and Modeling Algorithm." Abstract, 55th Annual International Meeting. Tulsa: Society of Exploration Geophysicists.
Sattlegger, J. and Zien, H. (1998). "3D Standard and Image Ray Map Migration/Modelling as Related to Reservoir Geophysics." Abstract, 60th Annual Meeting. Houten: European Association of Geoscientists & Engineers.
Slootweg, A. P. (1978). "Computer Contouring with a Digital Filter." Marine Geophysical Research, 3, no. 4, 401-405. Dordrecht: D. Reidel Publishing Company.
Stolt, R. H. (2002). "Seismic Data Mapping and Reconstruction." Geophysics, 67, 890-908. Tulsa: Society of Exploration Geophysicists.
Stone, D. S. (1989). "Discussion on 'Seismic Contouring: A Unique Skill', by P. M. Tucker." Geophysics, 54, 267-270. Tulsa: Society of Exploration Geophysicists.
Tucker, P. M. (1988). "Seismic Contouring: A Unique Skill." Geophysics, 53, 741-749. Tulsa: Society of Exploration Geophysicists.
Walters, R. F. (1969). "Contouring by Machine--A User's Guide." American Association of Petroleum Geologists Bulletin, 53(11), 2324-2340.
Withjack, M. O. and Scheiner, C. (1982). "Fault Patterns Associated with Domes--An Experimental and Analytical Study." American Association of Petroleum Geologists Bulletin, 66(3), 302-316.
ADDITIONAL READING
Brown, A. (1999). Interpretation of Three-dimensional Seismic Data. Tulsa: Society of Exploration Geophysicists.
Jenyon, M. K. and Fitch, A. A. (1985). Seismic Reflection Interpretation. Berlin: Gebruder Borntraeger.
Jones, T. A., Hamilton, D. E. and Johnson, C. R. (1986). Contouring Geologic Surfaces with the Computer. New York: Van Nostrand Reinhold.
Kleyn, A. H. (1983). Seismic Reflection Interpretation. London: Applied Science Publishers.
Tearpock, D. J. and Bischke, R. (1991). Applied Subsurface Geological Mapping. New York: Prentice Hall.
Tearpock, D. J. and Bischke, R. (2002). Applied Subsurface Geological Mapping with Structural Methods. New York: Prentice Hall.
Yilmaz, O. (2001). Seismic Data Analysis: Processing, Inversion and Interpretation of Seismic Data. Tulsa: Society of Exploration Geophysicists.
Ray Migration and Demigration as a Tool in Handling Misties
(Acknowledgment: This appendix is derived from an outline due to W. Goudswaard, used in his recommended EAEG course "Seismic Interpretation.") In Figure 1, the full lines show the basic geometry of the migration process for a constant velocity V; the reflection "point" is displaced horizontally by a distance h, and its time changes from t to t'.
Figure 1
The short-dashed lines show the corresponding geometry for a source position removed from the original source position by a small distance Δx; the corresponding time difference Δt allows us to define the time dip Δt/Δx = m before migration. The corresponding time dip after migration is Δt'/Δx' = m'. These quantities are related to the true dip angle θ by: sin θ = Vm/2; tan θ = Vm'/2. Also, in the shaded triangle, sin θ = 2h/Vt = Vm/2; tan θ = 2h/Vt' = Vm'/2. Hence, h may be obtained from the section either before or after migration: h = V²tm/4 = V²t'm'/4. Similarly, we have: cos θ = t'/t = √(1 − V²m²/4), and sec θ = t/t' = √(1 + V²m'²/4). On unmigrated data, then, we know V, t and m; hence h = V²tm/4 and t' = t√(1 − V²m²/4). We can calculate what the mistie will be when dip and strike lines are migrated. On migrated data, we know V, t' and m'; hence h = V²t'm'/4 and t = t'√(1 + V²m'²/4). We can "demigrate" the data, and satisfy ourselves that a mistie between dip and strike lines is removed thereby. The equations also give us the amount by which a strike line must be repositioned updip (h), for track-line construction, and the modified time value (t') that must be used in contouring the migrated data.
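As a quick numerical check of these relations, the before-migration and after-migration expressions for h should agree, and demigration should recover the original time (illustrative values of our own):

import math

V, t, m = 3.0, 2.0, 0.2             # km/s, s, s/km on the unmigrated section

theta = math.asin(V * m / 2.0)      # true dip angle: sin(theta) = Vm/2
t_mig = t * math.cos(theta)         # t' = t cos(theta)
m_mig = 2.0 * math.tan(theta) / V   # tan(theta) = Vm'/2

h_before = V * V * t * m / 4.0           # h from unmigrated data
h_after = V * V * t_mig * m_mig / 4.0    # h from migrated data
print(h_before, h_after)                 # equal: both give the same h

# Demigration: recover t from the migrated quantities via the sec relation.
print(t, t_mig * math.sqrt(1.0 + (V * m_mig / 2.0) ** 2))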