
Chapter 3 Scan Conversion

Many pictures, from 2D drawings to projected views of 3D objects, consist of graphical


primitives such as points, lines, circles, and filled polygons. These picture components are often
defined in a continuous space at a higher level of abstraction than individual pixels in the discrete
image space. For instance, a line is defined by its two endpoints and the line equation, whereas a
circle is defined by its radius, center position, and the circle equation. It is the responsibility of
the graphics system or the application program to convert each primitive from its geometric
definition into a set of pixels that make up the primitive in the image space. This conversion task
is generally referred to as scan conversion or rasterization.
The focus of this chapter is on the mathematical and algorithmic aspects of scan
conversion. We discuss ways to handle several commonly encountered primitives, including
points, lines, circles, ellipses, characters, and filled regions, in an efficient and effective manner.
We also discuss techniques that help to smooth out the discrepancies between the original
element and its discrete approximation. The implementation of these algorithms and
mathematical solutions (and many others in subsequent chapters) varies from one system to
another and can be in the form of various combinations of hardware, firmware, and software.

3.1 Scan-Converting a Point


A mathematical point (x, y), where x and y are real numbers within an image area, needs to
be scan-converted to a pixel at location (x', y'). This may be done by making x' the integer
part of x and y' the integer part of y; in other words, x' = Floor(x) and y' = Floor(y), where the
function Floor returns the largest integer that is less than or equal to its argument. Doing so in essence

places the origin of a continuous coordinate system for (x, y) at the lower left corner of the pixel
grid in the image space [see Fig. 3-1(a)]. All points that satisfy x' ≤ x < x' + 1 and y' ≤ y < y' + 1
are mapped to pixel (x', y'). For example, point P1(1.7, 0.8) is represented by pixel (1, 0). Points
P2(2.2, 1.3) and P3(2.8, 1.9) are both represented by pixel (2, 1).
Another approach is to align the integer values in the coordinate system for (x, y) with the
pixel coordinates [see Fig. 3-1(b)]. Here we can convert (x, y) by making x' = Floor(x + 0.5) and
y' = Floor(y + 0.5). This essentially places the origin of the coordinate system for (x, y) at the
center of pixel (0, 0). All points that satisfy x' − 0.5 ≤ x < x' + 0.5 and y' − 0.5 ≤ y < y' + 0.5 are
mapped to pixel (x', y'). This means that points P1 and P2 are now both represented by pixel (2, 1),
whereas point P3 is represented by pixel (3, 2).
We will assume, in the following sections, that this second approach to coordinate system
alignment is used. That is, all pixels are centered at the integer values of a continuous coordinate
system where the abstract graphical primitives are defined.
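
As a concrete illustration of this rounding convention, here is a minimal C sketch (the function name pointToPixel is our own, not from any particular library):

#include <math.h>

/* Scan-convert a continuous point (x, y) to the pixel whose center is nearest,
   i.e. x' = Floor(x + 0.5), y' = Floor(y + 0.5). */
void pointToPixel(double x, double y, int *px, int *py)
{
    *px = (int) floor(x + 0.5);
    *py = (int) floor(y + 0.5);
}

/* Examples from the text: P1(1.7, 0.8) and P2(2.2, 1.3) both map to pixel (2, 1),
   while P3(2.8, 1.9) maps to pixel (3, 2). */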

3.2 Scan-Converting a Line


A line in computer graphics typically refers to a line segment, which is a portion of a
straight line that extends indefinitely in opposite directions. It is defined by its two endpoints and
the line equation y = mx + b, where m is called the slope and b the y intercept of the line. In Fig.
3-2 the two endpoints are described by P1(x1, y1) and P2(x2, y2). The line equation describes the
coordinates of all the points that lie between the two endpoints.

A note of caution: this slope-intercept equation is not suitable for vertical lines. Horizontal,
vertical, and diagonal (|m| = 1) lines can, and often should, be handled as special cases without
going through the following scan-conversion algorithms. These commonly used lines can be
mapped to the image space in a straightforward fashion for high execution efficiency.

Direct Use of the Line Equation


A simple approach to scan-converting a line is to first scan-convert P1 and P2 to pixel
coordinates (x1, y1) and (x2, y2), respectively; then set m = (y2 - y1)/(x2 - x1) and b = y1 - m·x1.
If |m| ≤ 1, then for every integer value of x between and excluding x1 and x2, calculate the

corresponding value of y using the equation and scan-convert (x, y). If |m| > 1, then for every
integer value of y between and excluding y1 and y2, calculate the corresponding value of x using
the equation and scan-convert (x, y).
While this approach is mathematically sound, it involves floating-point computation
(multiplication and addition) in every step that uses the line equation since m and b are generally
real numbers. The challenge is to find a way to achieve the same goal as quickly as possible.
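
Before turning to an example, here is a minimal C sketch of this direct method for the |m| ≤ 1 case (a sketch only; setPixel is assumed to be supplied by the system, as in the later listings):

#include <math.h>

extern void setPixel(int x, int y);   /* assumed device routine */

/* Direct scan conversion of a line with |m| <= 1 and x1 < x2. */
void lineByEquation(int x1, int y1, int x2, int y2)
{
    double m = (double)(y2 - y1) / (double)(x2 - x1);
    double b = y1 - m * x1;

    setPixel(x1, y1);
    for (int x = x1 + 1; x < x2; x++) {
        double y = m * x + b;                 /* floating-point work at every step */
        setPixel(x, (int) floor(y + 0.5));    /* round to the nearest pixel */
    }
    setPixel(x2, y2);
}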
An example: Draw a line from point (0, 0) to point (8, 4.8).
Solution:
(1) Set up:
    1. Scan-convert the endpoints:  (0, 0) → (0, 0),  (8, 4.8) → (8, 5)
    2. m = (5 - 0) / (8 - 0) = 0.625,  b = 0
    3. Since m < 1:  xi+1 = xi + 1,  yi+1 = m·xi+1 + b = 0.625·xi+1

(2) Data Table ([yi] denotes yi rounded to the nearest integer)

    i    xi    yi       [yi]    (xi, [yi])
    1    1     0.625    1       (1, 1)
    2    2     1.25     1       (2, 1)
    3    3     1.875    2       (3, 2)
    4    4     2.5      3       (4, 3)
    5    5     3.125    3       (5, 3)
    6    6     3.75     4       (6, 4)
    7    7     4.375    4       (7, 4)
    8    8     5        5       (8, 5)

(3) Plot the figure

DDA Algorithm
The digital differential analyzer (DDA) algorithm is an incremental scan-conversion
method. Such an approach is characterized by performing calculations at each step using results
from the preceding step.

Suppose at step i we have calculated (xi, yi) to be a point on the line. Since the next point
(xi+1, yi+1) should satisfy Δy / Δx = m, where Δy = yi+1 - yi and Δx = xi+1 - xi, we have
    yi+1 = yi + mΔx
or
    xi+1 = xi + Δy / m
These formulas are used in the DDA algorithm as follows. When |m| ≤ 1, we start with x =
x1 (assuming that x1 < x2) and y = y1, and set Δx = 1 (i.e., unit increment in the x direction).
The y coordinate of each successive point on the line is calculated using yi+1 = yi + m. When |m|
> 1, we start with x = x1 and y = y1 (assuming that y1 < y2), and set Δy = 1 (i.e., unit increment
in the y direction). The x coordinate of each successive point on the line is calculated using xi+1 =
xi + 1/m. This process continues until x reaches x2 (for the |m| ≤ 1 case) or y reaches y2 (for the
|m| > 1 case) and all points found are scan-converted to pixel coordinates.
The DDA algorithm is faster than the direct use of the line equation since it calculates
points on the line without any floating-point multiplication. However, a floating-point addition is
still needed in determining each successive point. Furthermore, cumulative error due to limited
precision in the floating-point representation may cause calculated points to drift away from
their true position when the line is relatively long.

The DDA algorithm can be stated as follows:


begin
    if abs(x2 - x1) ≥ abs(y2 - y1)
    then length = abs(x2 - x1)
    else length = abs(y2 - y1)
    endif
    Δx = (x2 - x1) / length
    Δy = (y2 - y1) / length
    k = 1;
    x = x1;
    y = y1;
    while (k ≤ length + 1)
        putpixel(Round(x), Round(y))
        k = k + 1;
        x = x + Δx;
        y = y + Δy;
    endwhile
end
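
For reference, here is a minimal C rendering of the pseudocode above (a sketch under the same assumptions; putpixel is supplied by the system):

#include <math.h>
#include <stdlib.h>

extern void putpixel(int x, int y);   /* assumed device routine */

/* DDA scan conversion: one unit step along the major axis per iteration. */
void ddaLine(int x1, int y1, int x2, int y2)
{
    int length = abs(x2 - x1) >= abs(y2 - y1) ? abs(x2 - x1) : abs(y2 - y1);
    double dx, dy, x = x1, y = y1;

    if (length == 0) { putpixel(x1, y1); return; }   /* degenerate case */
    dx = (double)(x2 - x1) / length;                 /* increment per step in x */
    dy = (double)(y2 - y1) / length;                 /* increment per step in y */

    for (int k = 1; k <= length + 1; k++) {
        putpixel((int) floor(x + 0.5), (int) floor(y + 0.5));
        x += dx;                                     /* floating-point addition only */
        y += dy;
    }
}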


An example: Draw a line from point (0, 0) to point (4, 7) with the DDA algorithm.
Solution: (1) Write the DDA algorithm. (omitted)
(2) Operation and data table
(0, 0) → (0, 0)        (4, 7) → (4, 7)
abs(4 - 0) = 4        abs(7 - 0) = 7        length = 7
Δx = 4/7        Δy = 7/7 = 1.0

    k    x       y    putpixel(Round(x), Round(y))
    1    0       0    (0, 0)
    2    4/7     1    (1, 1)
    3    8/7     2    (1, 2)
    4    12/7    3    (2, 3)
    5    16/7    4    (2, 4)
    6    20/7    5    (3, 5)
    7    24/7    6    (3, 6)
    8    4       7    (4, 7)

The first row (k = 1) holds the values set before entering the while loop; the loop exits after k
reaches length + 1 = 8.

(3) Plot the figure

Bresenham's Line Algorithm


Bresenham's line algorithm is a highly efficient incremental method for scan-converting
lines. It produces mathematically accurate results using only integer addition, subtraction, and
multiplication by 2, which can be accomplished by a simple arithmetic shift operation.
The method works as follows. Assume that we want to scan-convert the line shown in Fig.
3-3 where 0 < m < 1. We start with pixel P1(x1, y1), then select subsequent pixels as we work
our way to the right, one pixel position at a time in the horizontal direction towards P2(x2, y2).
Once a pixel is chosen at any step, the next pixel is either the one to its right (which constitutes a

lower bound for the line) or the one to its right and up (which constitutes an upper bound for the
line) due to the limit on m. The line is best approximated by those pixels that fall the least
distance from its true path between P1 and P2.

Using the notation of Fig. 3-3, the coordinates of the last chosen pixel upon entering step i
are (xi, yi). Our task is to choose the next one between the bottom pixel S and the top pixel T. If S
is chosen, we have xi+1 = xi +1 and yi+1 = yi. If T is chosen, we have xi+1 = xi +1 and yi+1 = yi +1.
The actual y coordinate of the line at x = xi+1 is y = mxi+1 + b = m(xi + 1) + b. The distance from S
to the actual line in the y direction is s = y - yi. The distance from T to the actual line in the y
direction is t = (yi + 1) - y.
Now consider the difference between these two distance values: s - t. When s - t is less
than zero, we have s < t and the closest pixel is S. Conversely, when s - t is greater than zero, we
have s > t and the closest pixel is T. We also choose T when s - t is equal to zero. This difference
is
s − t = (y − yi) − [(yi + 1) − y]
      = 2y − 2yi − 1 = 2m(xi + 1) + 2b − 2yi − 1
Substituting m by Δy/Δx and introducing a decision variable di = Δx(s − t), which has the same
sign as (s − t) since Δx is positive in our case, we have
    di = 2Δy·xi − 2Δx·yi + C,  where C = 2Δy + Δx(2b − 1)
Similarly, we can write the decision variable di+1 for the next step as
    di+1 = 2Δy·xi+1 − 2Δx·yi+1 + C
Then

    di+1 − di = 2Δy(xi+1 − xi) − 2Δx(yi+1 − yi)


Since xi+1 = xi + 1, we have
    di+1 = di + 2Δy − 2Δx(yi+1 − yi)
If the chosen pixel is the top pixel T (meaning that di ≥ 0) then yi+1 = yi + 1 and so
    di+1 = di + 2(Δy − Δx)
On the other hand, if the chosen pixel is the bottom pixel S (meaning that di < 0) then yi+1 =
yi and so
    di+1 = di + 2Δy
Hence we have
    di+1 = di + 2(Δy − Δx)    if di ≥ 0
    di+1 = di + 2Δy           if di < 0
Finally, we calculate d1, the base case value for this recursive formula, from the original
definition of the decision variable di:
    d1 = Δx[2m(x1 + 1) + 2b − 2y1 − 1] = Δx[2(mx1 + b − y1) + 2m − 1]
Since mx1 + b − y1 = 0 and Δx·m = Δy, we have
    d1 = 2Δy − Δx
In summary, Bresenham's algorithm for scan-converting a line from P1(x1, y1) to P2(x2,
y2), with x1 < x2 and 0 < m < 1, can be stated as follows:


int x = x1, y = y1;
int dx = x2 - x1, dy = y2 - y1, dT = 2 * (dy - dx), dS = 2 * dy;
int d = 2 * dy - dx;
setPixel(x, y);
while (x < x2) {
x++;
if (d < 0)
d = d + dS;
else {
y++;
d = d + dT;
}
setPixel(x, y);
}

Here we first initialize decision variable d and set pixel P1. During each iteration of the
while loop, we increment x to the next horizontal position, then use the current value of d to
select the bottom or top (increment y) pixel and update d, and at the end set the chosen pixel.
As for lines that have other m values, we can make use of the fact that they can be mirrored
either horizontally, vertically, or diagonally into this 0° to 45° angle range. For example, a line
from (x1, y1) to (x2, y2) with -1 < m < 0 has a horizontally mirrored counterpart from (x1, -y1)
to (x2, -y2) with 0 < m < 1. We can simply use the algorithm to scan-convert this counterpart,
but negate the y coordinate at the end of each iteration to set the right pixel for the line. For a
line whose slope is in the 45° to 90° range, we can obtain its mirrored counterpart by exchanging
the x and y coordinates of its endpoints. We can then scan-convert this counterpart, but we must
exchange x and y in the call to setPixel.
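
As an illustration of the first mirroring trick, a hedged C sketch for the -1 < m < 0 case (it simply inlines the loop above, working on the mirrored line and negating y when plotting; setPixel as before):

extern void setPixel(int x, int y);

/* Line from (x1, y1) to (x2, y2) with -1 < m < 0 and x1 < x2: run Bresenham on the
   horizontally mirrored line (x1, -y1)-(x2, -y2), whose slope lies in (0, 1), and
   negate the y coordinate again when each pixel is finally set. */
void bresenhamLineNegSlope(int x1, int y1, int x2, int y2)
{
    int y = -y1;                          /* y on the mirrored line */
    int dx = x2 - x1, dy = y1 - y2;       /* dy of the mirrored line, positive here */
    int dT = 2 * (dy - dx), dS = 2 * dy;
    int d = 2 * dy - dx;

    setPixel(x1, y1);
    for (int x = x1 + 1; x <= x2; x++) {
        if (d < 0)
            d = d + dS;
        else {
            y++;
            d = d + dT;
        }
        setPixel(x, -y);                  /* mirror back before plotting */
    }
}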

An example: Draw a line from the Point P1(0, 0) to Point P2(10, 6) with the Bresenham
Algorithm.
Solution: (1) Write the Bresenham algorithm. (omitted)
(2) Data Table.
x=0, y=0, dx=10-0=10, dy=6-0=6, dT=2*(6-10)=-8, dS=2*6=12, d=2*6-10=2, setPixel(0,0)
Inside the while loop, which runs while x < x2 = 10 (the second column shows the value of d
tested at each step, and the last column the value after the update):

    x     d (tested)    setPixel (x, y)    d (updated)
    1     2             (1, 1)             2 + dT = -6
    2     -6            (2, 1)             -6 + dS = 6
    3     6             (3, 2)             6 + dT = -2
    4     -2            (4, 2)             -2 + dS = 10
    5     10            (5, 3)             10 + dT = 2
    6     2             (6, 4)             2 + dT = -6
    7     -6            (7, 4)             -6 + dS = 6
    8     6             (8, 5)             6 + dT = -2
    9     -2            (9, 5)             -2 + dS = 10
    10    10            (10, 6)            10 + dT = 2

The loop exits when x reaches x2 = 10.

(3) Plot the figure.



3.3 Scan-Converting a Circle


A circle is a symmetrical figure. Any circle-generating algorithm can take advantage of the
circle's symmetry to plot eight points for each value that the algorithm calculates. Eight-way
symmetry is used by reflecting each calculated point around each 45° axis. For example, if point
1 in Fig. 3-4 were calculated with a circle algorithm, seven more points could be found by
reflection. The reflection is accomplished by reversing the x, y coordinates as in point 2,
reversing the x, y coordinates and reflecting about the y axis as in point 3, reflecting about the y
axis as in point 4, switching the signs of x and y as in point 5, reversing the x, y coordinates and
reflecting about both the y axis and the x axis as in point 6, reversing the x, y coordinates and
reflecting about the x axis as in point 7, and reflecting about the x axis as in point 8.
To summarize:
    P1 = (x, y)      P2 = (y, x)      P3 = (-y, x)     P4 = (-x, y)
    P5 = (-x, -y)    P6 = (-y, -x)    P7 = (y, -x)     P8 = (x, -y)
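
A small C sketch of this eight-way symmetry (the name circlePoints is ours; (xc, yc) is the circle center, which the text takes to be the origin, and setPixel is assumed as before):

extern void setPixel(int x, int y);

/* Plot the eight symmetric points corresponding to one computed point (x, y)
   of a circle centered at (xc, yc). */
void circlePoints(int xc, int yc, int x, int y)
{
    setPixel(xc + x, yc + y);    /* P1 = ( x,  y) */
    setPixel(xc + y, yc + x);    /* P2 = ( y,  x) */
    setPixel(xc - y, yc + x);    /* P3 = (-y,  x) */
    setPixel(xc - x, yc + y);    /* P4 = (-x,  y) */
    setPixel(xc - x, yc - y);    /* P5 = (-x, -y) */
    setPixel(xc - y, yc - x);    /* P6 = (-y, -x) */
    setPixel(xc + y, yc - x);    /* P7 = ( y, -x) */
    setPixel(xc + x, yc - y);    /* P8 = ( x, -y) */
}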

Defining a Circle
There are two standard methods of mathematically defining a circle centered at the origin.
The first method defines a circle with the second-order polynomial equation (see Fig. 3-5)
    y² = r² − x²
where x = the x coordinate
y = the y coordinate
r = the circle radius

With this method, each x coordinate in the sector, from 90° to 45°, is found by stepping x
from 0 to r/√2, and each y coordinate is found by evaluating √(r² − x²) for each step of x. This is
a very inefficient method, however, because for each point both x and r must be squared and
subtracted from each other; then the square root of the result must be found.
The second method of defining a circle makes use of trigonometric functions (see Fig. 3-6):
    x = r cos θ        y = r sin θ
where θ = current angle
      r = circle radius
      x = x coordinate
      y = y coordinate
By this method, θ is stepped from 0 to π/4, and each value of x and y is calculated. However,
computation of the values of sin θ and cos θ is even more time-consuming than the calculations
required by the first method.
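
To make the cost of these two formulations visible, here is a brief C sketch (the names and the rounding helper are our assumptions; each routine generates a single 45° octant of a circle centered at the origin, the rest following from the eight-way symmetry above):

#include <math.h>

#define PI 3.14159265358979

extern void setPixel(int x, int y);

static int nearest(double v) { return (int) floor(v + 0.5); }

/* Method 1: y = sqrt(r*r - x*x), stepping x by one pixel from 0 to r/sqrt(2). */
void circleByPolynomial(int r)
{
    for (double x = 0.0; x <= r / sqrt(2.0); x += 1.0)
        setPixel(nearest(x), nearest(sqrt((double) r * r - x * x)));
}

/* Method 2: x = r cos(theta), y = r sin(theta), stepping theta from 0 to PI/4. */
void circleByTrig(int r)
{
    double dTheta;
    if (r <= 0) return;
    dTheta = 1.0 / r;                     /* roughly one pixel of arc per step */
    for (double theta = 0.0; theta <= PI / 4.0; theta += dTheta)
        setPixel(nearest(r * cos(theta)), nearest(r * sin(theta)));
}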


Bresenham's Circle Algorithm


If a circle is to be plotted efficiently, the use of trigonometric and power functions must be
avoided. And, as with the generation of a straight line, it is also desirable to perform the
calculations necessary to find the scan-converted points with only integer addition, subtraction,
and multiplication by powers of 2. Bresenham's circle algorithm allows these goals to be met.
Scan-converting a circle using Bresenham's algorithm works as follows. If the eight-way
symmetry of a circle is used to generate a circle, points will only have to be generated through a
45° angle. And, if points are generated from 90° to 45°, moves will be made only in the +x and -y
directions (see Fig. 3-7).


The best approximation of the true circle will be described by those pixels in the raster that
fall the least distance from the true circle. Examine Fig. 3-8. Notice that, if points are generated
from 90° to 45°, each new point closest to the true circle can be found by taking either of two
actions: (1) move in the x direction one unit or (2) move in the x direction one unit and move in
the negative y direction one unit. Therefore, a method of selecting between these two choices is
all that is necessary to find the points closest to the true circle.
Assume that (xi, yi) are the coordinates of the last scan-converted pixel upon entering step
i (see Fig. 3-8). Let the distance from the origin to pixel T squared minus the distance to the true
circle squared = D(T). Then let the distance from the origin to pixel S squared minus the distance
to the true circle squared = D(S). As the coordinates of T are (xi + 1, yi) and those of S are (xi + 1,
yi - 1), the following expressions can be developed:
    D(T) = (xi + 1)² + yi² − r²
    D(S) = (xi + 1)² + (yi − 1)² − r²

This function D provides a relative measurement of the distance from the center of a pixel
to the true circle. Since D(T) will always be positive (T is outside the true circle) and D(S) will
always be negative (S is inside the true circle), a decision variable di may be defined as follows:
di = D(T) + D(S)
Therefore
    di = 2(xi + 1)² + yi² + (yi − 1)² − 2r²

When di < 0, we have |D(T)| < |D(S)| and pixel T is chosen. When di ≥ 0, we have |D(T)| ≥
|D(S)| and pixel S is selected. We can also write the decision variable di+1 for the next step:
    di+1 = 2(xi+1 + 1)² + yi+1² + (yi+1 − 1)² − 2r²

Hence
    di+1 − di = 2(xi+1 + 1)² + yi+1² + (yi+1 − 1)² − 2(xi + 1)² − yi² − (yi − 1)²

Since xi+1 = xi + 1, we have
    di+1 = di + 4xi + 2(yi+1² − yi²) − 2(yi+1 − yi) + 6

If T is the chosen pixel (meaning that di < 0) then yi+1 = yi and so
    di+1 = di + 4xi + 6

On the other hand, if S is the chosen pixel (meaning that di ≥ 0) then yi+1 = yi − 1 and so
    di+1 = di + 4(xi − yi) + 10

Hence we have


    di+1 = di + 4xi + 6              if di < 0
    di+1 = di + 4(xi − yi) + 10      if di ≥ 0

Finally, we set (0, r) to be the starting pixel coordinates and compute the base case value d1
for this recursive formula from the original definition of di:

    d1 = 2(0 + 1)² + r² + (r − 1)² − 2r² = 3 − 2r
We can now summarize the algorithm for generating all the pixel coordinates in the 90° to
45° octant that are needed when scan-converting a circle of radius r:

int x = 0, y = r, d = 3 - 2 * r;
while (x <= y) {
    setPixel(x, y);
    if (d < 0)
        d = d + 4 * x + 6;
    else {
        d = d + 4 * (x - y) + 10;
        y--;
    }
    x++;
}

Note that during each iteration of the while loop we first set a pixel whose position has
already been determined, starting with (0, r). We then test the current value of decision variable
d in order to update d and determine the proper y coordinate of the next pixel. Finally we
increment x.
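
Combined with the eight-way symmetry routine sketched earlier, the same loop draws the whole circle; a short sketch (circlePoints as assumed above, (xc, yc) the circle center):

extern void circlePoints(int xc, int yc, int x, int y);   /* eight-way symmetry sketch above */

/* Bresenham circle of radius r centered at (xc, yc): generate the 90° to 45° octant
   and let circlePoints mirror each point into the other seven octants. */
void bresenhamCircle(int xc, int yc, int r)
{
    int x = 0, y = r, d = 3 - 2 * r;
    while (x <= y) {
        circlePoints(xc, yc, x, y);
        if (d < 0)
            d = d + 4 * x + 6;
        else {
            d = d + 4 * (x - y) + 10;
            y--;
        }
        x++;
    }
}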
An example: Indicate which raster locations would be chosen by Bresenham's algorithm when
scan-converting a circle centered at the origin with a radius of 8 pixels (from 90° to 45°).
Solution: (1) Write the algorithm. (omitted).
(2) Data Table.
x = 0, y = 8, d = 3-2*8 = -13
Inside the while loop, which runs while x ≤ y:

    setPixel (x, y)    d (tested)    d (updated)                y     x (next)
    (0, 8)             -13           -13 + 4·0 + 6 = -7         8     1
    (1, 8)             -7            -7 + 4·1 + 6 = 3           8     2
    (2, 8)             3             3 + 4·(2 - 8) + 10 = -11   7     3
    (3, 7)             -11           -11 + 4·3 + 6 = 7          7     4
    (4, 7)             7             7 + 4·(4 - 7) + 10 = 5     6     5
    (5, 6)             5             5 + 4·(5 - 6) + 10 = 11    5     6

The loop then exits, since x = 6 > y = 5.

(3) Plot the figure.

3.4 Attributes of Primitives


In general, any parameter that affects the way a primitive is to be displayed is referred to as
an attribute parameter. Some attribute parameters, such as color and size, determine the
fundamental characteristics of a primitive. Others specify how the primitive is to be displayed
under special conditions. Examples of attributes in this class include depth information for
three-dimensional viewing and visibility or detectability options for interactive object-selection
programs. These special-condition attributes will be considered in later chapters. Here, we
consider only those attributes that control the basic display properties of primitives, without
regard for special situations. For example, lines can be dotted or dashed, fat or thin, and blue or
orange. Areas might be filled with one color or with a multicolor pattern. Text can appear
reading from left to right, slanted diagonally across the screen, or in vertical columns. Individual
characters can be displayed in different fonts, colors, and sizes. And we can apply intensity
variations at the edges of objects to smooth out the raster stairstep effect.
One way to incorporate attribute options into a graphics package is to extend the parameter
list associated with each output primitive function to include the appropriate attributes. A
line-drawing function, for example, could contain parameters to set color, width, and other
properties, in addition to endpoint coordinates. Another approach is to maintain a system list of
current attribute values. Separate functions are then included in the graphics package for setting
the current values in the attribute list. To generate an output primitive, the system checks the
relevant attributes and invokes the display routine for that primitive using the current attribute

settings. Some packages provide users with a combination of attribute functions and attribute
parameters in the output primitive commands. With the GKS and PHIGS standards, attribute
settings are accomplished with separate functions that update a system attribute list.
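
The system list of current attribute values mentioned above can be pictured with a small C sketch (the struct, names, and default values are our illustration, not the actual GKS/PHIGS data structures):

/* A sketch of a system list of current attribute values (illustrative only). */
typedef struct {
    int   lineType;    /* 1 = solid, 2 = dashed, 3 = dotted, 4 = dash-dotted */
    float lineWidth;   /* scale factor relative to the standard line width */
    int   lineColor;   /* index into the system color table */
} CurrentAttributes;

static CurrentAttributes attr = { 1, 1.0f, 1 };   /* assumed defaults */

void setLinetype(int lt)               { attr.lineType  = lt; }
void setLinewidthScaleFactor(float lw) { attr.lineWidth = lw; }
void setPolylineColourIndex(int lc)    { attr.lineColor = lc; }

/* A polyline routine would consult attr each time it scan-converts its segments. */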
Basic attributes of a straight line segment are its type, its width, and its color. In some
graphics packages, lines can also be displayed using selected pen or brush options. In the
following sections, we consider how line-drawing routines can be modified to accommodate
various attribute specifications.

3.4.1 Line Type


Possible selections for the line-type attribute include solid lines, dashed lines, and dotted
lines. We modify a line-drawing algorithm to generate such lines by setting the length and
spacing of displayed solid sections along the line path. A dashed line could be displayed by
generating an interdash spacing that is equal to 1/4 the length of the solid sections. Both the
length of the dashes and the interdash spacing are often specified as user options. A dotted line
can be displayed by generating very short dashes with the spacing equal to or greater than the
dash size. Similar methods are used to produce other line-type variations.
To set line type attributes in a C language application program, a user invokes the function
setLinetype (lt)
where parameter lt is assigned a positive integer value of 1, 2, 3, or 4 to generate lines that are,
respectively, solid, dashed, dotted, or dash-dotted. Other values for the line-type parameter lt
could be used to display variations in the dot-dash patterns. Once the line-type parameter has
been set in a PHIGS application program, all subsequent line-drawing commands produce lines
with this line type.
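
One simple way for a line-drawing routine to honor the line-type setting is to route every generated pixel through a small on/off pattern check; a hedged sketch (the mask layout and names are our assumptions):

extern void setPixel(int x, int y);

/* A crude dash generator: an on/off mask consulted once per pixel generated along
   the line path. Eight pixels on followed by two off gives an interdash spacing
   equal to 1/4 the length of the solid sections, as described above. */
static const int dashMask[] = { 1, 1, 1, 1, 1, 1, 1, 1, 0, 0 };
static int dashPos = 0;

void setPixelDashed(int x, int y)
{
    if (dashMask[dashPos])
        setPixel(x, y);
    dashPos = (dashPos + 1) % (int)(sizeof dashMask / sizeof dashMask[0]);
}

/* A scan-conversion routine would call setPixelDashed in place of setPixel,
   resetting dashPos at the start of each new polyline. */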

3.4.2 Line Width


Implementation of line-width options depends on the capabilities of the output device. A
heavy line on a video monitor could be displayed as adjacent parallel lines, while a pen plotter
might require pen changes. As with other PHIGS attributes, a line-width command is used to set
the current line-width value in the attribute list. This value is then used by line-drawing
algorithms to control the thickness of lines generated with subsequent output primitive
commands.
We set the line-width attribute with the command:
setLinewidthScaleFactor(lw)
Line-width parameter lw is assigned a positive number to indicate the relative width of the line
to be displayed. A value of 1 specifies a standard-width line. On a pen plotter, for instance, a
user could set lw to a value of 0.5 to plot a line whose width is half that of the standard line.
Values greater than 1 produce lines thicker than the standard.

3.4.3 Line Color


When a system provides color (or intensity) options, a parameter giving the current color
index is included in the list of system-attribute values. A polyline routine displays a line in the
current color by setting this color value in the frame buffer at pixel locations along the line path
using the setpixel procedure. The number of color choices depends on the number of bits
available per pixel in the frame buffer.
We set the line color value in PHIGS with the function
setPolylineColourIndex(lc)
Nonnegative integer values, corresponding to allowed color choices, are assigned to the line
color parameter lc. A line drawn in the background color is invisible, and a user can erase a
previously displayed line by respecifying it in the background color (assuming the line does not
overlap more than one background color area).
An example of the use of the various line attribute commands in an applications program is
given by the following sequence of statements:
setLinetype(2);
setLinewidthScaleFactor(2);
setPolylineColourIndex(5);
polyline(n1, wcpoints1);
setPolylineColourIndex(6);
polyline(n2, wcpoints2);
This program segment would display two figures, drawn with double-wide dashed lines.
The first is displayed in a color corresponding to code 5, and the second in color 6.


3.5 Region Filling


Region filling is the process of coloring in a definite image area or region. Regions may
be defined at the pixel or geometric level. At the pixel level, we describe a region either in terms
of the bounding pixels that outline it or as the totality of pixels that comprise it (see Fig. 3-9). In
the first case the region is called boundary-defined and the collection of algorithms used for
filling such a region are collectively called boundary-fill algorithms. The other type of region is
called an interior-defined region and the accompanying algorithms are called flood-fill
algorithms. At the geometric level a region is defined or enclosed by such abstract contouring
elements as connected lines and curves. For example, a polygonal region, or a filled polygon, is
defined by a closed polyline, which is a polyline (i.e., a series of sequentially connected lines)
that has the end of the last line connected to the beginning of the first line.

4-Connected vs. 8-Connected


An interesting point here is that, while a geometrically defined contour clearly separates the
interior of a region from the exterior, ambiguity may arise when an outline consists of discrete
pixels in the image space. There are two ways in which pixels are considered connected to each
other to form a "continuous" boundary. One method is called 4-connected, where a pixel may
have up to four neighbors [see Fig. 3-10(a)]; the other is called 8-connected, where a pixel may
have up to eight neighbors [see Fig. 3-10(b)]. Using the 4-connected approach, the pixels in Fig.
3-10(c) do not define a region since several pixels, such as A and B, are not connected. However,
using the 8-connected definition we identify a triangular region.

We can further apply the concept of connected pixels to decide if a region is connected to
another region. For example, using the 8-connected approach, we do not have an enclosed region
in Fig. 3-10(c) since "interior" pixel C is connected to "exterior" pixel D. On the other hand, if
we use the 4-connected definition we have a triangular region since no interior pixel is
connected to the outside.
Note that it is not a mere coincidence that the figure in Fig. 3-10(c) is a boundary-defined

region when we use the 8-connected definition for the boundary pixels and the 4-connected
definition for the interior pixels. In fact, using the same definition for both boundary and interior
pixels would simply result in contradiction. For example, if we use the 8-connected approach we
would have pixel A connected to pixel B (continuous boundary) and at the same time pixel C
connected to pixel D (discontinuous boundary). On the other hand, if we use the 4-connected
definition we would have pixel A disconnected from pixel B (discontinuous boundary) and at the
same time pixel C disconnected from pixel D (continuous boundary).

Figure 3-11
Example color boundaries for a boundary-fill procedure.

Boundary-Fill Algorithm
Another approach to area filling is to start at a point inside a region and paint the interior
outward toward the boundary. If the boundary is specified in a single color, the fill algorithm
proceeds outward pixel by pixel until the boundary color is encountered. This method, called the
boundary-fill algorithm, is particularly useful in interactive painting packages, where interior
points are easily selected. Using a graphics tablet or other interactive device, an artist or designer
can sketch a figure outline, select a fill color or pattern from a color menu, and pick an interior
point. The system then paints the figure interior. To display a solid color region (with no border),
the designer can choose the fill color to be the same as the boundary color.
A boundary-fill procedure accepts as input the coordinates of an interior point (x, y), a fill
color, and a boundary color. Starting from (x, y), the procedure tests neighboring positions to
determine whether they are of the boundary color. If not, they are painted with the fill color, and
their neighbors are tested. This process continues until all pixels up to the boundary color for the
area have been tested. Both inner and outer boundaries can be set up to specify an area, and
some examples of defining regions for boundary fill are shown in Fig. 3-11.
Figure 3-12 shows two methods for proceeding to neighboring pixels from the current test
position. In Fig. 3-12(a), four neighboring points are tested. These are the pixel positions that are
right, left, above, and below the current pixel. Areas filled by this method are called 4-connected.
The second method, shown in Fig. 3-12(b), is used to fill more complex figures. Here the set of

neighboring positions to be tested includes the four diagonal pixels. Fill methods using this
approach are called 8-connected. An 8-connected boundary-fill algorithm would correctly fill the
interior of the area defined in Fig. 3-13, but a 4-connected boundary-fill algorithm produces the
partial fill shown.

Figure 3-12
Fill methods applied to a 4-connected area (a) and to an 8-connected area (b). Open circles
represent pixels to be tested from the current test position, shown as a solid color.

The following procedure illustrates a recursive method for filling a 4-connected area with
an intensity specified in parameter fill up to a boundary color specified with parameter boundary.
We can extend this procedure to fill an 8-connected region by including four additional
statements to test diagonal positions, such as (x+1, y+1).

void boundaryFill4 (int x, int y, int fill, int boundary)


{
int current;
current = getpixel (x, y);
if ((current != boundary) && (current != fill))
{
setcolor (fill);
setpixel (x, y);
boundaryFill4 (x+1, y, fill, boundary);
boundaryFill4 (x-1, y, fill, boundary);
boundaryFill4 (x, y+1, fill, boundary);
boundaryFill4 (x, y-1, fill, boundary);
}
}
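
For completeness, a sketch of the 8-connected extension mentioned above (the same procedure with the four diagonal neighbors added; getpixel, setcolor, and setpixel are the routines assumed by the listing above):

void boundaryFill8 (int x, int y, int fill, int boundary)
{
    int current;
    current = getpixel (x, y);
    if ((current != boundary) && (current != fill))
    {
        setcolor (fill);
        setpixel (x, y);
        boundaryFill8 (x+1, y, fill, boundary);
        boundaryFill8 (x-1, y, fill, boundary);
        boundaryFill8 (x, y+1, fill, boundary);
        boundaryFill8 (x, y-1, fill, boundary);
        boundaryFill8 (x+1, y+1, fill, boundary);   /* the four diagonal neighbors */
        boundaryFill8 (x+1, y-1, fill, boundary);
        boundaryFill8 (x-1, y+1, fill, boundary);
        boundaryFill8 (x-1, y-1, fill, boundary);
    }
}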

Recursive boundary-fill algorithms may not fill regions correctly if some interior pixels are
already displayed in the fill color. This occurs because the algorithm checks next pixels both for
boundary color and for fill color. Encountering a pixel with the fill color can cause a recursive
branch to terminate, leaving other interior pixels unfilled. To avoid this, we can first change the
color of any interior pixels that are initially set to the fill color before applying the boundary-fill
procedure.
Also, since this procedure requires considerable stacking of neighboring points, more
efficient methods are generally employed. These methods fill horizontal pixel spans across scan
lines, instead of proceeding to 4-connected or 8-connected neighboring points. Then we need
only stack a beginning position for each horizontal pixel span, instead of stacking all
unprocessed neighboring positions around the current position. Starting from the initial interior
point with this method, we first fill in the contiguous span of pixels on this starting scan line.
Then we locate and stack starting positions for spans on the adjacent scan lines, where spans are
defined as the contiguous horizontal string of positions bounded by pixels displayed in the area
border color. At each subsequent step, we unstack the next start position and repeat the process.
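
A hedged C sketch of this span-based approach (the frame-buffer array, its dimensions, and the explicit stack are our own stand-ins for the getpixel/setpixel environment of the earlier listings, and the processing order differs slightly from the one traced in Fig. 3-14):

#define FB_W 640
#define FB_H 480

static int frame[FB_H][FB_W];                       /* hypothetical frame buffer */
static int stackX[FB_W * FB_H], stackY[FB_W * FB_H], sp = 0;

static void push(int x, int y) { stackX[sp] = x; stackY[sp] = y; sp++; }

/* Span-based boundary fill of a 4-connected interior: stack one seed per
   horizontal span instead of recursing on every neighboring pixel. */
void boundaryFillSpans(int seedX, int seedY, int fill, int boundary)
{
    push(seedX, seedY);
    while (sp > 0) {
        int x, xl, xr, y;
        sp--;
        x = stackX[sp];
        y = stackY[sp];
        if (frame[y][x] == boundary || frame[y][x] == fill)
            continue;                               /* stale seed, already handled */
        xl = x;                                     /* expand the span to the left */
        while (xl > 0 && frame[y][xl - 1] != boundary && frame[y][xl - 1] != fill)
            xl--;
        xr = x;                                     /* expand the span to the right */
        while (xr < FB_W - 1 && frame[y][xr + 1] != boundary && frame[y][xr + 1] != fill)
            xr++;
        for (x = xl; x <= xr; x++)
            frame[y][x] = fill;                     /* paint the whole span at once */
        /* Stack one starting position per open run on the rows above and below. */
        for (int dy = -1; dy <= 1; dy += 2) {
            int ny = y + dy, inRun = 0;
            if (ny < 0 || ny >= FB_H)
                continue;
            for (x = xl; x <= xr; x++) {
                int open = (frame[ny][x] != boundary && frame[ny][x] != fill);
                if (open && !inRun)
                    push(x, ny);
                inRun = open;
            }
        }
    }
}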

Figure 3-13
The area defined within the color boundary (a) is only partially
filled in (b) using a 4-connected boundary-fill algorithm.
An example of how pixel spans could be filled using this approach is illustrated for the
4-connected fill region in Fig. 3-14. In this example, we first process scan lines successively
from the start line to the top boundary. After all upper scan lines are processed, we fill in the
pixel spans on the remaining scan lines in order down to the bottom boundary. The leftmost
pixel position for each horizontal span is located and stacked, in left to right order across
successive scan lines, as shown in Fig. 3-14. In (a) of this figure, the initial span has been filled,
and starting positions 1 and 2 for spans on the next scan lines (below and above) are stacked. In
Fig. 3-14(b), position 2 has been unstacked and processed to produce the filled span shown, and
the starting pixel (position 3) for the single span on the next scan line has been stacked. After
position 3 is processed, the filled spans and stacked positions are as shown in Fig. 3-14(c). And
Fig. 3-14(d) shows the filled pixels after processing all spans in the upper right of the specified

area. Position 5 is next processed, and spans are filled in the upper left of the region; then
position 4 is picked up to continue the processing for the lower scan lines.

Figure 3-14
Boundary fill across pixel spans for a 4-connected area. (a) The filled initial pixel
span, showing the position of the initial point (open circle) and the stacked positions
for pixel spans on adjacent scan lines. (b) Filled pixel span on the first scan line
above the initial scan line and the current contents of the stack. (c) Filled pixel spans
on the first two scan lines above the initial scan line and the current contents of the
stack. (d) Completed pixel spans for the upper-right portion of the defined region and
the remaining stacked positions to be processed.

An example: Write the 4-connected boundary-fill algorithm and give the order in which pixels are filled in
the following figure. The number 1 represents a seed.
Solution: (1) Write the algorithm. (omitted).
(2) Plot the result.


Flood-Fill Algorithm
Sometimes we want to fill in (or recolor) an area that is not defined within a single color
boundary. Figure 3-15 shows an area bordered by several different color regions. We can paint
such areas by replacing a specified interior color instead of searching for a boundary color value.
This approach is called a flood-fill algorithm.

Figure 3-15
An area defined within
multiple color boundaries.

We start from a specified interior point (x, y) and reassign all pixel values that are currently
set to a given interior color with the desired fill color. If the area we want to paint has more than
one interior color, we can first reassign pixel values so that all interior points have the same
color. Using either a 4-connected or 8-connected approach, we then step through pixel positions
until all interior points have been repainted. The following procedure flood fills a 4-connected
region recursively, starting from the input position.

void floodFill4 (int x, int y, int fillColor, int oldColor)
{
    if (getpixel (x, y) == oldColor)
    {
        setcolor (fillColor);
        setpixel (x, y);
        floodFill4 (x+1, y, fillColor, oldColor);
        floodFill4 (x-1, y, fillColor, oldColor);
        floodFill4 (x, y+1, fillColor, oldColor);
        floodFill4 (x, y-1, fillColor, oldColor);
    }
}
We can modify procedure floodFill4 to reduce the storage requirements of the stack by
filling horizontal pixel spans, as discussed for the boundary-fill algorithm. In this approach, we

stack only the beginning positions for those pixel spans having the value oldColor. The steps in
this modified flood-fill algorithm are similar to those illustrated in Fig. 3-14 for a boundary fill.
Starting at the first position of each span, the pixel values are replaced until a value other than
oldColor is encountered.

Some examples, together with their results and the associated data, are shown below:

The data of the face:


int triangle[4][2]={{300,275},{280,325},{320,325},{300,275}};
int fourline[5][2]={{250,340},{300,375},{350,340},{300,350},{250,340}};
dc.Arc(200,200,400,400,300,200,300,200);//head
dc.Ellipse(240,240,270,300);//eyes
dc.Ellipse(330,240,360,300);
dc.Ellipse(250,260,260,290);
dc.Ellipse(340,260,350,290);

dc.MoveTo(300,275);//nose
dc.LineTo(280,325);
dc.LineTo(320,325);
dc.LineTo(300,275);
dc.MoveTo(250,340);//mouth
dc.LineTo(300,375);
dc.LineTo(350,340);
dc.LineTo(300,350);
dc.LineTo(250,340);
seed(300,300,RGB(228,192,118),bv);
seed(255,270,RGB(70,70,70),bv);
seed(345,270,RGB(70,70,70),bv);
seed(300,360,RGB(247,120,98),bv);

The data of the flower:


a[16][2]={{253,70},{260,20},{340,20},{347,70},{410,70},{435,150},{376,180},{410,230},
          {345,280},{300,250},{255,280},{190,230},{224,180},{165,150},{190,70},{253,70}};
b[13][2]={{300,360},{330,320},{370,300},{420,300},{380,370},{330,380},{300,370},{270,350},
          {230,330},{170,340},{200,400},{270,400},{300,380}};
seed(300,150,RGB(255,255,0),RGB(255,255,0));
seed(310,350,RGB(10,200,10),bv);
seed(290,380,RGB(10,200,10),bv);
