Syllabus
S.Y.B.Sc. (IT)
SEMESTER - III, PAPER - II
COMPUTER GRAPHICS
Unit I
Introduction to Computer Graphics and Primitive Algorithms:
Introduction to Image and Objects, Image Representation, Basic
Graphics Pipeline, Bitmap and Vector-Based Graphics,
Applications of Computer Graphics, Display Devices, Cathode Ray
Tubes, Raster-Scan Display, Random-Scan Display, Flat Panel
Display, Input Technology, Coordinate System Overview,
Unit II
Two Dimensional Transformation: Introduction to
transformations, Transformation Matrix, Types of Transformations
in Two-Dimensional Graphics: Identity Transformation, Scaling,
Reflection, Shear Transformations, Rotation, Translation, Rotation
about an Arbitrary Point, Combined Transformation, Homogeneous
Coordinates, 2D Transformations using Homogeneous Coordinates
Unit III
Three-dimensional transformations, Objects in Homogeneous
Coordinates; Three-Dimensional Transformations: Scaling,
Translation, Rotation, Shear Transformations, Reflection, World
Coordinates and Viewing Coordinates, Projection, Parallel
Projection, Perspective Projection.
Unit IV
Viewing and Solid Area Scan-Conversion : Introduction to viewing
and clipping, viewing Transformation in Two Dimensions,
Introduction to Clipping, Two-Dimensional Clipping, Point Clipping,
Line Clipping, Introduction to a Polygon Clipping, Viewing and
Clipping in Three Dimensions, Three-Dimensional Viewing
Transformations, Text Clipping
Unit V
Introduction to curves, Curve Continuity, Conic Curves, Piecewise
Curve Design, Parametric Curve Design, Spline Curve
Representation, Bezier Curves, B-Spline Curves, Fractals and their
applications.
Unit VI
Object Rendering: Introduction to Object Rendering, Light Modeling
Techniques, Illumination Model, Shading, Flat Shading, Polygon
Mesh Shading, Gouraud Shading Model, Phong Shading,
Transparency Effect, Shadows, Texture and Object
Representation, Ray Tracing, Ray Casting, Radiosity, Color
Models.
Books :
Computer Graphics, R. K. Maurya, John Wiley.
Mathematical elements of Computer Graphics, David F. Rogers, J.
Alan Adams, Tata McGraw-Hill.
Procedural elements of Computer Graphics, David F. Rogers, Tata
McGraw-Hill.
Reference:
Computer Graphics, Donald Hearn and M. Pauline Baker, Prentice
Hall of India.
Computer Graphics, Steven Harrington, McGraw-Hill.
Computer Graphics Principles and Practice, J. D. Foley, A. van
Dam, S. K. Feiner and R. L. Phillips, Addison-Wesley.
Principles of Interactive Computer Graphics, William M. Newman,
Robert F. Sproull, Tata McGraw-Hill.
Introduction to Computer Graphics, J. D. Foley, A. van Dam, S. K.
Feiner, J. F. Hughes and R. L. Phillips, Addison-Wesley.
Practical (Suggested) :
Should contain at least 10 programs developed using C++. Some
sample practicals are listed below.
1. Write a program with menu option to input the line coordinates
from the user to generate a line using Bresenham’s method
and DDA algorithm. Compare the lines for their values on the
line.
2. Develop a program to generate a complete circle based on:
a) Bresenham’s circle algorithm
b) Midpoint Circle Algorithm
3. Implement the Bresenham’s / DDA algorithm for drawing line
(programmer is expected to shift the origin to the center of the
screen and divide the screen into required quadrants)
4. Write a program to implement a stretch band effect. (A user
will click on the screen and drag the mouse / arrow keys over
the screen coordinates. The line should be updated like
rubber-band and on the right-click gets fixed).
5. Write a program to perform the following 2D and 3D
transformations on the given input figure:
a) Rotation
b) Reflection
c) Scaling
d) Translation
6. Write a program to demonstrate shear transformation in
different directions on a unit square situated at the origin.
7. Develop a program to clip a line using the Cohen-Sutherland line
clipping algorithm, clipping the line from (X1, Y1) to (X2, Y2)
against a window defined by (Xmin, Ymin) and (Xmax, Ymax).
8. Write a program to implement polygon filling.
9. Write a program to generate 2D/3D fractal figures (Sierpinski
triangle, Cantor set, tree, etc.).
10. Write a program to draw Bezier and B-Spline Curves with
interactive user inputs for control polygon defining the shape
of the curve.
11. Write a program to demonstrate 2D animation such as clock
simulation or rising sun.
12. Write a program to implement the bouncing ball inside a
defined rectangular window.
1
COMPUTER GRAPHICS -
FUNDAMENTALS
Unit Structure
1.0 Objectives
1.1 Introduction
1.2 Introduction to Image and Objects
1.3 Image Representation
1.4 Basic Graphics Pipeline
1.5 Bitmap and Vector-Based Graphics
1.6 Applications of Computer Graphics
1.7 Display Devices
1.7.1 Cathode Ray Tubes
1.7.2 Raster-Scan Display
1.7.3 Random-Scan Display
1.7.4 Flat Panel Display
1.8 Input Technology
1.9 Coordinate System Overview
1.10 Let us sum up
1.11 References and Suggested Reading
1.12 Exercise
1.0 OBJECTIVES
1.1 INTRODUCTION
(Figure: representation of 3D world objects is transformed to
physical device coordinates.)
Bitmap graphics:
It is pixel-based graphics.
The position and color information about the image are
stored in pixels arranged in a grid pattern.
The image size is determined on the basis of the image
resolution.
These images cannot be scaled easily.
Bitmap images are used to represent photorealistic images
which involve complex color variations.
Figure 1.3
(a) An arrow image (b) magnified arrow image with pixel grid
Vector graphics:
The images in vector graphics are mathematically
based images.
Vector-based images have smooth edges and are therefore used to
create curves and shapes.
Figure 1.4
(a) A rose image (b) vector description of leaf of rose
The above figure shows a bitmap and vector image of the letter A.
Plasma display:
1.8.1 Touch Screens
A touch screen device allows a user to operate a touch
sensitive device by simply touching the display screen. The input
can be given by a finger or passive objects like stylus. There are
three components of a touch screen device: a touch sensor, a
controller and a software driver.
1.8.2 Light Pen
A light pen is a pen-shaped pointing device which is connected
to a visual display unit. It has a light-sensitive tip which detects the
light from the screen when placed against it, which enables a
computer to locate the position of the pen on the screen. Users can
point to the image displayed on the screen and also can draw any
object on the screen similar to touch screen with more accuracy.
In the above figure, two points (2, 3) and (3, 2) are
specified in the Cartesian coordinate system.
Answers: 1. Origin
2. x - coordinate, y – coordinate.
1.12 EXERCISE
Unit Structure
2.0 Objectives
2.1 Introduction
2.2 Scan-Conversion of a Line
2.2.1 Digital Differential Analyzer Algorithm
2.2.2 Bresenham's Line-Drawing Algorithm
2.3 Scan-Conversion of Circle and Ellipse
2.3.1 Bresenham's Method of Circle Drawing
2.3.2 Midpoint Circle Algorithm
2.4 Drawing Ellipses and Other Conics
2.5 Let us sum up
2.6 References and Suggested Reading
2.7 Exercise
2.0 OBJECTIVES
2.1 INTRODUCTION
The following figure shows a line drawn between the points (x1, y1) and (x2, y2).
For slope magnitude greater than 1 (|m| > 1), the roles of y and x are
reversed, i.e., y is sampled at unit intervals (∆y = 1) and the
corresponding x values are calculated as xk+1 = xk + 1/m.
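As a rough sketch of the sampling strategy just described, a DDA line generator in C++ might look like the following (a minimal illustration; the name ddaLine and its interface are ours, not from the text):

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// DDA line scan-conversion: step along the major axis at unit
// intervals and round the other coordinate to the nearest pixel.
std::vector<std::pair<int,int>> ddaLine(int x1, int y1, int x2, int y2) {
    std::vector<std::pair<int,int>> pts;
    int dx = x2 - x1, dy = y2 - y1;
    int steps = std::max(std::abs(dx), std::abs(dy));
    if (steps == 0) { pts.push_back({x1, y1}); return pts; }
    double xInc = dx / static_cast<double>(steps);
    double yInc = dy / static_cast<double>(steps);
    double x = x1, y = y1;
    for (int i = 0; i <= steps; ++i) {
        pts.push_back({static_cast<int>(std::lround(x)),
                       static_cast<int>(std::lround(y))});
        x += xInc;  // one of the two increments is always +/-1
        y += yInc;
    }
    return pts;
}
```

When |m| > 1 the roles of x and y swap automatically here, because the major axis is chosen by comparing |dx| and |dy|.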
IV. Perform the following test for each xk, starting at k = 0: if pk <
0, then the next point to plot is (xk+1, yk) and pk+1 = pk + 2∆y;
otherwise the next point is (xk+1, yk+1) and pk+1 = pk + 2∆y − 2∆x.
}
Call Draw_Circle (Xc, Yc, X, Y);
}
}
Draw_Circle (Xc, Yc, X, Y)
{
Call PutPixel (Xc + X, Yc + Y);
Call PutPixel (Xc – X, Yc + Y);
Call PutPixel (Xc + X, Yc – Y);
Call PutPixel (Xc – X, Yc – Y);
Call PutPixel (Xc + Y, Yc + X);
Call PutPixel (Xc – Y, Yc + X);
Call PutPixel (Xc + Y, Yc – X);
Call PutPixel (Xc – Y, Yc – X);
}
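The eight-way symmetric plotting above can be combined with the midpoint decision parameter into a complete C++ routine along these lines (a sketch under the usual midpoint formulation; the names midpointCircle and containsPoint are illustrative, not from the text):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Midpoint circle algorithm: compute one octant and mirror it into
// the other seven, exactly as Draw_Circle does above.
std::vector<std::pair<int,int>> midpointCircle(int xc, int yc, int r) {
    std::vector<std::pair<int,int>> pts;
    auto plot8 = [&](int X, int Y) {
        pts.push_back({xc + X, yc + Y}); pts.push_back({xc - X, yc + Y});
        pts.push_back({xc + X, yc - Y}); pts.push_back({xc - X, yc - Y});
        pts.push_back({xc + Y, yc + X}); pts.push_back({xc - Y, yc + X});
        pts.push_back({xc + Y, yc - X}); pts.push_back({xc - Y, yc - X});
    };
    int x = 0, y = r;
    int p = 1 - r;              // initial decision parameter
    while (x <= y) {
        plot8(x, y);
        if (p < 0) p += 2 * x + 3;          // midpoint inside circle
        else { p += 2 * (x - y) + 5; --y; } // midpoint outside: step y
        ++x;
    }
    return pts;
}

// Small helper used only for checking results.
bool containsPoint(const std::vector<std::pair<int,int>>& v, int x, int y) {
    for (const auto& p : v)
        if (p.first == x && p.second == y) return true;
    return false;
}
```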
Now the DDA algorithm for circle can be applied to draw the ellipse.
Similarly a conic can be defined by the equation
2.7 EXERCISE
3
TWO DIMENSIONAL
TRANSFORMATIONS I
Unit Structure
3.0 Objectives
3.1 Introduction
3.2 Introduction to transformations
3.3 Transformation Matrix
3.4 Types of Transformations in Two-Dimensional Graphics
3.5 Identity Transformation
3.6 Scaling
3.7 Reflection
3.8 Shear Transformations
3.9 Let us sum up
3.10 References and Suggested Reading
3.11 Exercise
3.0 OBJECTIVES
3.1 INTRODUCTION
[x' y'] = [x y] [T] = [x y]

We can see that on applying the identity transformation we
obtain the same points. Here the identity transformation is

[T] = | 1 0 |
      | 0 1 |

The identity transformation matrix is basically an n×n matrix
with ones on the main diagonal and zeros for the other values.
Answers: 1. Itself
2. ones, zeros.
3.6 SCALING
Answers: 1. Size.
2. uniform.
3.7 REFLECTION
Answers: 1. Mirror
2. 180.
producing the transformation
x' = x + shx·y, y' = y
where shx is a shear parameter which can take any real number
value. The figure below demonstrates this transformation.
3.11 EXERCISE
4
TWO DIMENSIONAL
TRANSFORMATIONS II
Unit Structure
4.0 Objectives
4.1 Introduction
4.2 Rotation
4.3 Translation
4.4 Rotation about an Arbitrary Point
4.5 Combined Transformation
4.6 Homogeneous Coordinates
4.7 2D Transformations using Homogeneous Coordinates
4.8 Let us sum up
4.9 References and Suggested Reading
4.10 Exercise
4.0 OBJECTIVES
4.1 INTRODUCTION
4.2 ROTATION
Rotation about origin: The pivot point here is the origin. We can
obtain the transformation equations for rotating a point (x, y) through an
angle θ to obtain the final point (x', y') with the help of the figure below.

(Figure: point (x, y) at distance r from the origin, rotated through
angle θ to (x', y').)
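The rotation equations x' = x cos θ − y sin θ, y' = x sin θ + y cos θ can be sketched directly in C++ (rotateAboutOrigin is an illustrative name, not from the text):

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Counterclockwise rotation of the point (x, y) about the origin
// through angle theta (in radians):
//   x' = x cos(theta) - y sin(theta)
//   y' = x sin(theta) + y cos(theta)
std::pair<double,double> rotateAboutOrigin(double x, double y, double theta) {
    double c = std::cos(theta), s = std::sin(theta);
    return { x * c - y * s, x * s + y * c };
}
```

Rotation about an arbitrary pivot (x1, y1) is the same computation after translating the pivot to the origin and back, as described later in this chapter.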
1. Find the new equation of the line in new coordinates (x', y') resulting
from a rotation of 90°. [Use the line equation y = mx + c.]
4.3 TRANSLATION
(Figure: (a) original figure and (b) translated figure, plotted on
x–y axes ranging from 0 to 20.)
Answer:
1. (15, 30), (15, 20), and (25, 20)
(Figure: rotation of point (x, y) about an arbitrary point (x1, y1)
through angle θ, giving (x', y').)
Now the second step is to rotate the object. Let A’’B’’C’’ be new
coordinates after applying rotation operation to A’B’C’, then
4.6 HOMOGENEOUS COORDINATES

A point (x/w, y/w, 1) can equivalently be written as (ax, ay, aw),
where a can be any nonzero real number. Points with w = 0 are
called points at infinity, and are not frequently used.
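With homogeneous coordinates, every 2D transformation becomes a single 3×3 matrix-vector product, so transformations compose by matrix multiplication. A minimal sketch (the names Mat3, apply and translation are ours, not from the text):

```cpp
#include <array>
#include <cassert>

using Mat3 = std::array<std::array<double,3>,3>;
using Vec3 = std::array<double,3>;  // (x, y, w)

// Apply a 3x3 homogeneous transformation to a point (x, y, 1).
Vec3 apply(const Mat3& m, const Vec3& p) {
    Vec3 r{0.0, 0.0, 0.0};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r[i] += m[i][j] * p[j];
    return r;
}

// Translation by (tx, ty) expressed as a homogeneous matrix: this is
// the key benefit, since translation is not a 2x2 linear map.
Mat3 translation(double tx, double ty) {
    Mat3 m = {{ {1, 0, tx},
                {0, 1, ty},
                {0, 0, 1} }};
    return m;
}
```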
For rotation
For scaling
Answers: 1.
2.
4.10 EXERCISE
12. Find the new coordinates of the point (2, -4) after a rotation of
30°.
13. Rotate a triangle about the origin with vertices at original
coordinates (10, 20), (10, 10), (20, 10) by 30 degrees.
14. Show that successive rotations in two dimensions are additive.
15. Obtain a matrix for two dimensional rotation transformation by
an angle θ in clockwise direction.
16. Obtain the transformation matrix to reflect a point A (x, y) about
the line y = mx + c.
Answers: 1. (√3+2, 1-2√3)
2. (-1.34, 22.32), (3.6, 13.66), and (12.32, 18.66)
5
THREE DIMENSIONAL
TRANSFORMATIONS I
Unit Structure
5.0 Objectives
5.1 Introduction
5.2 Objects in Homogeneous Coordinates
5.3 Transformation Matrix
5.4 Three-Dimensional Transformations
5.5 Scaling
5.6 Translation
5.7 Rotation
5.8 Shear Transformations
5.9 Reflection
5.10 Let us sum up
5.11 References and Suggested Reading
5.12 Exercise
5.0 OBJECTIVES
5.1 INTRODUCTION
x'   | axx  axy  axz  bx |   x
y' = | ayx  ayy  ayz  by | . y
z'   | azx  azy  azz  bz |   z
w    | 0    0    0    1  |   1
Object transformation:
Objects can be transformed using 2D and 3D transformation
techniques
5.5 SCALING
Answer: 2.
5.6 TRANSLATION
or P' = T . P
Answer: 2.
5.7 ROTATION
The above figure illustrates the rotation about the three axes.
Answer: 2.
Answer: 2.
5.9 REFLECTION
5.12 EXERCISE
Answers: 1.
2.
3.
4.
6
THREE DIMENSIONAL
TRANSFORMATIONS II
Unit Structure
6.0 Objectives
6.1 Introduction
6.2 World Coordinates and Viewing Coordinates
6.3 Projection
6.4 Parallel Projection
6.5 Perspective Projection
6.6 Let us sum up
6.7 References and Suggested Reading
6.8 Exercise
6.0 OBJECTIVES
6.1 INTRODUCTION
Answers: 1. window
6.3 PROJECTION
o Dimetric : | dx | = | dy | ≠ | dz |
o Trimetric : | dx | ≠ | dy | ≠ | dz |
o One-point:
One principal axis cut by projection plane
One axis vanishing point
o Two-point:
Two principal axes cut by projection plane
Two axis vanishing points
o Three-point:
Three principal axes cut by projection plane
Three axis vanishing points
6.8 EXERCISE
7
VIEWING AND SOLID AREA SCAN-
CONVERSION
Unit Structure:
7.0 Objectives
7.1 Introduction to viewing and clipping
7.2 Viewing Transformation in Two Dimensions
7.3 Introduction to Clipping:
7.3.1 Point Clipping
7.3.2 Line Clipping
7.4 Introduction to a Polygon Clipping
7.5 Viewing and Clipping in Three Dimensions
7.6 Three-Dimensional Viewing Transformations
7.7 Text Clipping
7.8 Let us sum up
7.9 References and Suggested Reading
7.10 Exercise
7.0 OBJECTIVES
Viewing Pipeline:
MC → WCS → VCS → nVCS → DC → PDCS
(Pictures in modeling coordinates (MC) are mapped to world
coordinates (WCS), then to viewing coordinates (VCS) and
normalized viewing coordinates (nVCS), fitted to device
coordinates (DC), and finally displayed as ready-to-display
objects in physical device coordinates (PDCS).)
2. Define window.
The process which divides the given picture into two parts,
visible and invisible, and allows us to discard the invisible part is
known as clipping. For clipping we need a reference window called
the clipping window.
(Xmax,Ymax)
(Xmin, Ymin)
Discard the points which lie outside the boundary of the clipping
window. A point (X, Y) is retained when
Xmin ≤ X ≤ Xmax and
Ymin ≤ Y ≤ Ymax
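The retention test above is a pair of range checks, which translates directly into C++ (pointVisible is an illustrative name, not from the text):

```cpp
#include <cassert>

// Point clipping against a rectangular window: a point is kept
// only if Xmin <= X <= Xmax and Ymin <= Y <= Ymax.
bool pointVisible(double x, double y,
                  double xmin, double ymin, double xmax, double ymax) {
    return x >= xmin && x <= xmax && y >= ymin && y <= ymax;
}
```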
Fig. 7.4
Discard the part of lines which lie outside the boundary of the
window.
We require:
1. To identify the point of intersection of the line and window.
2. The portion in which it is to be clipped.
The lines are divided into three categories.
a) Invisible
b) Visible
c) Partially Visible [Clipping Candidates]
Fig. 7.5
Here, the area containing the window is divided into nine
regions, and each point is assigned a four-bit region code. Each
bit takes the value 0 or 1: a bit is set to 1 if the point lies outside
the corresponding window boundary, and 0 otherwise.
Fig. 7.7
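The region-code computation can be sketched in C++ as follows. The bit assignment (left, right, bottom, top) is one common convention, assumed here for illustration; a line is trivially accepted when both endpoint codes are 0, and trivially rejected when the bitwise AND of the codes is nonzero:

```cpp
#include <cassert>

// Cohen-Sutherland region code bits (one common convention).
const int CS_LEFT = 1, CS_RIGHT = 2, CS_BOTTOM = 4, CS_TOP = 8;

// Compute the four-bit outcode of a point against the clip window:
// each set bit marks a window boundary the point lies outside of.
int outcode(double x, double y,
            double xmin, double ymin, double xmax, double ymax) {
    int code = 0;
    if (x < xmin) code |= CS_LEFT;
    else if (x > xmax) code |= CS_RIGHT;
    if (y < ymin) code |= CS_BOTTOM;
    else if (y > ymax) code |= CS_TOP;
    return code;
}
```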
Polygon Clipping
Algorithm:
1st possibility:
If the 1st vertex of an edge lies outside the window boundary
and the 2nd vertex lies inside the window boundary, both the point
of intersection with the window boundary and the second vertex
are added to the output vertex list.
Fig. 7.8
2nd possibility:
If both the vertices of an edge are inside the window
boundary, only the second vertex is added to the output vertex list.
Fig. 7.9
3rd possibility:
If the 1st vertex is inside the window and the 2nd vertex is outside,
only the intersection point is added to the output vertex list.
Fig. 7.10
4th possibility:
If both vertices are outside the window, nothing is added to
the output vertex list.
of culling objects based on their depths and clipping those that fell
between the two planes, it would be no problem. However, the
complexity arises when objects cross the boundaries of the near
and far planes similar to when objects cross the edges of the
window. The objects need to be “clipped” to the far and near
planes as well as to the edges of the window.
3D Viewing Transformation :
Look Point : the point that the eye is looking at.
View Distance : the distance that the window is from the eye.
Window Size : the height and width of the window in world space
coordinates.
Up Vector : which direction represents “up” to the viewer; this
parameter is sometimes specified as an angle of rotation about the
viewing axis.
These parameters are illustrated in the figure.
Fig.7.15
The Eye Point to the Look Point forms a viewing vector that
is perpendicular to the viewing window. If you want to define a
window that is not perpendicular to the viewing axis, additional
parameters need to be specified. The Viewing Distance specifies
how far the window is from the viewer. Note from the reading on
projections, that this distance will affect the perspective calculation.
The window size is straightforward. The Up Vector determines the
rotation of the window about the viewing vector. From the viewer’s
point of view, the window is the screen. To draw points at their
proper position on the screen, we need to define a transformation
that converts points defined in world space to points defined in
screen space. This transformation is the same as the
transformation that positions the window so that it lies on the XY
plane centered about the origin of world space.
Fig. 7.16
Fig. 7.17
Fig. 7.18
7.10 EXERCISE
8
INTRODUCTION TO SOLID AREA SCAN-
CONVERSION
Unit Structure:
8.0 Objectives
8.1 Introduction
8.2 Inside–Outside Test
8.3 Winding Number Method and Coherence Property
8.4 Polygon Filling and Seed Fill Algorithm
8.5 Scan-Line Algorithm
8.6 Priority Algorithm
8.7 Scan Conversion of Character
8.8 Aliasing, Anti-Aliasing, Half toning
8.9 Thresholding and Dithering
8.10 Let us sum up
8.11 References and Suggested Reading
8.12 Exercise
8.0 OBJECTIVE
The objective of this chapter is
To understand polygon filling techniques and algorithms.
To understand scan conversion of characters.
To understand the concepts of anti-aliasing, halftoning, thresholding
and dithering.
8.1 INTRODUCTION
1. Take any point P outside the range Xmin to Xmax and Ymin to
Ymax. Draw a scan line through P up to the point A under study.
(Xmax, Ymax)
A P
(Xmin, Ymin)
Fig. 8.1
Steps:
1. Take a point A within the range (0, 0) to (Xmax, Ymax). Join it
to any point Q outside this range.
(0,0)
Seed Fill
4 Neighbors are:
N4={ (X+1,Y), (X-1,Y),
(X,Y+1), (X,Y-1) }
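Using the N4 neighbourhood listed above, a 4-connected seed fill can be sketched in C++ with an explicit stack instead of recursion (seedFill is an illustrative name, not from the text):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// 4-connected seed (flood) fill on a small raster: starting at
// (x, y), replace every reachable pixel of oldColor with newColor,
// spreading through the N4 neighbours (X+1,Y), (X-1,Y), (X,Y+1),
// (X,Y-1).
void seedFill(std::vector<std::vector<int>>& grid, int x, int y,
              int oldColor, int newColor) {
    if (oldColor == newColor) return;
    int h = static_cast<int>(grid.size());
    int w = static_cast<int>(grid[0].size());
    std::vector<std::pair<int,int>> stack{{x, y}};
    while (!stack.empty()) {
        auto [cx, cy] = stack.back();
        stack.pop_back();
        if (cx < 0 || cy < 0 || cx >= w || cy >= h) continue;
        if (grid[cy][cx] != oldColor) continue;
        grid[cy][cx] = newColor;
        stack.push_back({cx + 1, cy});
        stack.push_back({cx - 1, cy});
        stack.push_back({cx, cy + 1});
        stack.push_back({cx, cy - 1});
    }
}
```

The explicit stack avoids the deep recursion that a naive recursive seed fill would need on large regions.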
Steps :
1. Read n.
2. Read (xi, yi) for all i = 1, 2, 3, …, n.
3. Read the edges and store them in the array E, sorted
according to the y-axis.
4. Xmin = a; Xmax = b; Ymin = c; Ymax = d.
5. Take the intersections.
6. Take the scan line y = c and scan from x = a to x = b.
7. Find the edges of E intersecting y = c by comparing the y
coordinates of their end points with y = c.
8. Activate those edges.
9. Scan through the line y = c and compute the next x position by
applying the formula
Xk+1 = Xk + 1/m
Check whether the point (Xk+1, Yk) is inside or outside the
polygon, by the inside-outside procedure. If the point (Xk+1, Yk) is
inside, paint it.
10. Repeat the procedure from y = c to y = d.
Scan line
Fig. 8.4
Here, all the pixels inside the polygon can be painted without
leaving any of the neighboring pixels. If the point of intersection of
an edge and the scan line is a vertex, we shorten one of the edges:
the vertex then contributes one intersection when the endpoints of
the two edges lie on opposite sides of the scan line, and two
intersections when both endpoints lie on the same side of the
scan line.
The scan line filling algorithm can be applied for the curve
closed boundary as follows:
Meanings:
Glyph: In information technology, a glyph (pronounced GLIPH;
from a Greek word meaning carving) is a graphic symbol that
provides the appearance or form of a character. A glyph can be an
alphabetic or numeric font or some other symbol that pictures an
encoded character.
Contour: A line drawn on a map connecting points of equal
height or an outline especially of curving or irregular figure:
SHAPE
Character fonts, such as letters and digits, are the building
blocks of textural content of an image presented in variety of styles
and attributes. Character fonts on raster scanned display devices
are usually represented by arrays of bits that are displayed as a
matrix of black and white dots. Value for Black - 0 and White - 1.
3. Filling: Using the sorted intersections, runs of pixels are set for
each scan line of the bitmap from top to bottom.
Aliasing:
Aliasing is the distortion of information due to low- frequency
sampling. Low- frequency sampling results in highly periodic
images being rendered incorrectly. For example, a fence or building
might appear as a few broad stripes rather than many individual
smaller stripes.
Anti-Aliasing:
Anti-aliasing is the process of blurring sharp edges in
pictures to get rid of the jagged edges on lines. After an image is
rendered, some applications automatically anti-alias images. The
program looks for edges in an image, and then blurs adjacent
pixels to produce a smoother edge. In order to anti-alias an image
when rendering, the computer has to take samples smaller than a
pixel in order to figure out exactly where to blur and where not to.
Fig. 8.5
Half Toning :
Many hardcopy devices are bi-level: they produce just two
intensity levels. To expand the range of available intensities,
halftoning (clustered-dot ordered dither) is used.
Fig. 8.6
Dither matrix:    Resulting dot pattern:
1 0 7             0 0 7
4 6 2             0 7 0
7 5 3             7 7 0
8.12 EXERCISE
9
INTRODUCTION TO CURVES
Unit Structure:
9.0 Objective
9.1 Introduction
9.2 Curve Continuity
9.3 Conic Curves
9.4 Piecewise Curve Design
9.5 Parametric Curve Design
9.6 Spline Curve Representation
9.7 Bezier Curves
9.8 B-Spline Curves
9.9 Difference between Bezier Curves and B-Spline Curves
9.10 Fractals and their applications
9.11 Let us sum up
9.12 References and Suggested Reading
9.13 Exercise
9.0 OBJECTIVE
9.1 INTRODUCTION
f(x, y) = Ax² + Bxy + Cy² + Dx + Ey + F
∂f(x, y)/∂x = 2Ax + By + D
∂f(x, y)/∂y = Bx + 2Cy + E
Fig. 9.4
2. To make curves with more than order control points, you can
join two or more curve segments into a _________________.
Fig. 9.7
Bezier curve
A Bezier curve section can be fitted to any number of control
points. The number of control points to be approximated and their
relative positions determine the degree of the Bezier polynomial. A
Bezier curve can be specified with boundary conditions or with
blending functions. Suppose we are given n + 1 control point
positions Pk = (Xk, Yk, Zk), with k varying from 0 to n. These
coordinate points can be blended to produce the following position
vector P(u), which describes the path of an approximating Bezier
polynomial function between P0 and Pn:
P(u) = Σ(k=0..n) Pk BEZk,n(u),   0 ≤ u ≤ 1.
Why use?
1. Easy to implement
2. Reasonably powerful in curve design.
3. Efficient methods for determining coordinate positions along a
Bezier curve can be set up using recursive calculations.
C(n, k) = ((n − k + 1)/k) C(n, k − 1),   n ≥ k.
Properties:
1. Bezier curves always pass through the first and last
control points.
2. The slope at the beginning of the curve is along the line joining
the first two control points, and the slope at the end of the curve is
along the line joining the last two control points.
3. It lies within the convex hull of the control points.
At u = 0 and u = 1 the only nonzero blending functions are BEZ0,3
and BEZ3,3 respectively. Thus, the cubic curve always passes
through the control points P0 and P3.
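A Bezier point P(u) can be evaluated without explicitly forming the Bernstein blending functions by de Casteljau's repeated linear interpolation, which computes the same blend of the control points (the struct and function names are ours, not from the text):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Evaluate a Bezier curve at parameter u (0 <= u <= 1) by
// de Casteljau's algorithm: repeatedly interpolate adjacent control
// points with weights (1-u) and u until one point remains. This is
// numerically equivalent to the Bernstein-polynomial blend P(u).
Point bezier(std::vector<Point> p, double u) {
    for (size_t level = p.size() - 1; level > 0; --level)
        for (size_t k = 0; k < level; ++k)
            p[k] = { (1 - u) * p[k].x + u * p[k + 1].x,
                     (1 - u) * p[k].y + u * p[k + 1].y };
    return p[0];
}
```

At u = 0 and u = 1 the result reduces to the first and last control points, matching the endpoint-interpolation property stated above.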
Table 9.1
Introduction:
Nature:
Fractals have become immensely popular for describing and
creating natural objects like mountains, clouds, flames, etc. that
cannot be described in terms of mathematical geometry using
triangles or squares. In Computer Graphics, modeling techniques
generally assume that an object is a collection of lines or polygons
or that it can be described by higher order polynomials e.g. Bezier
or B-Spline curves. While these techniques efficiently model solid
objects like cars, roads, houses etc. they are not well adapted to
representation of natural object features like terrains, snow, smoke,
etc.
Bacteria Cultures:
Some of the most amazing applications of fractals can be
found in such distant areas as the shapes of bacteria cultures. A
bacteria culture is all bacteria that originated from a single ancestor
and are living in the same place. When a culture is growing, it
spreads outwards in different directions from the place where the
original organism was placed. Just like plants the spreading
bacteria can branch and form patterns which turn out to be fractal.
The spreading of bacteria can be modeled by fractals such as the
diffusion fractals, because bacteria spread similarly to nonliving
materials.
Biological systems:
Fractal and chaos phenomena specific to non-linear systems
are widely observed in biological systems. A study has
established an analytical method based on fractals and chaos
theory for two patterns: the dendrite pattern of cells during
development in the cerebellum and the firing pattern of intercellular
potential. Variation in the development of the dendrite stage was
evaluated with the fractal dimension, enabling the high order seen
there to be quantified.
Origin of Fractals:
The fact that any small part of the coast will look similar to
the whole thing was first noted by Benoit Mandelbrot. He called
shapes like this fractals. In nature, you can find them everywhere.
Any tree branch, when magnified, looks like the entire tree. Any
rock from a mountain looks like the entire mountain. The theory of
fractals was first developed to study nature. Now it is used in a
variety of other applications. And, of course, beauty is what makes
them popular! And now fractal geometry is providing us with a new
perspective to view the world, from creating real-life landscapes to
data compression, music, etc.
Fig. 9.9
Classification of fractals:
Now, take one of these lines and plot locations at smaller intervals
of time. It is observed that a smaller fragmented line made up of
randomly located parts exists. If one of these lines is taken, it is
found that it is made up of smaller lines as well. However, this self-
similarity is different. Although each line is composed of smaller
lines, the lines are random instead of being fixed.
Figure                    Dimension D    Pieces P
Square                    2              4 = 2^2
Cube                      3              8 = 2^3
Any self-similar figure   D              P = 2^D
Sierpinski triangle       1.58           3 = 2^D
Table 9.2
Fractal Curves :
Koch curve
A variant of the Koch curve which uses only right-angles.
Variables: F
Constants: + −
Start: F
Rules: (F → F+F−F−F+F)
n = 2:
F+F-F-F+F+F+F-F-F+F-F+F-F-F+F-F+F-F-F+F+F+F-F-F+F
Fig. 9.13
n = 3:
F+F-F-F+F+F+F-F-F+F-F+F-F-F+F-F+F-F-F+F+F+F-F-F+F+F+F-F-F+F+F+F-F-F+F-F+F-F-F+F-F+F-F-F+F+F+F-F-F+F-F+F-F-F+F+F+F-F-F+F-F+F-F-F+F-F+F-F-F+F+F+F-F-F+F-F+F-F-F+F+F+F-F-F+F-F+F-F-F+F-F+F-F-F+F+F+F-F-F+F+F+F-F-F+F+F+F-F-F+F-F+F-F-F+F-F+F-F-F+F+F+F-F-F+F
Sierpinski triangle
Dragon curve
The Dragon curve drawn using an L-system.
Variables: X Y F
Constants: + −
Start: FX
Rules: (X → X+YF+),(Y → -FX-Y)
Angle: 90°
Fractal plant
Variables: X F
Constants: + −
Start: X
Rules: (X → F-[[X] +X] +F [+FX]-X), (F → FF)
Angle: 25°
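The L-system fractals above are all generated by the same mechanism: one rewriting step replaces each variable by its production and copies the constants. A sketch for the quadratic Koch rule F → F+F−F−F+F given earlier (rewrite is an illustrative name, not from the text):

```cpp
#include <cassert>
#include <string>

// One generation of L-system rewriting: every occurrence of the
// variable F is replaced by its production; the constants + and -
// (and brackets, for the plant) are copied unchanged. Shown here
// with the quadratic Koch rule F -> F+F-F-F+F.
std::string rewrite(const std::string& s) {
    std::string out;
    for (char c : s) {
        if (c == 'F') out += "F+F-F-F+F";
        else out += c;
    }
    return out;
}
```

Applying rewrite twice to the start string "F" produces exactly the n = 2 string listed for the Koch variant above; a turtle-graphics interpreter then turns the final string into line segments.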
Bezier curves do not allow for local control of the curve shape. If
we reposition any one of the control points, the entire curve will
be affected.
9.13 EXERCISE
10
SURFACE DESIGN AND VISIBLE
SURFACES
Unit Structure:
10.0 Objectives
10.1 Introduction
10.2 Types of surfaces
10.2.1 Bilinear Surfaces
10.2.2 Ruled Surfaces
10.2.3 Developable Surfaces
10.2.4 Coons Patch
10.2.5 Sweep Surfaces
10.2.6 Surface of Revolution
10.2.7 Quadric Surfaces
10.3 Constructive Solid Geometry
10.3.1 Bezier Surfaces
10.3.2 BSpline Surfaces
10.3.3 Subdivision Surfaces
10.4 Introduction to visible and hidden surfaces
10.5 Coherence for visibility
10.6 Extents and Bounding Volumes
10.7 Back Face Culling
10.8 Painter’s Algorithm
10.9 Z-Buffer Algorithm
10.10 Floating Horizon Algorithm
10.11 Roberts Algorithm
10.12 Let us sum up
10.13 References and Suggested Reading
10.14 Exercise
10.0 OBJECTIVE
10.1 INTRODUCTION
Bilinear surfaces
Ruled surfaces
Developable surfaces
Coons patch
Sweep surfaces
Surface of revolution
Quadric surfaces
Fig. 10.1
Fig. 10.4
B) Rotational sweep
In rotational sweep a 2D shape or curve is rotated about the
axis of rotation to produce a 3D object. In general, for sweep
representation we are allowed to use any path curve and 2D shape.
For rotational sweep we are allowed to rotate from 0° to 360°.
When translational sweep and rotational sweep are combined to
get a 3D object, the corresponding sweep representation is called
a general sweep.
{(x, y, z) | x² + y² = 1}   Cylinder
{(x, y, z) | x² + y² + z² = 1}   Sphere
{(x, y, z) | x² + 2xy + y² + z² − 2z = 5}   ??
Fig. 10.5
Fig. 10.6
The Bezier blending functions are BEZk,n(u) = C(n, k) u^k (1 − u)^(n−k),
where C(n, k) represents the binomial coefficients. When u = 0, the
function is one for k = 0 and zero for all other points. When we combine
two orthogonal parameters, we find a Bezier curve along each edge of
the surface, as defined by the points along that edge. Bezier
surfaces are useful for interactive design and were first applied to
car body design.
Types of coherence:
volume for the single object consisting of their union, and the other
way around. Therefore it is possible to confine the description to the
case of a single object, which is assumed to be non-empty and
bounded (finite). Bounding volumes are most often used to
accelerate certain kinds of tests. In ray tracing, bounding volumes
are used in ray-intersection tests, and in many rendering
algorithms, they are used for viewing frustum tests. If the ray or
viewing frustum does not intersect the bounding volume, it cannot
intersect the object contained in the volume. These intersection
tests produce a list of objects that must be displayed. Here,
displayed means rendered or rasterized. In collision detection,
when two bounding volumes do not intersect, then the contained
objects cannot collide, either.
True or False.
1. In ray tracing, bounding volumes are used in ray-intersection
tests.
2. Coherence is the result of global similarity.
Fig. 10.7
Back-Face Culling in VCS
A first attempt at performing back-face culling might directly
use the z-component of the surface normal, as expressed in VCS.
This does not always work, however; a better strategy is to
construct the plane equation for the polygon and to test whether
the eye-point falls above or below this plane. Plane(Peye) < 0
implies the eye-point is below the plane containing the polygon
and that the polygon should thus be culled. (Fig. 10.8)
Computing Surface Normals
In order to do the face culling, we need a surface normal.
Method 1
Use the cross-product of two polygon edges. The order in which
the vertices are stored should be consistent. For example, if
polygon vertices are stored in CCW order when viewed from above
the ‘front face’, then we could use
N = ( P2 - P1 ) x ( P3 - P2 )   (Fig. 10.9)
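Method 1 is a direct computation, sketched here in C++ (the struct and function names are ours, not from the text):

```cpp
#include <cassert>

struct Vec3d { double x, y, z; };

// Standard vector cross product.
Vec3d cross(Vec3d a, Vec3d b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Surface normal from two polygon edges, N = (P2 - P1) x (P3 - P2),
// assuming vertices are stored in CCW order viewed from the front
// face (so the normal points toward the viewer of that face).
Vec3d surfaceNormal(Vec3d p1, Vec3d p2, Vec3d p3) {
    Vec3d e1 { p2.x - p1.x, p2.y - p1.y, p2.z - p1.z };
    Vec3d e2 { p3.x - p2.x, p3.y - p2.y, p3.z - p2.z };
    return cross(e1, e2);
}
```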
Method 2
A more robust method is to use the areas projected onto the
yz, xz, and xy planes. To see that areas can be used to calculate
a normal, first consider the 2D case (Fig. 10.10). The areas for
the required 3D projections (and thus the components of the
normal) can be calculated as shown in Fig. 10.11.
Painter’s Algorithm
• Sort polygons by farthest depth.
• Check if polygon is in front of any other.
• If no, render it.
• If yes, has its order already changed backward?
– If no, render it.
– If yes, break it apart.
Which polygon is in front? Our strategy: apply a series of tests.
– First tests are cheapest
– Each test says poly1 is behind poly2, or maybe.
1. If the min z of poly1 > the max z of poly2, then poly1 is in back.
2. The plane of the polygon with smaller z is closer to viewer than
other polygon.
(a, b, c) · (x, y, z) ≥ d.
3. The plane of polygon with larger z is completely behind other
polygon.
4. Check whether they overlap in image
a. Use axial rectangle test.
b. Use complete test.
Disadvantages
– Have to sort first
– Need to split polygons to solve cyclic and intersecting objects
Z-Buffer Algorithm
Here, the depth of the surface is given by the z-coordinate.
The algorithm compares the depth of each pixel position. We use
normalized coordinates, so that the range of depth (z) values
varies from 0 to 1.
Algorithm:
1) Initialize depth (x,y)=0
Framebuffer (x,y)=I background for all (x,y)
2) Compute the z-Buffer values by using the equation of the plane.
Ax+By+Cz+D=0
[here, we store information about all the polygonal surface
included in the picture.]
The pixels are scanned by the scan-line incremental method.
z = -(Ax + By + D)/C, i.e., for any pixel position (xk, yk) the depth is
zk = -(Axk + Byk + D)/C
The next pixel position on the scan line is (xk + 1, yk), for which
zk+1 = -(A(xk + 1) + Byk + D)/C = zk - A/C
so the depth values are calculated recursively. Moving to the next
scan line along an edge of slope m,
zk+1 = zk + ((A/m) + B)/C
The surface may dip below the closest horizon and the bottom
of the surface should be visible
o Keep two horizon arrays, one that floats up and one that floats
down
In image space, aliasing can occur when curves cross
previously drawn curves
Example:
o Let the curves at z = 0, z = 1 and z = 2 be defined by
y = 20 - x for 0 ≤ x ≤ 20, y = x - 20 for 20 ≤ x ≤ 40;
y = x for 0 ≤ x ≤ 20, y = 40 - x for 20 ≤ x ≤ 40;
y = 10, respectively.
Set h[0..40] = 0 and for z = 0, 1, 2 compute the value
of y, saving the higher y and drawing the curves.
Algorithm:
1. { start
Eliminate the hidden plane{
2. For each volume in the scene{
3.
i) form volume matrix list
ii) compute the equation of plane for each face polygon of the
volume.
iii) Check the sign of the plane equation.
iv) find a point which lies inside the volume and find the dot
product of the equation of plane with this point. If the dot
product is less than zero change the sign of the plane
equation.
v) form the modified volume matrix.
vi) Multiply by the inverse of the viewing transformation.
vii) Compute and store the bounding box values as Xmin, Xmax,
Ymin, Ymax, Zmin, Zmax for the transformation volume.
1. Identify the hidden plane.{
i) Take the dot product of test point at infinity and transform
volume matrix.
ii) If the dot product is < 0 then the plane is the hidden plane.
iii) Eliminate entire polygon forming that plane. This eliminates
the necessity for separately identifying hidden lines.
}
2. Eliminate the line segments for each volume hidden by on other
volume in the scene.
If there is only one volume ---- STOP.
Else
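The hidden-plane test and the sign correction above can be sketched as follows (a minimal sketch; a plane is stored as its homogeneous coefficients (A, B, C, D), and the test point at infinity on the +z axis, E = (0, 0, 1, 0), is an assumed viewing direction):

```python
def plane_is_hidden(plane, eye_at_infinity=(0.0, 0.0, 1.0, 0.0)):
    # Dot product of the test point at infinity with the plane; if it is
    # negative, the plane faces away from the viewer and is hidden.
    return sum(p * e for p, e in zip(plane, eye_at_infinity)) < 0

def orient_plane(plane, inside_point):
    # Step iv: take the dot product of the plane equation with a point
    # inside the volume; if it is less than zero, change the sign.
    A, B, C, D = plane
    x, y, z = inside_point
    if A * x + B * y + C * z + D < 0:
        return (-A, -B, -C, -D)
    return plane
```

Because whole back-facing polygons are discarded, no separate hidden-line pass is needed for them, exactly as step 3(iii) states.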
10.14 EXERCISE
11
OBJECT RENDERING
Unit Structure:
11.0 Objectives
11.1 Introduction
11.2 Light Modeling Techniques
11.3 Illumination Model
11.4 Shading:
11.4.1 Flat Shading
11.4.2 Polygon Mesh Shading
11.4.3 Gouraud Shading Model
11.4.4 Phong Shading
11.5 Transparency Effect
11.6 Shadows
11.7 Texture and Object Representation
11.8 Ray Tracing
11.9 Ray Casting
11.10 Radiosity
11.11 Color Models
11.12 Let us sum up
11.13 References and Suggested Reading
11.14 Exercise
11.0 OBJECTIVE
11.1 INTRODUCTION
11.4 SHADING
Fig 11.1
Fig. 11.2
The complete Phong shading model for a single light source is:

I = (r_a, g_a, b_a) + (r_d, g_d, b_d) max(0, n . L)
    + (r_s, g_s, b_s) max(0, R . V)^p

and for multiple light sources i, each with color (L_r, L_g, L_b)_i:

I = (r_a, g_a, b_a) + sum_i (L_r, L_g, L_b)_i [ (r_d, g_d, b_d) max(0, n . L_i)
    + (r_s, g_s, b_s) max(0, R_i . V)^p ]
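A per-channel evaluation of the single-light form can be sketched as follows (a minimal sketch; all vectors are assumed to be unit length, and R is taken to be the reflection of L about the normal):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(l, n):
    # reflection of the light direction l about the unit normal n
    d = dot(n, l)
    return tuple(2 * d * ni - li for ni, li in zip(n, l))

def phong(ambient, diffuse, specular, p, n, l, v):
    """Single-light Phong: ambient + diffuse * max(0, n.L)
    + specular * max(0, R.V)^p, evaluated per color channel."""
    nl = max(0.0, dot(n, l))
    rv = max(0.0, dot(reflect(l, n), v))
    return tuple(a + d * nl + s * (rv ** p)
                 for a, d, s in zip(ambient, diffuse, specular))
```

With the light, normal, and viewer all aligned, every term contributes fully; at grazing incidence only the ambient term survives.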
Fig. 11.3
True or False
1. Gouraud shading is a form of interpolated shading.
2. Flat shading fills a polygon with a single color computed from
more than one lighting calculation.
The fact that image files are always rectangular can present
some limitations in site design. It may be fine for pictures, but it is
less desirable for logos, or for images that gradually fade into the
background. For relatively simple web pages, this restriction is
easily worked around:
simply match the background of your image to the background of
your web page. If you pick the exact same color (easiest if using
pure white), the rectangular boundary of your image will be
invisible. This simple technique has been utilized for many of the
graphics on this page. This technique is less successful if your
background is more complex, however. If you use an image as a
background, for example, you can't just match one color. And
because different web browsers differ slightly in how they display
web pages, it is practically impossible to match the background of
your image to the background of your web page.
11.6 SHADOWS
Fig 11.4
For each pixel in the display, map the 4 corners of the pixel
back to the object surface (for curved surfaces, these 4 points
define a surface patch) and then map the surface patch onto the
texture map; this mapping computes the source area in the texture
map. The pixel value is then modified by a weighted sum of the
texture colors in that area.
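The mapping can be sketched with a simple box filter (a minimal sketch; the texel rectangle u0..u1, v0..v1 is assumed to come from mapping the pixel's four corners, and every texel in the area is weighted equally):

```python
def sample_area(texture, u0, v0, u1, v1):
    """texture is a 2D list of (r, g, b) colors; average all texels whose
    indices fall in the rectangle [u0, u1] x [v0, v1]."""
    total, count = [0.0, 0.0, 0.0], 0
    for v in range(v0, v1 + 1):
        for u in range(u0, u1 + 1):
            r, g, b = texture[v][u]
            total[0] += r; total[1] += g; total[2] += b
            count += 1
    return tuple(c / count for c in total)
```

Averaging over the whole source area rather than point-sampling a single texel is what suppresses the aliasing this section is concerned with.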
Ray tracing follows all rays from the eye of the viewer back
to the light sources. This method is very good at simulating
specular reflections and transparency, since the rays that are
traced through the scenes can be easily bounced at mirrors and
refracted by transparent objects.
Fig. 11.5
1. A ray is traced back from the eye position, through a pixel on
the monitor, until it intersects a surface. When the imaginary
line drawn from the eye through a pixel into the scene strikes a
polygon, three things happen.
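The first step, finding where a primary ray meets a surface, can be sketched as follows (a minimal sketch; for brevity it intersects a sphere rather than a polygon, which is an assumption, not the text's method):

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Smallest positive t with |origin + t*direction - center| = radius,
    or None if the ray misses the sphere."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c          # discriminant of the quadratic in t
    if disc < 0:
        return None                    # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None
```

Repeating this test against every object and keeping the smallest positive t gives the first visible surface along the ray, from which reflected and refracted rays can then be spawned.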
The basic goal of ray casting is to allow the best use of the
three-dimensional data and not attempt to impose any geometric
structure on it. It solves one of the most important limitations of
surface extraction techniques, namely the way in which they display
a projection of a thin shell in the acquisition space. Surface
extraction techniques fail to take into account that, particularly in
medical imaging, data may originate from fluid and other materials
which may be partially transparent and should be modeled as such.
Ray casting doesn't suffer from this limitation.
11.10 RADIOSITY
RGB Model:
There are many different ways of specifying color, or color
models. The most common is the RGB color model where a color is
specified in terms of red, green and blue color components. If we
use the RGB color model, then the ambient color (reflectivity) ka of
an object is (kaR, kaG, kaB), the diffuse color (reflectivity) kd is
(kdR, kdG, kdB), and the color of the light emitted from the point
light source is (LdR, LdG, LdB).
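Using those per-channel reflectivities, an ambient-plus-diffuse evaluation can be sketched as follows (a minimal sketch; `ambient_light` is an assumed ambient light color, and n_dot_l is max(0, n . L) from the geometry):

```python
def rgb_shade(ka, ambient_light, kd, Ld, n_dot_l):
    # ka = (kaR, kaG, kaB), kd = (kdR, kdG, kdB), Ld = (LdR, LdG, LdB);
    # each channel is shaded independently.
    return tuple(ka_c * La_c + kd_c * Ld_c * n_dot_l
                 for ka_c, La_c, kd_c, Ld_c in zip(ka, ambient_light, kd, Ld))
```

Each RGB channel is simply the scalar illumination model applied with that channel's reflectivity and light intensity.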
Fig. 11.6
CMY:
It is possible to achieve a large range of colors seen by humans by
combining cyan, magenta, and yellow transparent dyes/inks on a
white substrate. These are the subtractive primary colors. Often a
fourth ink, black, is added to improve the reproduction of some dark
colors.
This is called "CMY" or "CMYK" color space. The cyan ink absorbs
red light but transmits green and blue, the magenta ink absorbs
green light but transmits red and blue, and the yellow ink absorbs
blue light but transmits red and green.
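Because the subtractive primaries are the complements of the additive ones, conversion between the two models can be sketched as (a minimal sketch, with all channels normalized to [0, 1]):

```python
def rgb_to_cmy(r, g, b):
    # each ink absorbs exactly the light its complementary primary emits
    return (1 - r, 1 - g, 1 - b)

def cmy_to_rgb(c, m, y):
    return (1 - c, 1 - m, 1 - y)
```

Pure red, for example, needs no cyan ink (cyan would absorb the red light) but full magenta and yellow.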
Fig. 11.7
Fig. 11.8
11.13 EXERCISE
12
INTRODUCTION TO ANIMATION
Unit Structure:
12.0 Objectives
12.1 Introduction
12.2 Key-Frame Animation
12.3 Construction of an Animation Sequence
12.4 Motion Control Methods
12.5 Procedural Animation
12.6 Key-Frame Animation vs. Procedural Animation
12.7 Introduction to Morphing
12.8 Three-Dimensional Morphing
12.9 Let us sum up
12.10 References and Suggested Reading
12.11 Exercise
12.0 OBJECTIVE
12.1 INTRODUCTION
Final Animation
Fig. 12.3
Fig. 12.4
Particle systems simulate the behavior of fuzzy objects, such as
clouds, smoke, fire, and water.
Flexible dynamics simulates the behavior of flexible objects, such
as cloth. A model is built from triangles, with point masses at the
triangles' vertices. Triangles are joined at edges with hinges; the
hinges open and close against springs holding the two hinge halves
together. Parameters are point masses, positions, velocities,
accelerations, spring constants, wind force, etc.
(Reference: D. Haumann and R. Parent, “The behavioral test-bed:
obtaining complex behavior from simple rules,” Visual Computer,
’88.)
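In the spirit of the parameters listed above (masses, positions, velocities, spring constants), one explicit Euler step for a single 1-D point mass on a spring can be sketched as (a minimal sketch, not the cited test-bed's method):

```python
def euler_step(x, v, mass, k, rest_len, anchor, dt):
    """One explicit Euler step for a point mass on a spring of stiffness
    k and rest length rest_len, attached to a fixed anchor."""
    force = -k * ((x - anchor) - rest_len)   # Hooke's law restoring force
    a = force / mass
    v = v + a * dt                           # update velocity, then position
    x = x + v * dt
    return x, v
```

Iterating this step over every point mass, with hinge-spring and wind forces added in, is the basic simulation loop such systems run each frame.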
Rigid body dynamics simulates dynamic interaction among rigid
objects, such as rocks and metals, taking into account various
physical characteristics, such as elasticity, friction, and mass, to
produce rolling, sliding, and collisions. Parameters for "classical"
rigid body dynamics are masses, positions, orientations, forces,
torques, linear and angular velocities, linear and angular momenta,
rotational inertia tensors, etc.
Fig. 12.5
Fluid dynamics simulates flows, waves, and turbulence of water
and other liquids.
Fur and hair dynamics generates realistic fur and hair and simulates
their behavior. Often it is tied into a rendering method.
2. Alife (artificial life) deals with things that are virtually alive.
Fig. 12.6
Behavioral animation simulates interactions of artificial lives.
Examples: flocking, predator-prey, virtual human behaviors.
Artificial evolution is the evolution of artificial life forms. The
animator plays the role of God. As artificial life forms reproduce and
mutate over time, the survival of the fittest is prescribed by the
animator's definition of "fittest" (that is artificial 'natural' selection).
See Karl Sims's works.
Branching object generation generates plants, trees, and other
objects with branching structures and simulates their behavior.
Without a procedural method, building a model of a branching
object, such as a tree with a number of branches, requires a lot of
time and effort. Branching object generation methods (L-systems
and BOGAS) employ user-defined rules to generate such objects.
Original Image
Fig. 12.7
Morphed Image
Fig. 12.8
12.11 EXERCISE