
Lighting and Shading

• Lighting: the process of computing the luminous intensity (i.e., outgoing light) at a particular 3-D point, usually on a surface
• Shading: the process of assigning colors to pixels
• Illumination: the transport of energy (in particular, the luminous flux of visible light) from light sources to surfaces & points

The light in an environment comes from a light source such as a lamp or the sun. There are three essentially different kinds of light source: point lights, directional lights, and spotlights. A point light source is located at a point in 3D space, and it emits light in all directions from that point; a lamp approximates a point light. For a directional light, all the light comes from the same direction, so the rays of light are parallel. The sun is considered a directional light source because it is so far away that its rays are essentially parallel by the time they reach the Earth.

Spotlight – a light that radiates in a cone, with more light in the center of the cone, gradually tapering off towards the sides. The simplest spotlight is just a point light restricted to a certain angle around its primary axis of direction; think of a flashlight or car headlight as opposed to a bare bulb hanging on a wire. More advanced spotlights have a falloff function that makes the light more intense at the center of the cone and softer at the edges.
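As an illustration only (not code from any particular graphics API), here is a minimal Python sketch of a possible spotlight cone test and falloff; the parameter names cutoff_deg and falloff_exp are assumptions for the example:

    import math

    def spot_attenuation(spot_dir, to_point, cutoff_deg, falloff_exp):
        # spot_dir: unit vector along the spotlight axis
        # to_point: unit vector from the light position toward the lit point
        cos_angle = sum(a * b for a, b in zip(spot_dir, to_point))
        if cos_angle < math.cos(math.radians(cutoff_deg)):
            return 0.0                                # outside the cone: no light
        return max(cos_angle, 0.0) ** falloff_exp     # brighter near the axis, softer at the edge

    # Example: a 30-degree cone pointing down the -z axis
    print(spot_attenuation((0, 0, -1), (0.1, 0.0, -0.995), 30.0, 8.0))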

In computer graphics, an approach to lighting that considers only light coming directly from the light sources in the scene is called a local illumination model. In the simplified model of light that we will use, there are three fundamental components of light: the ambient, diffuse, and specular components. We can think of each of these as follows:

Ambient: light that is present in the scene because of the overall illumination in the space. This can include light that has bounced off objects in the space and is thus not attributed to any particular light source.

Diffuse: light that comes directly from a particular light source to an object and is then scattered toward the viewer. An object reflects only a subset of the wavelengths it receives, depending on the material that makes up the object, and this is what creates the perceived color of the object.
Specular: light that comes directly from a particular light source to an object and is then reflected directly to the viewer, because the object reflects the light without strongly interacting with it and without giving the light the color of the object. This light is generally the color of the light source, not of the object that reflects it.
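The three components are usually summed. The following is a minimal Python sketch of a Phong-style local illumination computation in this spirit; the coefficients k_a, k_d, k_s and the shininess exponent are illustrative assumptions, not values from the text:

    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def scale(v, s): return tuple(x * s for x in v)
    def modulate(a, b): return tuple(x * y for x, y in zip(a, b))

    def local_illumination(n, l, v, light_color, obj_color,
                           k_a=0.1, k_d=0.7, k_s=0.5, shininess=32):
        # n: unit surface normal, l: unit vector toward the light,
        # v: unit vector toward the viewer; colors are (r, g, b) in [0, 1].
        ambient = scale(obj_color, k_a)                               # overall illumination in the space
        diff = max(dot(n, l), 0.0)
        diffuse = scale(modulate(obj_color, light_color), k_d * diff) # takes on the object's color
        # r: mirror reflection of the light direction about the normal
        r = tuple(2.0 * diff * ni - li for ni, li in zip(n, l))
        spec = max(dot(r, v), 0.0) ** shininess if diff > 0.0 else 0.0
        specular = scale(light_color, k_s * spec)                     # keeps the light source's color
        # (no clamping to [0, 1] in this sketch)
        return tuple(a + d + s for a, d, s in zip(ambient, diffuse, specular))

    # Example: light and viewer both along +z, surface facing +z.
    print(local_illumination((0, 0, 1), (0, 0, 1), (0, 0, 1),
                             (1.0, 1.0, 1.0), (0.8, 0.2, 0.2)))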
Normal Vectors

The visual effect of a light shining on a surface depends on the properties of the surface and of the light.
But it also depends to a great extent on the angle at which the light strikes the surface. The angle is
essential to specular reflection and also affects diffuse reflection. That's why a curved, lit surface looks
different at different points, even if its surface is a uniform color. To calculate this angle, OpenGL needs
to know the direction in which the surface is facing. That direction is specified by a vector that is
perpendicular to the surface. Another word for "perpendicular" is "normal," and a non-zero vector that is
perpendicular to a surface at a given point is called a normal vector to that surface. When used in lighting
calculations, a normal vector must have length equal to one. A normal vector of length one is called a unit
normal. For proper lighting calculations in OpenGL, a unit normal must be specified for each vertex.
However, given any normal vector, it is possible to calculate a unit normal from it by dividing the vector
by its length.
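For example, a minimal Python sketch of converting a normal to a unit normal (the function name is just illustrative):

    def unit_normal(n):
        # Divide a non-zero normal vector by its length to get a unit normal.
        length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
        return (n[0] / length, n[1] / length, n[2] / length)

    print(unit_normal((0.0, 3.0, 4.0)))   # -> (0.0, 0.6, 0.8)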

Since a surface can be curved, it can face different directions at different points. So, a normal vector is
associated with a particular point on a surface. In OpenGL, normal vectors are actually assigned only to
the vertices of a primitive. The normal vectors at the vertices of a primitive are used to do lighting
calculations for the entire primitive.

Note in particular that you can assign different normal vectors at each vertex of a polygon. If your real
objective is to make something that looks like a curved surface, then you want to use normal vectors that
are perpendicular to the actual surface, not to the polyhedron that approximates it. Take a look at this
example in Fig 1:

Fig 1 Fig 2

The two objects in this picture are made up of bands of rectangles. The two objects have exactly the same geometry, yet they look quite different, because different normal vectors are used in each case. For the top object, the band of rectangles is supposed to approximate a smooth surface, so the normal vector specified at each vertex is perpendicular to that smooth surface. The object on the bottom is meant to be nothing more than a band of rectangles, so the normal vectors used are actually perpendicular to the rectangles. Figure 2 is a two-dimensional illustration of the normal vectors that were used for the two objects:
Shading
Shading is the process of computing the color for the components of a scene. It is usually done by
calculating the effect of light on each object in a scene to create an effective lighted presentation. The
shading process is thus based on the physics of light, and the most detailed kinds of shading computation
can involve deep subtleties of the behaviour of light, including the way light scatters from various kinds
of materials with various details of surface treatments. Considerable research has been done in those areas
and any genuinely realistic rendering must take a number of surface details into account.

Flat shading of a polygon presents each polygon with a single color. This effect is computed by assuming that each polygon is strictly planar, so that all the points on the polygon get exactly the same lighting treatment. The term flat can be taken to mean that the color is flat. Because the planar polygon has a single normal, only a single lighting computation is done for the entire polygon, and the polygon is presented with only one color.

Smooth shading of a polygon displays the pixels in the polygon with smoothly-changing colors across the surface of the polygon. This requires that you provide information that defines a separate color for each vertex of your polygon, because the smooth color change is computed by interpolating the vertex colors across the interior of the polygon with the standard kind of interpolation we saw in the graphics pipeline discussion. The interpolation is done in screen space after the vertices' positions have been set by the projection, so the purely linear calculations can easily be done in graphics cards. This per-vertex color can be provided by your model directly, but it is often produced by per-vertex lighting computations. In order to compute the color for each vertex separately, you must define a separate normal vector for each vertex of the polygon so that the lighting model will produce different colors at each vertex.
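As a rough sketch of this interpolation, assuming screen-space barycentric weights over a triangle (the helper names are made up for the example):

    def barycentric(p, a, b, c):
        # Barycentric coordinates of 2-D point p with respect to triangle (a, b, c).
        det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
        w_b = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
        w_c = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
        return 1.0 - w_b - w_c, w_b, w_c

    def interpolate_color(p, verts, colors):
        # Linearly interpolate the three vertex colors at screen-space point p.
        w = barycentric(p, *verts)
        return tuple(sum(w[i] * colors[i][ch] for i in range(3)) for ch in range(3))

    # Example: the centroid gets the average of the three vertex colors.
    tri = ((0.0, 0.0), (10.0, 0.0), (0.0, 10.0))
    cols = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
    print(interpolate_color((10 / 3, 10 / 3), tri, cols))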

Examples of flat and smooth shading

Figure 8.5 shows two different images of the same relatively coarsely-defined function surface, with flat shading (left) and with smooth shading (right), to illustrate the difference. Clearly the smooth-shaded image is much cleaner, but there are still some areas where the triangles change direction very quickly, and the boundaries between the triangles still show as color variations in the smoothly-shaded image.
Smooth shading is very nice—probably nicer than flat shading in many applications—but it isn't perfect.

The computation for this smooth shading uses simple polygon interpolation in screen space. Because each
vertex has its own normal, the lighting model computes a different color for each vertex. The
interpolation then calculates colors for each pixel in the polygon that vary smoothly across the polygon
interior, providing a smooth color graduation across the polygon. This interpolation is called Gouraud
shading and is one of the standard techniques for creating images. It is quick to compute but because it
only depends on colors at the polygon vertices, it can miss lighting effects within polygons.
Each polygon surface is rendered with Gouraud shading by performing the
following calculations:
• Determine the average unit normal vector at each polygon vertex.
• Apply an illumination model to each vertex to calculate the vertex intensity.
• Linearly interpolate the vertex intensities over the surface of the polygon.
At each polygon vertex, we obtain a normal vector by averaging the surface normals of all polygons sharing that vertex, as illustrated in Fig. 14-44. Thus, for any vertex position V shared by n polygons with surface normals N1, N2, ..., Nn, we obtain the unit vertex normal with the calculation

N_V = (N1 + N2 + ... + Nn) / |N1 + N2 + ... + Nn|

Figure 14.45 demonstrates the next step: interpolating intensities along the polygon edges. For each scan line, the intensity at the intersection of the scan line with a polygon edge is linearly interpolated from the intensities at the edge endpoints.
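As a minimal illustration of these three steps, here is a Python sketch; the Lambert diffuse term stands in for whatever illumination model is actually used, and all function names are made up for the example:

    def normalize(v):
        length = sum(x * x for x in v) ** 0.5
        return tuple(x / length for x in v)

    def vertex_normal(face_normals):
        # Step 1: average the surface normals of the polygons sharing the vertex.
        summed = tuple(sum(n[i] for n in face_normals) for i in range(3))
        return normalize(summed)

    def vertex_intensity(n, light_dir):
        # Step 2: apply an illumination model at the vertex (assumed: Lambert diffuse).
        return max(sum(a * b for a, b in zip(n, normalize(light_dir))), 0.0)

    def scanline_intensities(i_left, i_right, x_left, x_right):
        # Step 3: linearly interpolate the edge intensities across one scan line.
        steps = int(x_right) - int(x_left)
        if steps <= 0:
            return [i_left]
        return [i_left + (i_right - i_left) * k / steps for k in range(steps + 1)]

    # Example: a vertex shared by two faces, lit from straight ahead.
    n_v = vertex_normal([(0.0, 0.3, 1.0), (0.0, -0.3, 1.0)])
    print(vertex_intensity(n_v, (0.0, 0.0, 1.0)))
    print(scanline_intensities(0.2, 0.8, 10, 14))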
Phong Shading
A more accurate method for rendering a polygon surface is to interpolate normal vectors and then apply the illumination model at each surface point. This method, developed by Phong Bui Tuong, is called Phong shading, or normal-vector interpolation shading. It displays more realistic highlights on a surface and greatly reduces the Mach-band effect. A polygon surface is rendered using Phong shading by carrying out the following steps:
• Determine the average unit normal vector at each polygon vertex.
• Linearly interpolate the vertex normals over the surface of the polygon.
• Apply an illumination model along each scan line to calculate projected pixel intensities for the surface points.
Interpolation of surface normals along a polygon edge between two vertices is illustrated in Fig. 14.48. The normal vector N for the scan-line intersection point along the edge between vertices 1 and 2 can be obtained by vertically interpolating between the edge endpoint normals:

N = ((y - y2) / (y1 - y2)) N1 + ((y1 - y) / (y1 - y2)) N2

where y is the scan-line position and y1, y2 are the y coordinates of the two vertices.

Incremental methods are used to evaluate normals between scan lines and along each individual
scan line. At each pixel position along a scan line, the illumination model is applied to determine
the surface intensity at that point. Intensity calculations using an approximated normal vector at
each point along the scan line produce more accurate results than the direct interpolation of
intensities, as in Gouraud shading. The trade-off, however, is that Phong shading requires
considerably more calculations.
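To contrast with Gouraud shading, here is a minimal Python sketch (with an assumed Lambert-only illumination model) that interpolates normals across a scan line, renormalizes them, and shades each pixel individually:

    def normalize(v):
        length = sum(x * x for x in v) ** 0.5
        return tuple(x / length for x in v)

    def phong_scanline(n_left, n_right, x_left, x_right, light_dir):
        # Interpolate the normal across the scan line, renormalize it,
        # and apply the illumination model (here: Lambert diffuse) per pixel.
        l = normalize(light_dir)
        steps = max(int(x_right) - int(x_left), 1)
        intensities = []
        for k in range(steps + 1):
            t = k / steps
            n = normalize(tuple((1 - t) * a + t * b for a, b in zip(n_left, n_right)))
            intensities.append(max(sum(ni * li for ni, li in zip(n, l)), 0.0))
        return intensities

    # Example: the normal tilts from straight-on to 45 degrees across four pixels.
    print(phong_scanline((0.0, 0.0, 1.0), (1.0, 0.0, 1.0), 10, 14, (0.0, 0.0, 1.0)))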

TEXTURES

In 3D graphics, a texture is the digital representation of the surface of an object. In addition to two-dimensional qualities, such as color and brightness, a texture can also be encoded with three-dimensional properties, such as how transparent and reflective the object is.

TEXTURE MAPPING

Texture mapping is a graphic design process in which a two-dimensional (2-D) surface, called a texture map, is "wrapped around" a three-dimensional (3-D) object. Thus, the 3-D object acquires a surface texture similar to that of the 2-D surface. Texture mapping is the electronic equivalent of applying wallpaper, paint, or veneer to a real object.

A common method for adding surface detail is therefore to map texture patterns onto the surfaces of objects. The texture pattern may either be defined in a rectangular array or as a procedure that modifies surface intensity values. This approach is referred to as texture mapping or pattern mapping.

Mapping a texture pattern to pixel coordinates is sometimes called texture scanning, while the mapping from pixel coordinates to texture space is referred to as pixel-order scanning, inverse scanning, or image-order scanning.

To simplify calculations, the mapping from texture space (s, t) to object space (u, v) is often specified with parametric linear functions:

u = f_u(s, t) = a_u s + b_u t + c_u
v = f_v(s, t) = a_v s + b_v t + c_v

The object-to-image space mapping is accomplished with the concatenation of the viewing and projection transformations. A disadvantage of mapping from texture space to pixel space is that a selected texture patch usually does not match up with the pixel boundaries, thus requiring calculation of the fractional area of pixel coverage.

Therefore, mapping from pixel space to texture space is the most commonly used texture-mapping method. This avoids pixel-subdivision calculations and allows antialiasing (filtering) procedures to be easily applied.
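A minimal sketch of this pixel-order (inverse) mapping, assuming the pixel's interpolated texture coordinates (u, v) are already available and using nearest-texel lookup with no filtering:

    def sample_texture(texture, u, v):
        # texture: 2-D list of texel colors, indexed texture[row][col].
        # (u, v) are the interpolated texture coordinates of the pixel, in [0, 1].
        rows, cols = len(texture), len(texture[0])
        col = min(int(u * cols), cols - 1)   # nearest-texel lookup (no filtering)
        row = min(int(v * rows), rows - 1)
        return texture[row][col]

    # Example: a 2x2 checkerboard sampled at the centre of each quadrant.
    checker = [[(0, 0, 0), (255, 255, 255)],
               [(255, 255, 255), (0, 0, 0)]]
    print(sample_texture(checker, 0.25, 0.25), sample_texture(checker, 0.75, 0.25))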

PROCEDURAL TEXTURING METHODS


Another method for adding surface texture is to use procedural definitions of the colour variations that are to be applied to the objects in a scene. This approach avoids the transformation calculations involved in transferring two-dimensional texture patterns to object surfaces. When values are assigned throughout a region of three-dimensional space, the object colour variations are referred to as solid textures.

Values from texture space are transferred to object surfaces using procedural methods, since it is usually
impossible to store texture values for all points throughout a region of space.

BUMP MAPPING

Although texture mapping can be used to add fine surface detail, it is not a good method for modelling the surface roughness that appears on objects such as oranges, strawberries, and raisins. The illumination detail in the texture pattern usually does not correspond to the illumination direction in the scene. A better method for creating surface bumpiness is to apply a perturbation function to the surface normal and then use the perturbed normal in the illumination-model calculations. This technique is called bump mapping.

If P(u, v) represents a position on a parametric surface, we can obtain the surface normal at that point with
the calculation

N = Pu x Pv

where Pu and Pv are the partial derivatives of P with respect to parameters u and v. To obtain a perturbed normal, we modify the surface-position vector by adding a small perturbation function, called a bump function:

P'(u, v) = P(u, v) + b(u, v) n

where n = N/|N| is the unit surface normal. The perturbed surface normal is then N' = P'u x P'v.
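As an illustration of these formulas, here is a small Python sketch that builds the perturbed surface P'(u, v) = P(u, v) + b(u, v) n numerically (by central differences) and returns the perturbed unit normal N' = P'u x P'v; the helper names and the example plane/bump functions are assumptions for the example:

    import math

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def normalize(v):
        length = sum(x * x for x in v) ** 0.5
        return tuple(x / length for x in v)

    def partials(F, u, v, eps=1e-4):
        # Central-difference partial derivatives of a vector-valued F(u, v).
        Fu = tuple((a - c) / (2 * eps) for a, c in zip(F(u + eps, v), F(u - eps, v)))
        Fv = tuple((a - c) / (2 * eps) for a, c in zip(F(u, v + eps), F(u, v - eps)))
        return Fu, Fv

    def perturbed_normal(P, b, u, v):
        # P(u, v): point on the parametric surface; b(u, v): bump function.
        def P_prime(uu, vv):
            Pu, Pv = partials(P, uu, vv)
            n = normalize(cross(Pu, Pv))          # unit normal of the unperturbed surface
            return tuple(p + b(uu, vv) * ni for p, ni in zip(P(uu, vv), n))
        Pu, Pv = partials(P_prime, u, v)
        return normalize(cross(Pu, Pv))           # N' = P'u x P'v

    # Example: a flat plane with a sinusoidal bump along u.
    plane = lambda u, v: (u, v, 0.0)
    bump = lambda u, v: 0.05 * math.sin(20 * u)
    print(perturbed_normal(plane, bump, 0.1, 0.3))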


FRAME MAPPING

This technique is an extension of bump mapping. In frame mapping, we perturb both the surface normal N and a local coordinate system attached to N. The local coordinates are defined with a surface-tangent vector T and a binormal vector B = T x N.

Frame mapping is used to model anisotropic surfaces. We orient T along the "grain" of the surface and apply directional perturbations, in addition to bump perturbations in the direction of N. In this way, we can model wood-grain patterns, cross-thread patterns in cloth, and streaks in marble or similar materials. Both bump and directional perturbations can be obtained with table lookups.

WHY TEXTURE MAPPING?

Instead of filling a polygon with a single colour in the scan-conversion process, we fill the pixels of the polygon with the pixels of the texture (texels). Texture mapping is used to:

• add detail
• add 'roughness'
• add patterns
DISADVANTAGE OF TEXTURE MAPPING

In some mappings, the correspondence between the 2-D texture map and the 3-D object's surface becomes "messy." An example is the application of a pattern of squares to the surface of a sphere, where discontinuities might occur.

APPLICATIONS OF TEXTURE MAPPING

• Volume and flow visualization


• Specular and Diffuse Reflections
• Add visual detail to surfaces of 3D objects
IMAGE RENDERING AND ITS TECHNIQUES

Rendering or image synthesis is the automatic process of generating an image from a 2D or 3D model (a scene file) by means of computer programs. The result of displaying such a model can be called a render.

WHAT IS IMAGE RENDERING?

A scene file contains objects in a strictly defined language or data structure.


It mainly consists of:
1. Geometry
2. Texture
3. Lighting
4. Shading
5. Viewpoint, etc.
The data contained in the scene file is then passed to a rendering program to be processed and output to a
digital image or raster graphics image file.

RENDERING TECHNIQUES

Some of the most important rendering techniques include:


1. Slicing
2. Volume Rendering
3. Iso-surface Extraction
4. Ray Casting

SLICING

One approach is to keep three copies of the rectilinear volume data set and use them as three perpendicular stacks of object-aligned texture slices, one stack for each principal axis. Slices are taken through the volume orthogonal to each of the principal axes, and the resulting information for each slice is represented as a 2D texture that is then pasted onto a square polygon of the same size.

Each slice represents a different view of the object, as if seen from a different perspective.

ISO – SURFACE EXTRACTION

An iso-surface is a three-dimensional analog of an isoline (contour line). It is a surface that represents points of a constant value (e.g. pressure, temperature, velocity, density) within a volume of space.

There are various methods for extracting iso-surfaces from volumetric data; marching cubes, marching tetrahedra, and ray-tracing methods are the most widely used. There are many specific techniques to increase the speed of computation and decrease memory requirements. The precision of the iso-surface extraction is also very important, although it is not usually discussed.

VOLUME RENDERING

Volume rendering is a visualization technique that enables users to see into 3D volumetric datasets.
Volume rendering can make parts of the data transparent or opaque depending on a volume transfer
function, which maps data values to colors and opacities.

Volume rendering techniques have been developed to overcome problems with the accurate representation of surfaces in the iso-surface techniques.
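A minimal sketch of the idea for a single ray, with an assumed transfer function and simple front-to-back compositing (all names are illustrative):

    def volume_ray(samples, transfer):
        # samples: scalar values encountered along one ray, front to back.
        # transfer(value) -> (color, opacity): the volume transfer function.
        color_acc, alpha_acc = [0.0, 0.0, 0.0], 0.0
        for value in samples:
            color, alpha = transfer(value)
            weight = (1.0 - alpha_acc) * alpha          # front-to-back compositing
            color_acc = [c + weight * cc for c, cc in zip(color_acc, color)]
            alpha_acc += weight
            if alpha_acc > 0.99:                        # early ray termination
                break
        return color_acc, alpha_acc

    # Example: low values transparent blue, high values nearly opaque white.
    tf = lambda s: (((1.0, 1.0, 1.0), 0.8) if s > 0.5 else ((0.2, 0.2, 1.0), 0.05))
    print(volume_ray([0.1, 0.2, 0.7, 0.9, 0.3], tf))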

RAY CASTING

Ray casting is the use of ray–surface intersection tests to solve a variety of problems in computer graphics
and computational geometry.

Ray casting is a rendering technique used in computer graphics and computational geometry. It is capable
of creating a three-dimensional perspective in a two-dimensional map. Developed by scientists at the
Mathematical Applications Group in the 1960s, it is considered one of the most basic graphics-rendering
algorithms. Ray casting makes use of the same geometric algorithm as ray tracing.

Ray casting can refer to a variety of problems and techniques:

1. the general problem of determining the first object intersected by a ray,
2. a technique for hidden surface removal based on finding the first intersection of a ray cast from the eye through each pixel of an image,
3. a non-recursive ray tracing rendering algorithm that only casts primary rays.
Fig. RAY CASTING

RAY TRACING

Ray Tracing is a global illumination based rendering method. It traces rays of light from the eye back
through the image plane into the scene. Then the rays are tested against all objects in the scene to
determine if they intersect any objects. If the ray misses all objects, then that pixel is shaded the
background color.

Ray tracing handles shadows, multiple reflections, and texture mapping in a very straightforward manner.

In ray tracing, a ray of light is traced in a backwards direction. That is, we start from the eye or camera
and trace the ray through a pixel in the image plane into the scene and determine what it hits. The pixel is
then set to the color values returned by the ray.
RAY CASTING


The first ray-casting algorithm used for rendering was presented by Arthur Appel in 1968.

In addition to the uses listed earlier, ray casting also names a direct volume rendering method, called volume ray casting, in which the ray is "pushed through" the object and the 3D scalar field of interest is sampled along the ray inside the object.

Ray casting is the simplest case of ray tracing and is required as the first step of recursive ray tracing.

Basic ray-casting algorithm (a minimal sketch follows the lists below):

• For each pixel (x, y), fire a ray from the COP (center of projection) through (x, y).
• For each ray and object, calculate the closest intersection.
• For the closest intersection point p:
    • Calculate the surface normal at p.
    • For each light source, calculate and add its contribution.

Critical operations:

• Ray-surface intersections
• Illumination calculation
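The following is a minimal, illustrative Python sketch of this loop for a scene made only of spheres, using a ray-sphere intersection test and a simple Lambert term at the closest hit; the function and parameter names (cast_ray, hit_sphere, light_dir) are assumptions for the example, not part of any standard API:

    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def normalize(v):
        length = dot(v, v) ** 0.5
        return tuple(x / length for x in v)

    def hit_sphere(origin, direction, center, radius):
        # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0.
        oc = sub(origin, center)
        b = 2.0 * dot(oc, direction)
        c = dot(oc, oc) - radius * radius
        disc = b * b - 4.0 * c              # direction is a unit vector, so a = 1
        if disc < 0:
            return None
        t = (-b - disc ** 0.5) / 2.0
        return t if t > 0 else None

    def cast_ray(origin, direction, spheres, light_dir):
        # Closest intersection wins; shade it with a simple Lambert term.
        closest = None
        for center, radius in spheres:
            t = hit_sphere(origin, direction, center, radius)
            if t is not None and (closest is None or t < closest[0]):
                closest = (t, center)
        if closest is None:
            return 0.0                      # ray missed everything: background
        t, center = closest
        p = tuple(o + t * d for o, d in zip(origin, direction))
        n = normalize(sub(p, center))
        return max(dot(n, normalize(light_dir)), 0.0)

    # Example: one ray from the COP straight down -z toward a sphere.
    print(cast_ray((0, 0, 0), (0, 0, -1), [((0, 0, -5), 1.0)], (1, 1, 1)))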

Ray casting is capable of transforming a limited form of data into a three-dimensional projection by tracing rays from the viewpoint into the viewing volume.

The main principle behind ray casting is that rays can be cast and traced in groups based on certain
geometric constraints.

In ray casting, a ray is cast from the camera through each pixel, and its intersection with all objects in the scene is computed. The pixel value is then obtained from the closest intersection. Ray casting is distinct from ray tracing: ray casting is a rendering algorithm that never recursively traces secondary rays, while ray tracing is capable of doing so.

Ray casting is also simple to use compared to other rendering algorithms such as ray tracing. Ray casting is fast: in the form used by early games, only a single computation is needed for every vertical line of the screen. Compared to ray tracing, ray casting is faster because it is limited by one or more geometric constraints. This is one of the reasons why ray casting was the most popular rendering tool in early 3-D video games. However, compared to ray tracing, the images generated with ray casting are not very realistic. Due to the geometric constraints involved in the process, not all shapes can be rendered by ray casting.
