
Illumination Model

Point light source


It is the simplest light source. It requires only a position in space and an intensity. It radiates energy equally in all directions. The intensity of the light is specified in the three additive primary colours, RGB.

Let the location of the light source be (Lx, Ly, Lz) and its luminance value be (Ired, Igreen, Iblue). The incident light on the surface is given by

    Ii = Ip cos(θ)

where Ip is the intensity of the light source, θ is the angle subtended by the incident ray and the surface normal, and s is the vector in the direction of the light source, so that

    s · n = |s| |n| cos(θ)

[Figure: a point light source at (Lx, Ly, Lz) illuminating a surface point P, with surface normal n and light-direction vector s carrying intensity Ii]
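As a rough illustration, here is a minimal Python sketch of this relation (the function names, the vector helpers, and the clamping at zero are ours, not from the slides):

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def normalize(v):
        length = math.sqrt(dot(v, v))
        return tuple(c / length for c in v)

    def incident_intensity(Ip, surface_point, light_pos, normal):
        # s is the unit vector from the surface point towards the light
        s = normalize(tuple(l - p for l, p in zip(light_pos, surface_point)))
        n = normalize(normal)
        # For unit vectors, s . n = cos(theta); clamped at 0 so that points
        # facing away from the light receive no direct illumination
        return Ip * max(0.0, dot(s, n))

    # A light directly above a horizontal surface: cos(theta) = 1
    print(incident_intensity(1.0, (0, 0, 0), (0, 5, 0), (0, 1, 0)))  # 1.0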

Directional light source/Distant Light source

These light sources are assumed to be located so far away that all of the incident light rays are parallel. The intensity of the light incident upon a surface is a function of the cosine of the angle formed with the surface normal and of the original light-source intensity.

[Figure: parallel rays from a distant light source striking a surface]

Spot light source


A spotlight is a directed beam with a spot angle. It has an intensity, a position, a direction, and an angle of illumination.

[Figure: a spotlight cone defined by its position, direction, and spot angle]

Ambient Light source

Illumination schemes allow for some background light level, called ambient light. It typically accounts for 20-25% of the total illumination.

Reflection models

Diffuse reflection
Specular reflection
The complete reflection expression

Diffuse reflection

Diffuse surface:

Rough surfaces such as carpets, textiles, and some papers exhibit diffuse reflection. A diffuse surface reflects the light falling on it equally in all directions, so this reflection is independent of the view direction. The diffuse term Id for a light source is given by

    Id = Ii Kd cos(θ)

where Kd is the reflection coefficient of the surface and Ii is the intensity of the light source.

[Figure: incident light of intensity Ii scattered equally in all directions by a diffuse surface with coefficient Kd]
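A minimal sketch of the diffuse term, assuming s and n are already unit vectors (the names and the clamp are ours):

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def diffuse_term(Ii, Kd, s_unit, n_unit):
        # Id = Ii * Kd * cos(theta); for unit vectors cos(theta) = s . n,
        # clamped to zero for points facing away from the light
        return Ii * Kd * max(0.0, dot(s_unit, n_unit))

    # Light hitting the surface head-on: Id = Ii * Kd
    print(diffuse_term(1.0, 0.8, (0, 1, 0), (0, 1, 0)))  # 0.8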

Specular reflection

Smooth or polished surfaces can be simulated by combining a diffuse reflection with a specular highlight. The highlight is a reflection of the light source into the observer's field of view, so its visibility depends upon the position of the observer relative to the surface. R is the reflected ray, L is the incident ray, n is the surface normal, and V is the vector along the direction of the observer.

[Figure: incident ray L, surface normal n, reflected ray R, and viewing vector V at a surface point]

Ω is the angle between the direction of the observer and the direction of reflection; it is also called the error angle. θ is the angle of incidence, which is equal to the angle of reflection. The specular term Is is given by

    Is = Ii Ks cos^g(Ω)

where Ii is the intensity of the light source, Ks is the specular reflection coefficient of the surface, and g is the specular reflection parameter, which is determined by the type of surface:

For very shiny surfaces g is larger
For dull surfaces g is smaller
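A minimal sketch of the specular term, assuming R and V are unit vectors, which also shows the effect of g (the names are ours):

    import math

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def specular_term(Ii, Ks, r_unit, v_unit, g):
        # Is = Ii * Ks * cos(Omega)^g, where Omega is the angle between the
        # reflection direction R and the viewing direction V
        return Ii * Ks * max(0.0, dot(r_unit, v_unit)) ** g

    # At a 10-degree error angle, a shiny surface (large g) falls off much
    # faster than a dull one (small g)
    r = (0.0, 1.0, 0.0)
    v = (math.sin(math.radians(10)), math.cos(math.radians(10)), 0.0)
    print(specular_term(1.0, 1.0, r, v, 5))   # ~0.93 (dull)
    print(specular_term(1.0, 1.0, r, v, 50))  # ~0.46 (shiny)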

The complete reflection expression

This simple reflection model has three elements, ambient, diffuse, and specular, and for each of them there is a red, a green, and a blue component:

    I = Iambient + Idiffuse + Ispecular
    I = Ia Ka + [Ii Kd (L · N) + Ii Ks cos^g(Ω)]
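Putting the three terms together, here is a sketch of the complete expression for one colour channel; in practice it is evaluated once each for red, green, and blue (all vectors are assumed to be unit length, and the names are ours):

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def total_intensity(Ia, Ka, Ii, Kd, Ks, g, L, N, R, V):
        # I = Ia*Ka + [Ii*Kd*(L . N) + Ii*Ks*cos(Omega)^g]
        ambient = Ia * Ka
        diffuse = Ii * Kd * max(0.0, dot(L, N))
        specular = Ii * Ks * max(0.0, dot(R, V)) ** g
        return ambient + diffuse + specular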

Shading Algorithms

Flat shading
Gouraud shading
Smooth shading
Phong shading

Flat shading

The fastest and simplest method of shading is flat shading, also known as constant shading. In this method the illumination model is applied to each object only once, and the entire polygon is then displayed with a single intensity value.

Gouraud shading

It interpolates light intensities across a polygon using key values taken at its vertices

E.g. the following figure shows three vertices A(Xa, Ya), B(Xb, Yb), and C(Xc, Yc), which have intensities Ia, Ib, and Ic respectively. A scan line Ys cuts the polygon at the edge points (XL, Ys) and (XR, Ys).

[Figure: triangle A(Xa, Ya), B(Xb, Yb), C(Xc, Yc) crossed by a scan line Ys, with edge intensities IL(XL, Ys) and IR(XR, Ys) and interior point Is(Xs, Ys)]

The intensity at the point (XL, Ys) is given by

    IL = [Ia (Ys - Yc) + Ic (Ya - Ys)] / (Ya - Yc)

Repeat the same for IR, and so on.

What is the intensity at the point Is(Xs, Ys)?

The intensity Is at a point (Xs, Ys) on the scan line Ys is given by the expression

    Is = [IL (XR - Xs) + IR (Xs - XL)] / (XR - XL)
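A small sketch of the two interpolation steps, with argument names following the figure:

    def edge_intensity(Ia, Ic, Ya, Yc, Ys):
        # Intensity where the scan line Ys cuts edge A-C:
        # IL = [Ia*(Ys - Yc) + Ic*(Ya - Ys)] / (Ya - Yc)
        return (Ia * (Ys - Yc) + Ic * (Ya - Ys)) / (Ya - Yc)

    def span_intensity(IL, IR, XL, XR, Xs):
        # Intensity at (Xs, Ys) between the two edge intersections:
        # Is = [IL*(XR - Xs) + IR*(Xs - XL)] / (XR - XL)
        return (IL * (XR - Xs) + IR * (Xs - XL)) / (XR - XL)

    # Halfway along the edge, then halfway across the span
    print(edge_intensity(1.0, 0.0, 10, 0, 5))  # 0.5
    print(span_intensity(0.5, 1.0, 0, 10, 5))  # 0.75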

Smooth shading

Flat shading creates a faceted (monotonous) view of the object. If the normals used in the illumination calculations are replaced by averaged normals, the result is smooth shading. The nature of these averaged normals can be understood from the following diagram.

[Figure: a faceted surface with per-face normals, and the same surface with averaged normals at the shared vertices]

We could interpolate the colours across the vertices to smooth out the effect, but what is the mathematically correct normal at a vertex? An approximation is the normalized average of the normals of the adjacent faces:

Approximate normal = sum of normals / number of normals

    n = (n1 + n2 + n3 + n4) / |n1 + n2 + n3 + n4|
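A sketch of this averaging for a vertex shared by several faces (the function name and tuple representation are ours):

    def vertex_normal(face_normals):
        # Sum the normals of the faces sharing the vertex, then normalize
        sx = sum(n[0] for n in face_normals)
        sy = sum(n[1] for n in face_normals)
        sz = sum(n[2] for n in face_normals)
        length = (sx * sx + sy * sy + sz * sz) ** 0.5
        return (sx / length, sy / length, sz / length)

    # Two faces meeting at a right angle average to a 45-degree normal
    print(vertex_normal([(1, 0, 0), (0, 1, 0)]))  # (~0.707, ~0.707, 0.0)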

Phong shading

Phong shading uses the same linear interpolation algorithm, but interpolates surface normals rather than light intensities.

E.g. the following figure shows three vertices A(Xa, Ya), B(Xb, Yb), and C(Xc, Yc), which have normals Na, Nb, and Nc respectively. The normal at the point (XL, Ys) is given by

    NL = [Na (Ys - Yc) + Nc (Ya - Ys)] / (Ya - Yc)

The normal at the point (XR, Ys) is given by

    NR = [Na (Ys - Yb) + Nb (Ya - Ys)] / (Ya - Yb)

The normal Ns at a point (Xs, Ys) on the scan line Ys is given by the expression

    Ns = [NL (XR - Xs) + NR (Xs - XL)] / (XR - XL)

Once the normals are found, the surface intensity at each point is determined by applying the illumination model.

[Figure: triangle A(Xa, Ya), B(Xb, Yb), C(Xc, Yc) crossed by a scan line Ys, with interpolated edge normals NL(XL, Ys) and NR(XR, Ys) and interior point (Xs, Ys)]
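A sketch of the per-component interpolation along an edge; note that the interpolated normal is re-normalized before it is fed to the illumination model (the helper name and the re-normalization step are ours):

    def lerp_edge_normal(Na, Nc, Ya, Yc, Ys):
        # NL = [Na*(Ys - Yc) + Nc*(Ya - Ys)] / (Ya - Yc), per component
        w = (Ys - Yc) / (Ya - Yc)
        n = tuple(w * a + (1.0 - w) * c for a, c in zip(Na, Nc))
        # Interpolated normals are generally shorter than unit length,
        # so re-normalize before applying the illumination model
        length = sum(c * c for c in n) ** 0.5
        return tuple(c / length for c in n)

    print(lerp_edge_normal((1, 0, 0), (0, 1, 0), 10, 0, 5))  # (~0.707, ~0.707, 0.0)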

Hidden surface removal

When objects are very close to the viewer it is easy to distinguish between two objects, but when they are far away their z coordinates become similar.

Given a set of 3D objects and a viewing specification, we wish to determine which lines or surfaces of the objects are visible, so that we can display only those visible lines or surfaces. This process is called hidden surface elimination or visible surface determination. The algorithms used for this process are classified into two categories:

Object space methods
Image space methods

Object space methods

It is implemented in the physical coordinate system in which the object is described. It compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, are visible.

Image space methods

It is implemented in the screen coordinate system in which the object is viewed. Visibility is decided point by point, at each pixel position on the view plane. Most visible-surface algorithms fall into this category.
Techniques for efficient visible-surface algorithms

Coherence means similarity. In this technique, coherence is checked between the projected object (the image) and the real-world object. Coherence is checked in terms of colour, depth, texture, and so on.

Backface Culling/Backface Removal


A simple way to perform hidden surface removal is to remove all "backfacing" polygons. The observation is that if a polygon's normal faces away from the viewer then the polygon is backfacing. For solid objects, this means the polygon will not be seen by the viewer.

Thus, if N · V > 0, then cull the polygon. Note that V is the vector from the eye to a point on the polygon.
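A sketch of the test, assuming each polygon carries its outward normal and one of its vertices (the representation is ours):

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def is_backfacing(normal, vertex, eye):
        # V is the vector from the eye to a point on the polygon;
        # the polygon is back-facing when N . V > 0
        V = tuple(p - e for p, e in zip(vertex, eye))
        return dot(normal, V) > 0

    # A polygon whose normal points back towards the eye is front-facing
    print(is_backfacing((0, 0, 1), (0, 0, -5), (0, 0, 0)))  # False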

Backface Culling: Not a complete solution


If objects are not convex, we need to do more work. If polygons are two sided (i.e., they do not enclose a volume) then we can't use it. It is a HUGE speed advantage if we can use it, since the test is cheap and we expect at least half the polygons to be discarded. It is usually performed in conjunction with a more complete hidden surface algorithm. It is easy to integrate into hardware (and usually improves performance by a factor of 2).

Hidden surface removal Algorithms

Hidden surface removal techniques have been developed that take into account the partial intersection of objects and transparent objects:

The painter's algorithm
The scan line algorithm
The z-buffer algorithm

The painter's algorithm


It sorts the surfaces within the VO's field of view into depth sequence. Once the list is established, it renders the surfaces starting with the most distant and finishing with the nearest. This ensures that nearer surfaces mask more distant ones. Disadvantage: it leads to aliasing. Aliasing: when the raster locations of the pixels do not match the true line, we have to select the optimum raster locations to represent the straight line; this leads to a stair-step effect, which is known as aliasing.

The algorithm gets its name from the manner in which an oil painting is created. The artist begins with the background, then adds the most distant objects, and then the nearer ones. There is no need to erase portions of the background; the artist simply paints on top of them.

Idea: draw polygons as an oil painter might, the farthest one first. Sort the polygons on their farthest z. Resolve ambiguities where the z extents overlap. Scan convert from largest z to smallest z. Since the closest polygon is drawn last, it will be on top (and therefore it will be seen). We need all the polygons at once in order to sort.
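A sketch of the basic sort-and-draw loop, assuming a larger z means farther from the viewer and ignoring the ambiguous overlap cases (the polygon representation is ours):

    def painters_algorithm(polygons, draw):
        # polygons: lists of (x, y, z) vertices; draw scan converts one polygon.
        # Sort on the farthest z of each polygon and draw from farthest to
        # nearest, so the closest polygon is drawn last and ends up on top.
        for poly in sorted(polygons, key=lambda p: max(v[2] for v in p),
                           reverse=True):
            draw(poly)

    # The far triangle (z = 9) is printed, i.e. "painted", first
    painters_algorithm([[(0, 0, 1), (1, 0, 1), (0, 1, 1)],
                        [(0, 0, 9), (1, 0, 9), (0, 1, 9)]], print)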

Scan Line algorithm

It renders the image on a line-by-line basis, normally starting with the topmost raster line and working down to the bottom of the image. The algorithm examines the list of intersected objects and, using depth data, proceeds to render the individual portions of the objects.


Scan line 1 intersects only the yellow surface, so the flag for only the yellow surface is set. Scan line 2 intersects both the yellow and the blue surface, so the flags for both surfaces are set; here depth calculations are necessary. The flag for the nearer surface is kept on and the flag for the more distant surface is turned off, and only the information for the 'on' surface is written to the scan line.

[Figure: two overlapping surfaces (yellow and blue) crossed by scan lines 1 and 2]
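A much simplified sketch of one scan line, with each surface reduced to a coverage test, a depth, and a colour; the Surface class and the smaller-depth-is-nearer convention are our assumptions:

    class Surface:
        def __init__(self, covers, depth, colour):
            self.covers = covers  # (x, y) -> bool: surface covers this pixel?
            self.depth = depth    # (x, y) -> float: distance from the viewer
            self.colour = colour  # displayed colour value

    def render_scan_line(y, surfaces, width, background="."):
        row = [background] * width
        for x in range(width):
            # "Set the flag" for every surface the scan line intersects here
            active = [s for s in surfaces if s.covers(x, y)]
            if active:
                # Depth calculation only where surfaces overlap: keep the nearest
                row[x] = min(active, key=lambda s: s.depth(x, y)).colour
        return "".join(row)

    yellow = Surface(lambda x, y: 2 <= x <= 8, lambda x, y: 5.0, "Y")
    blue = Surface(lambda x, y: 6 <= x <= 12, lambda x, y: 3.0, "B")
    print(render_scan_line(0, [yellow, blue], 14))  # ..YYYYBBBBBBB.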

Z buffer algorithm

The z-buffer algorithm dispenses with sorting by introducing a depth buffer that always maintains the depth of the nearest surface rendered into each pixel. Before rendering begins, the depth buffer is primed with a value equal to the depth of the viewing plane; this ensures that anything beyond this plane is not visible. When the first object is rendered, its depth is computed at every pixel it affects, this information is stored in the depth buffer, and the intensity values at those pixel positions are stored in the frame store. When the next object is rendered, its depth value is compared with the stored value; if the new point is nearer to the viewer, both the depth buffer and the frame store are overwritten. This continues until all objects in the scene are rendered.

In the diagram we have two cubes. If object B is rendered first, the depth buffer and frame store are updated with its depth and intensity values. When object A is rendered, its depth values are compared with the previously stored values. So in this scene, once the z values are compared, object B is not visible and object A is visible.
[Figure: two cubes A and B along the z axis, with A nearer to the eye position]

Initialize all depth(x, y) to 0 and refresh(x, y) to the background. Here depth(x, y) stores the z value at the pixel and refresh(x, y) stores the intensity value at the pixel. For each pixel, compare the evaluated depth value z with the current depth(x, y):

    if z > depth(x, y) then
        depth(x, y) = z
        refresh(x, y) = Isurface(x, y)

After processing all the surfaces, the z-buffer, i.e. depth(x, y), contains the depth values of the visible surfaces and the frame buffer, i.e. refresh(x, y), contains their colour intensity values.
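A sketch of these buffer updates, with each surface reduced to the (x, y, z, intensity) samples it would rasterize to; that representation, and the pseudocode's larger-z-is-nearer convention, are assumptions:

    def z_buffer_render(surfaces, width, height, background=0):
        # depth(x, y) primed to 0; refresh(x, y) primed to the background
        depth = [[0.0] * width for _ in range(height)]
        refresh = [[background] * width for _ in range(height)]
        for surface in surfaces:
            for x, y, z, intensity in surface:
                # Overwrite only when this point is nearer (larger z) than
                # whatever has already been rendered into this pixel
                if z > depth[y][x]:
                    depth[y][x] = z
                    refresh[y][x] = intensity
        return refresh

    # Two one-pixel "surfaces" fighting over pixel (0, 0): the nearer wins
    B = [(0, 0, 0.3, "B")]
    A = [(0, 0, 0.7, "A")]
    print(z_buffer_render([B, A], 1, 1))  # [['A']]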

[Figure: polygons A and B viewed along the z axis, at depths z1 and z2]

So for the diagram, which polygon will be rendered, i.e. visible: A or B? Since z2 is the depth of B, z1 is the depth of A, and z1 > z2, polygon B will be visible.

Realism

Realism means bringing the virtual environment closer to the appearance of the real world. There are two techniques for increasing the realism of the virtual scene: the image content can be improved by incorporating real-world textures, atmospheric effects, shadows, and complex surface forms; and the displayed image can be kept free of any artifacts introduced by the rendering process.

Texture mapping

It enables synthetic or real-world images to be incorporated into a computer-generated image.
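A minimal sketch of the core lookup, mapping a surface's (u, v) coordinates in [0, 1] onto a stored image with nearest-texel sampling (the function name and conventions are ours):

    def sample_texture(texture, u, v):
        # texture: rows of texels; (u, v) addresses it left-to-right,
        # top-to-bottom; clamped at the edges
        h, w = len(texture), len(texture[0])
        x = min(int(u * w), w - 1)
        y = min(int(v * h), h - 1)
        return texture[y][x]

    checker = [[0, 1], [1, 0]]
    print(sample_texture(checker, 0.25, 0.25))  # 0 (top-left quadrant)
    print(sample_texture(checker, 0.75, 0.25))  # 1 (top-right quadrant)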
