
International Journal on Recent and Innovation Trends in Computing and Communication, Volume 3, Issue 4, ISSN: 2321-8169, pp. 2174-2179

A Study Paper on the Use of a System Camera for System Mouse Control

Nitin Sakhare 1, Pratyoosh Rai 2

1, 2 Department of Computer Science and Engineering, Bhabha Engineering Research Institute, Bhopal

1 nitinsakhare@live.com
2 Pratyoosh.rai@yahoo.co.in

Abstract: Human-computer interaction today places significant emphasis on developing more spontaneous and natural interfaces. The graphical user interface (GUI) on personal computers (PCs) is quite mature and well defined, and provides an efficient way for a user to interact with the computer and access various applications easily with the help of a mouse, track pad, and so on. In the present-day scenario, most mobile phones use touch-screen technology to interact with the user, but this technology is still not cheap enough to be used on desktops and laptops. As computer technology continues to develop, people use smaller and smaller electronic devices. Increasingly, we recognize the importance of human-computer interaction (HCI), and in particular vision-based gesture and object recognition. In this paper, we propose a novel approach that uses a video device to control the mouse system (mouse tasks). We studied several image processing algorithms from different papers to implement this work.

The computer interaction is carried out using a finger to drive mouse cursor movement, with the help of image recognition and color recognition. Here, image processing is done through Microsoft .NET for the color and image recognition process. In order to demonstrate the capability of mouse movement, the majority color is detected in a video and a corresponding video is played from the database. There are many papers available on this kind of study, but there is not even one fully successful implementation of this kind of system.

Keywords: EmguCV, OpenCV, image filtering, convexity defects, noise filtering, human-computer interaction (HCI), image thresholding, HSV color quantization, tracking


I. INTRODUCTION

Numerous researchers in the human-computer interaction and robotics fields have tried to control mouse movement using video devices. However, each of them used a different method to generate a clicking event. One approach, by Erdem et al., used fingertip tracking to control the motion of the mouse. A click of the mouse button was implemented by defining a screen region such that a click occurred when the user's hand passed over that region [2]. Another approach was developed by Chu-Feng Lien [3]. He used only the fingertips to control the mouse cursor and click. His clicking method was based on image density, and required the user to hold the mouse cursor on the desired spot for a short period of time. Paul et al. used yet another method to click: the motion of the thumb (from a 'thumbs-up' position to a fist) marked a clicking event, while movement of the hand while making a special hand sign moved the mouse pointer [4].

"A Vision based Application for Virtual Mouse Interface Using Finger-Tip", this si anather exploration paper composed by K S Chidanand Kumar ,E&C Department, National Institute of Technology ,Surathkal, Karnataka, India. This paper exhibited a dream based virtual mouse

interface utilizing a solitary cam. A virtual board which is a rectangular board with its outskirts shaded by red shading and its four corners shaded by blue shading are extricated utilizing HSV shading quantization. In back to back edges of a feature, four corners are followed utilizing shading based histogram tracker. The four sides of rectangular board are mapped to PC screen utilizing Projective mapping. The position and bearing of fingertip on virtual board is assessed utilizing circle fitting. The framework recognizes click occasions by a V formed signal perceived utilizing Fourier descriptor and Nonlinear SVM [8].

Similarly, in this study, we attempt to control mouse cursor movement and click events using a camera, based on a color detection technique. Real-time video is captured using a web camera. The user wears colored tapes to provide information to the system. Individual frames of the video are processed separately [5]. The processing steps include an image subtraction algorithm to detect the colors [7]. Once the colors are detected, the system performs various operations to track the cursor and carry out control actions, the details of which are given below [1].
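As a simple illustration of the final step (driving the operating-system cursor from a tracked point), the following Python sketch maps a marker position from camera coordinates to screen coordinates and moves the Windows cursor through the Win32 SetCursorPos call. It is not the authors' .NET implementation, and the resolutions used are assumed values.

```python
# Illustrative sketch only: maps a tracked marker position from camera
# coordinates to screen coordinates and moves the Windows mouse cursor.
# The camera/screen resolutions below are assumed values.
import ctypes

CAM_W, CAM_H = 640, 480           # assumed capture resolution
SCREEN_W, SCREEN_H = 1920, 1080   # assumed display resolution

def move_cursor(cam_x, cam_y):
    """Scale a point from camera space to screen space and move the cursor."""
    screen_x = int(cam_x * SCREEN_W / CAM_W)
    screen_y = int(cam_y * SCREEN_H / CAM_H)
    # Win32 call; available on Windows through ctypes.
    ctypes.windll.user32.SetCursorPos(screen_x, screen_y)

# Example: marker detected near the centre of the camera frame.
move_cursor(320, 240)
```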


Figure 1.1 System architecture overview of the vision-based system [8]

In an object tracking application, one of the main problems is object detection. Instead of fingertips, a color pointer has been used to make object detection easy and fast. A circular blue sticker is used as the color pointer in this study. To simulate the click events of the mouse, three fingers with three color pointers have been used.

The basic algorithm is as follows [2],[3] (an illustrative sketch is given after the list):

1. Set a pointer in the image.
2. Detect the pointer using the defined color information.
3. Define the region and the center of the pointer and draw a bounding box around it.
4. Track the motion of the pointer.
5. Move the cursor according to the position of the center of the pointer.
6. Simulate the single and double left click and the right click of the mouse.
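The following Python/OpenCV sketch illustrates steps 2 and 3 of this algorithm for a single frame. It is illustrative only (the systems surveyed use MATLAB or .NET), and the HSV range for the blue sticker is an assumed value that would need tuning.

```python
# Minimal per-frame sketch of the pointer-detection steps above, written
# with OpenCV in Python for illustration (the surveyed systems use
# MATLAB / .NET).  The HSV range for the blue sticker is an assumed value.
import cv2
import numpy as np

def find_pointer(frame_bgr):
    """Return (bounding box, centre) of the largest blue region, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Assumed HSV range for a blue marker; must be tuned per camera/lighting.
    mask = cv2.inRange(hsv, (100, 120, 70), (130, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)       # bounding box around pointer
    centre = (x + w // 2, y + h // 2)            # centre used to drive cursor
    return (x, y, w, h), centre
```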

The vision-based system setup basically involves a camera mounted on the ceiling behind the user and away from the computer. A quadrangular white cardboard sheet, with its border colored red and its four corners marked with dark blue circular patches, is used as a virtual panel. The virtual panel is extracted using an HSV color quantization scheme, and the four sides of the virtual panel are tracked using a histogram-based color tracker. Users can use their fingertip as a mouse on the virtual panel to simulate a cursor for the remote display. A homography maps a point on the virtual panel to the remote display. A fingertip detector identifies the position and orientation of the fingertip on the virtual panel. The current system implements pressing/clicking events by holding a particular gesture for some time, which is recognized using a gesture recognition module [8].
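The panel-to-display mapping described here is a planar homography. The sketch below (Python/OpenCV, not the implementation of [8]) shows how the four tracked panel corners can define such a mapping and how a fingertip point is then projected onto the display; all coordinates are assumed example values.

```python
# Illustrative homography sketch: map the four tracked corners of the
# virtual panel to the corners of the remote display, then project a
# fingertip point onto the display.  Corner coordinates are assumed values.
import cv2
import numpy as np

# Panel corners as detected in the camera image (assumed example values).
panel_corners = np.float32([[112, 80], [520, 95], [540, 400], [100, 390]])
# Corresponding display corners for a 1920x1080 screen (assumed resolution).
display_corners = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])

H = cv2.getPerspectiveTransform(panel_corners, display_corners)

# Project a fingertip position detected on the panel onto the display.
fingertip = np.float32([[[300, 240]]])            # shape (1, 1, 2) as cv2 expects
screen_pt = cv2.perspectiveTransform(fingertip, H)[0, 0]
print("fingertip maps to display point", screen_pt)
```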

Our system and its processing are based on the contour of the hand and fore-arm [4]. In our system, we assume that the input shape is the contour of the hand and fore-arm; we do not perform any hand or arm recognition. The initial input is the RGB image captured by a web camera. We perform some preprocessing to produce a binary image which gives enough information about the hand contour. The binary image is then used to compute the contour and the convex structure of the shape [4].
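A minimal sketch of the contour and convex-hull computation, assuming a binary hand mask has already been produced by the preprocessing step (Python/OpenCV used for illustration):

```python
# Sketch of the contour / convex-hull step, assuming a binary hand mask has
# already been produced by the preprocessing stage (OpenCV in Python).
import cv2
import numpy as np

def hand_contour_and_hull(binary_mask):
    """Return the largest contour in the mask and its convex hull, or None."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)   # assume largest blob = hand
    hull = cv2.convexHull(contour)                 # convex structure of the contour
    return contour, hull
```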

The palm position can be initially estimated from the information extracted from the convex hull, and the fingertip positions can be detected from the hand contour. The system then makes some adjustment to the initial estimate to make the results more accurate. The computation cost of our system is low, since only geometric calculations are performed; there is no need to apply a classifier and no pre-trained database is required.

II. COMPARATIVE STUDY OF ALGORITHMS

In the vision-based system, an arbitrary quadrangular planar object, such as a piece of cardboard, is used as the panel. Each side of the quadrangular object is bordered with red, and its four corners are marked with circular blue patches. The extraction of the virtual panel is mainly done using an HSV color quantization scheme [8], as shown below.


Figure 2.1 Hue panel quantization


Figure 2.2 Hue panel quantization equation

As a result, we can quantize any color into 36 bins. The results of virtual panel extraction and its four corners are shown in Figure 2.3 (b) and (c) below, from the input image in Figure 2.3 (a).


Figure 2.3 Extracted image from an input image
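The exact quantization equation of Figure 2.2 is not reproduced here. As a hedged illustration of a 36-bin scheme, the sketch below simply divides the OpenCV hue range [0, 179] into 36 equal bins; this is an assumed simplification, not necessarily the exact scheme of [8].

```python
# Hedged illustration of a 36-bin colour quantization; the exact scheme in
# Figure 2.2 is not reproduced here.  OpenCV stores hue in [0, 179], so
# dividing hue by 5 yields 36 bins (an assumed, simplified scheme).
import cv2
import numpy as np

def quantize_hue_36(frame_bgr):
    """Return a per-pixel bin index in [0, 35] based on hue alone."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].astype(np.int32)        # hue channel, 0..179
    return hue // 5                            # 180 / 5 = 36 bins

# The red panel border and blue corner patches then correspond to a small,
# known subset of bin indices, which makes the panel easy to segment.
```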

By Erdem et al., to detect the blue color of the pointer, MATLAB's built-in "imsubtract" function has been used. The imsubtract function is used as Z = imsubtract(X, Y) [2], where it subtracts each element in array Y from the corresponding element in array X and returns the difference in the corresponding element of the output array Z. X and Y are real, non-sparse numeric arrays of the same size and class, or Y is a double scalar. The array returned, Z, has the same size and class as X, unless X is logical, in which case Z is double [2],[3].

 

Figure 2.4 (a) Input, (b) after using the imsubtract function.
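An OpenCV/NumPy analogue of this subtraction-based detection is sketched below for readers without MATLAB; the threshold value is an assumption.

```python
# Sketch of the subtraction-based colour detection, as an OpenCV/NumPy
# analogue of MATLAB's imsubtract (illustration only; threshold assumed).
import cv2
import numpy as np

def blue_pointer_mask(frame_bgr, thresh=50):
    """Isolate strongly blue pixels by subtracting grey intensity from blue."""
    blue = frame_bgr[:, :, 0]                            # OpenCV stores BGR
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.subtract(blue, grey)                      # saturating subtraction
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask
```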

Our proposed framework is divided into three main parts:

1. Image filtering to obtain the largest skin-colored contour.
2. Finding convexity defects to count the number of fingers, for mouse click events.
3. Finding the center of the palm and doing basic noise filtering to move the cursor position.

First of all, we need to use some function or algorithm to extract the color information from the image. In .NET the YCbCr filter provides this kind of functionality, so the YCbCr filter is applied with the required thresholds to obtain the skin-colored parts of the image. The YCbCr filter is an in-built .NET function basically used to extract image color, and from the resulting image the largest contour is extracted.
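A hedged sketch of this skin-extraction step is shown below using Python/OpenCV rather than the .NET filter; the Cb/Cr ranges are commonly quoted illustrative values, not the thresholds used in our implementation.

```python
# Sketch of skin segmentation in YCbCr space (OpenCV in Python).  The Cb/Cr
# ranges below are commonly used illustrative values, not the paper's own.
import cv2
import numpy as np

def skin_mask_ycbcr(frame_bgr):
    """Binary mask of pixels whose Cb/Cr values fall in an assumed skin range."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)  # OpenCV order: Y, Cr, Cb
    lower = np.array([0, 133, 77], dtype=np.uint8)        # Y, Cr, Cb lower bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)     # Y, Cr, Cb upper bounds
    mask = cv2.inRange(ycrcb, lower, upper)
    # Light morphological clean-up before contour extraction.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```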

Now, to determine the number of fingers, we use a very well-known technique called convexity defects. This technique finds all of the defects in our contour, which tells us the number of fingers in the image. With the finger count known, we perform mouse click events with it. For the user to click the mouse left button, he/she opens his/her palm; when the finger count becomes greater than four, the mouse left button is clicked.
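The finger-counting idea can be sketched as follows with OpenCV's convexity-defect support; the depth and angle thresholds that decide whether a defect is a valley between fingers are assumptions, not the values used in the paper.

```python
# Sketch of finger counting with convexity defects (OpenCV in Python).
# The depth and angle thresholds are assumed illustrative values.
import cv2
import numpy as np

def count_fingers(contour):
    """Estimate finger count from convexity defects of the hand contour."""
    hull_idx = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull_idx)
    if defects is None:
        return 0
    valleys = 0
    for start, end, far, depth in defects[:, 0]:
        s, e, f = contour[start][0], contour[end][0], contour[far][0]
        a, b = np.linalg.norm(s - f), np.linalg.norm(e - f)
        c = np.linalg.norm(s - e)
        cos_angle = np.clip((a**2 + b**2 - c**2) / (2 * a * b + 1e-6), -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_angle))
        # A deep, narrow defect is treated as the valley between two fingers.
        if depth > 10000 and angle < 90:
            valleys += 1
    return valleys + 1 if valleys > 0 else 0   # n valleys ~ n+1 extended fingers
```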

Finally, we need to locate the center of our contour, using moments of the contour. While obtaining the coordinates of the center, there is a possibility of a lot of fluctuation in the center of the contour, as the hand continuously flickers. To handle this, we can divide the coordinates of the center by 10 to remove their units digit, since there is no fluctuation in the tens digit of the coordinates. We may also face another problem: the center will move when the palm is opened and closed. To handle this, we can add the condition that the mouse cursor position may be moved only if the palm is closed, i.e., the finger count is zero; otherwise the mouse remains where it is.
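An illustrative sketch of the center computation with contour moments, the divide-by-10 stabilization, and the palm-closed condition described above (Python/OpenCV; how the quantized value is fed back to the cursor is an assumption):

```python
# Sketch of palm-centre computation with contour moments, plus the
# divide-by-10 stabilisation described above (OpenCV in Python).
import cv2

def stable_palm_centre(contour):
    """Centre of the contour, quantised to suppress small frame-to-frame jitter."""
    m = cv2.moments(contour)
    if m["m00"] == 0:
        return None
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    # Dropping the units digit keeps the centre fixed under small flicker.
    return (cx // 10) * 10, (cy // 10) * 10

def update_cursor(contour, finger_count, move_fn):
    """Move the cursor only when the palm is closed (finger count is zero)."""
    if finger_count == 0:
        centre = stable_palm_centre(contour)
        if centre is not None:
            move_fn(*centre)   # e.g. the move_cursor() sketch from the introduction
```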

III. SOFTWARE

A vision-based system can be implemented in any language; we can use .NET, VB, C, or C++, and the first method was implemented in MATLAB. There are several functions of MATLAB used to obtain the output image [2],[3],[8].

In our system we want to use run-time image processing in a .NET computing environment. The image processing is a .NET based software framework that uses EmguCV, an open-source library and .NET wrapper for OpenCV, to perform image processing once the input frame is taken from an infrared web camera. After having the image in software, the code will find the hand based on a skin detection algorithm. If nothing is found, the system stays idle, but if a hand region is detected, the system starts executing the gesture recognition algorithm on the image to recognize the gesture.
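A minimal capture-and-process loop corresponding to this flow is sketched below in Python/OpenCV for illustration; the paper's implementation uses EmguCV under .NET, and skin_mask_ycbcr, count_fingers, update_cursor and move_cursor refer to the earlier sketches in this paper.

```python
# Minimal frame loop tying the earlier sketches together (illustration only;
# the paper's implementation uses EmguCV under .NET).
import cv2

cap = cv2.VideoCapture(0)                 # default webcam; device index assumed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = skin_mask_ycbcr(frame)         # skin detection sketch above
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:                          # hand found: run gesture recognition
        hand = max(contours, key=cv2.contourArea)
        fingers = count_fingers(hand)     # convexity-defect sketch above
        update_cursor(hand, fingers, move_cursor)
    cv2.imshow("skin mask", mask)
    if cv2.waitKey(1) & 0xFF == 27:       # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```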


Figure 3 The output of the extracted image of the approached system.

IV. CONCLUSION

The proposed vision-based method presents a vision-based application for a virtual mouse interface using the fingertip. The virtual panel, which represents the remote display, is first extracted using HSV color quantization. The four corners of the virtual panel are used to map the four sides of the virtual panel to the four sides of the remote display. The fingertip detector module detects the fingertip position and orientation using circle fitting. Finally, click/pressing events are simulated by a V-shaped gesture recognized using Fourier descriptors and a Support Vector Machine. Future work on this project includes making the fingertip detector module invariant to illumination changes, and 3D pose estimation of the panel, which can be used for augmented reality with 3D objects [8].

In the first study, an object-tracking-based virtual mouse application has been developed and implemented using a webcam. The system was implemented in the MATLAB environment using the MATLAB Image Processing Toolbox. A blue sticker is used as the object to make detection easy and fast. Object detection and motion tracking worked very well. Using the pointer to move the cursor and to simulate the mouse click events also worked well. However, the system has some drawbacks, such as being invariant to illumination only up to some scale, and the movement of the cursor being very sensitive to motion. For this reason, the pointer cannot be used on the fly to control the cursor efficiently [2],[3].

In our system, we can obtain an accurate estimation of the palm and fingertip positions based on a hand contour. The fore-arm can be included in the contour, and the system has a good tolerance to rotated and tilted hands. When the usable image is generated, we apply Pavlidis' algorithm [4] to search for the contours in the image. One or more contours may be found; we pick the largest contour as our hand contour. When the hand contour has been chosen, we need to compute the convex hull of it [4]. The three-coins algorithm [4] is applied to obtain the convex hull [8].

 

REFERENCES

[1] Kamran Niyazi, Vikram Kumar, Swapnil Mahe, Swapnil Vyawahare, "Mouse Simulation Using Two Coloured Tapes", Department of Computer Science, University of Pune, India, International Journal of Information Sciences and Techniques (IJIST), Vol. 2, No. 2, March 2012.
[2] A. Erdem, E. Yardimci, Y. Atalay, V. Cetin, A. E., "Computer vision based mouse", Acoustics, Speech, and Signal Processing, 2002, Proceedings (ICASSP), IEEE International Conference.
[3] Chu-Feng Lien, "Portable Vision-Based HCI: A Real-time Hand Mouse System on Handheld Devices", National Taiwan University, Computer Science and Information Engineering Department.
[4] Xingfeng Wang, Kaihuai Qin, "A Six-degree-of-freedom Virtual Mouse based on Hand Gestures", 2010 International Conference on Electrical and Control Engineering, pp. 257-260.
[4] Hojoon Park, "A Method For Controlling The Mouse Movement using a Real Time Camera", 2008, Brown University, Providence, RI, USA, Department of Computer Science.
[5] Utpal V. Solanki, Nilesh H. Desai, "Hand gesture based remote control for home appliances: Hand mote", 2011 World Congress on Information and Communication Technologies, IEEE, pp. 425-429.
[6] Gary Bradski, Adrian Kaehler, Learning OpenCV, O'Reilly Media, 2008.
[7] Asanterabi Malima, Erol Ozgur, and Mujdat Cetin, "A Fast Algorithm for Vision-Based Hand Gesture Recognition for Robot Control".
[8] K S Chidanand Kumar, "A Vision based Application for Virtual Mouse Interface Using Finger-Tip", E&C Department, National Institute of Technology, Surathkal, Karnataka, India.
[9] Borgatti, M., Lertora, F., Foret, B., Cali, L., "A reconfigurable system featuring dynamically extensible embedded microprocessor, FPGA, and customizable I/O", IEEE Journal of Solid-State Circuits, Vol. 38(3): 521-529, Mar 2003.
[9] http://www.mathworks.com/matlabcentral/fileexchange/28757-tracking-red-color-objects-using-matlab
[10] http://www.mathworks.com/help/techdoc/
[11] http://www.mathworks.com/support/solutions/en/data/1-2X10AT/index.html?solution=1-2X10AT

