PROJECT PROPOSAL
BY
Ajeigbe Saheed
Olasunkanmi
(15/68HE008)
DEPARTMENT OF COMPUTER
SCIENCE
UNIVERSITY OF ILORIN
ILORIN, NIGERIA
March 2016
Final Year Project Proposal
CHAPTER ONE
INTRODUCTION
1.1 Background
Managing people is a difficult task for most organizations, and maintaining attendance records is an important part of people management. In academic institutions, taking student attendance on a daily basis and maintaining the records is a major task. Every academic institution has certain criteria for students regarding their attendance in class. In most institutions of higher learning, eligibility for examinations is based on fulfilment of a minimum lecture attendance requirement. It is therefore very important to keep accurate records of student attendance.
However, this academic policy has not been fully functional due to limitations posed by the classical attendance method currently in use. The usual practice is that students are given sheets of paper on which to write their names, matriculation numbers and signatures. This manual method of taking attendance is clearly not effective: attendance sheets become cumbersome and untidy as the student population increases, and the process is time consuming and wastes human and material resources. The stress of manually calculating student attendance rates has made it impossible to fully adopt percentage lecture attendance as a factor in authenticating student access to examination venues. A high level of impersonation also characterizes this method of attendance, as students can cheat by asking their friends to sign the attendance sheet for them.
Consequently, it is very difficult to manage attendance and determine whether each student meets the required lecture attendance. As a result of these flaws in the classical method of taking attendance, there is a need for a faster, easier, more accurate and more effective method of managing attendance.
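The eligibility calculation described above can be sketched in a few lines. The 75% threshold below is an illustrative assumption, not a stated institutional policy:

```python
# Hypothetical illustration of the percentage-attendance check used to
# authenticate examination access. The 75% minimum is an assumption.

def attendance_rate(attended, held):
    """Return the percentage of held lectures that were attended."""
    if held == 0:
        return 0.0
    return 100.0 * attended / held

def is_eligible(attended, held, minimum=75.0):
    """A student qualifies for the examination if the rate meets the minimum."""
    return attendance_rate(attended, held) >= minimum

print(attendance_rate(18, 24))  # 75.0
print(is_eligible(18, 24))      # True
print(is_eligible(12, 24))      # False
```

Automating this check removes the manual calculation identified above as the main obstacle to enforcing the policy.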
Technological improvements have been useful tools in the development of new methods, such as barcode readers, Radio Frequency Identification (RFID) and Bluetooth systems. These tools, however, are expensive and of limited use.
Fingerprint recognition is the most popular and mature biometric system in use today. In addition to meeting the criteria for a good biometric system, fingerprint recognition systems perform well (that is, they are accurate, fast and robust), are publicly acceptable, and are hard to circumvent [1]. Among biometric traits, the fingerprint is widely accepted because of its uniqueness and immutability [2].
Fingerprint verification is a very convenient and reliable way to verify a person's identity. It is believed that no two people in the world have identical fingerprints, so fingerprint verification and identification is the most popular way to verify the authenticity or identity of a person. Of all the biometric technologies available for information security, systems based on fingerprint scanning and recognition appear the most appropriate. This method, in comparison with others, is cheaper, more convenient in day-to-day use, and is known to have a very low false acceptance rate [3].
In order to rectify these systematic failings in the traditional methods of taking attendance, this work seeks a paradigm shift from the methods referred to above by formulating and implementing a simplified and cost-effective model of a fingerprint-based method for managing the time and attendance of students. It has been proved over the years that every person's fingerprints are unique [4], so fingerprints can uniquely identify students. Before entering classrooms, student identities are verified through electronic fingerprint scanners that read the student's fingerprint and send the data to a PC. The PC, in turn, immediately sends the data, in the form of student information and an attendance record (course, time, etc.), to the server. This means no class time will be wasted.
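The scanner-to-PC-to-server flow described above can be sketched as follows. This is a minimal illustration under stated assumptions: the scan is modelled as raw bytes, the match() function is a stand-in for a real fingerprint matcher (real systems compare minutiae features, not hashes), and the server database is a plain list. All names are hypothetical, not a specific device API:

```python
# Toy sketch of the proposed attendance data flow:
# scanner -> match against enrolled templates -> record sent to server.

import hashlib
from datetime import datetime

ENROLLED = {  # matric number -> digest of the enrolled template
    "15/68HE008": hashlib.sha256(b"template-A").hexdigest(),
}
SERVER_LOG = []  # stand-in for the attendance database on the server

def match(scan):
    """Toy matcher: compare digests. Real matchers compare minutiae."""
    digest = hashlib.sha256(scan).hexdigest()
    for matric, enrolled in ENROLLED.items():
        if enrolled == digest:
            return matric
    return None

def record_attendance(scan, course):
    """Verify the scan and, if recognised, log an attendance record."""
    matric = match(scan)
    if matric is None:
        return False  # unrecognised fingerprint: nothing recorded
    SERVER_LOG.append({"matric": matric, "course": course,
                       "time": datetime.now().isoformat()})
    return True

record_attendance(b"template-A", "CSC 401")
print(SERVER_LOG[0]["matric"])  # 15/68HE008
```

Because the record carries the course and a timestamp, the percentage-attendance calculation can later be run directly on the server's log.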
1. To provide a more accurate and reliable user authentication method for the identification and tracking of staff.
CHAPTER TWO
LITERATURE REVIEW
2.2 History
The concept of speech recognition started in the 1940s; the first practical speech recognition program appeared in 1952 at Bell Labs, and it recognized digits in a noise-free environment.
The 1940s and 1950s are considered the foundational period of speech recognition technology. In this period, work was done on the foundational paradigms of speech recognition, namely automation and information-theoretic models.
In the 1960s, small vocabularies (on the order of 10-100 words) of isolated words could be recognized, based on simple acoustic-phonetic properties of speech sounds. The key technologies developed during this decade were filter banks and time-normalization methods.
In the 1970s, medium vocabularies (on the order of 100-1,000 words) were recognized using simple template-based pattern recognition methods.
In the 1980s, large vocabularies (1,000 words and beyond) were used, and speech recognition problems based on statistical methods, with a large range of networks for handling language structures, were addressed. The key inventions of this era were the hidden Markov model (HMM) and the stochastic language model, which together enabled powerful new methods for handling the continuous speech recognition problem efficiently and with high performance.
In the 1990s, the key technologies developed were methods for stochastic language understanding, statistical learning of acoustic and language models, and methods for implementing large-vocabulary speech understanding systems.
After five decades of research, speech recognition technology has finally entered the marketplace, benefiting users in a variety of ways. The challenge of designing a machine that truly functions like an intelligent human remains a major one going forward.
recognition engine to recognize speech. The software acoustic model breaks words into phonemes.
Language Model: Language modelling is used in many natural language processing applications; in speech recognition, it tries to capture the properties of a language and to predict the next word in the speech sequence. The software language model compares the phonemes to words in its built-in dictionary.
Speech Engine: The job of the speech recognition engine is to convert the input audio into text; to accomplish this, it uses many kinds of data, software algorithms and statistics. Its first operation is digitization, as discussed earlier: converting the audio into a format suitable for further processing. Once the audio signal is in the proper format, the engine searches for the best match among the words it knows; once the signal is recognized, it returns the corresponding text string.
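The dictionary-lookup step described above can be illustrated with a toy sketch: a recognised phoneme sequence is compared against a small built-in lexicon and the closest entry is returned as text. The phoneme spellings and the lexicon are invented for illustration; real engines use statistical models over far larger lexicons:

```python
# Toy illustration of matching a phoneme sequence to a word in a
# built-in dictionary, as the language-model step above describes.

from difflib import SequenceMatcher

LEXICON = {  # word -> phoneme sequence (hypothetical, ARPAbet-style)
    "speech": ["S", "P", "IY", "CH"],
    "speak":  ["S", "P", "IY", "K"],
    "each":   ["IY", "CH"],
}

def best_match(phonemes):
    """Return the lexicon word whose phonemes best match the input."""
    def score(word):
        return SequenceMatcher(None, LEXICON[word], phonemes).ratio()
    return max(LEXICON, key=score)

print(best_match(["S", "P", "IY", "CH"]))  # speech
print(best_match(["S", "P", "IY", "K"]))   # speak
```

A real engine would weight these candidates by acoustic and language-model probabilities rather than a simple similarity ratio.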
2.6 Applications
2.6.1 From a bank's perspective
People with disabilities can benefit from speech recognition programs. Speech recognition is especially useful for people who have difficulty using their hands; in such cases, speech recognition programs are very beneficial and can be used to operate an ATM. Speech recognition is also used in deaf telephony, such as voicemail-to-text.
requires help separating the speech sound from other sounds. A few factors to consider in this regard are:
Homonyms:
These are words that are spelled differently and have different meanings but sound the same, for example "there" and "their", or "be" and "bee". It is a challenge for a computer to distinguish between such words that sound alike.
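One common way engines resolve such ambiguity is with the language model: each candidate word is scored by how often it follows the preceding word in training text. The tiny corpus and counts below are invented purely for illustration:

```python
# Sketch of bigram-based disambiguation between words that sound alike:
# choose the candidate most often seen after the previous word.

from collections import Counter

corpus = "the bee flew to their hive and then the bee flew there".split()
bigrams = Counter(zip(corpus, corpus[1:]))  # counts of adjacent word pairs

def disambiguate(previous, candidates):
    """Choose the candidate word seen most often after `previous`."""
    return max(candidates, key=lambda w: bigrams[(previous, w)])

print(disambiguate("the", ["bee", "be"]))        # bee
print(disambiguate("flew", ["their", "there"]))  # there
```

Real systems use much larger corpora and smoothed probabilities, but the principle of letting context pick between acoustically identical words is the same.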
Overlapping speech:
A second challenge in the process is to understand speech uttered by different users; current systems have difficulty separating simultaneous speech from multiple users.
Noise factor:
The program needs to hear the words uttered by a human distinctly and clearly. Any extra sound can create interference, so the system should first be placed away from noisy environments and the user should speak clearly; otherwise the machine will become confused and mix up the words.
2.9.1 XVoice
XVoice is a dictation/continuous speech recognizer that can be used with a variety of X Window applications. This software is primarily for users.
2.9.2 ISIP
The Institute for Signal and Information Processing at Mississippi State
University has made its speech recognition engine available. The toolkit
includes a frontend, a decoder, and a training module. It's a functional toolkit.
This software is primarily for developers. The toolkit (and more information
about ISIP) is available at: http://www.isip.msstate.edu/project/speech
2.9.3 Ears
Although Ears isn't fully developed, it is a good starting point for programmers
wishing to start in ASR. This software is primarily for developers.
2.9.4 CMU Sphinx
Sphinx originally started at CMU and has recently been released as open source.
This is a fairly large program that includes a lot of tools and information. It is
still "in development", but includes trainers, recognizers, acoustic models,
language models, and some limited documentation. This software is primarily
for developers.
2.9.5 NICO ANN Toolkit
The NICO Artificial Neural Network toolkit is a flexible back-propagation neural network toolkit optimized for speech recognition applications. This software is primarily for developers.
CHAPTER THREE
METHODOLOGY AND TOOLS
From the point of view of application, there are two broad categories of research:
a. Basic research
b. Applied research
Basic research involves developing and testing theories and hypotheses that are intellectually challenging to the researcher but may or may not have practical application at present or in the future. The knowledge produced through basic research is sought in order to add to the existing body of knowledge.
Applied research is done to solve specific, practical questions, for policy formulation, administration and the understanding of a phenomenon. It can be exploratory, but is usually descriptive. Applied research can be carried out by academic or industrial institutions. This study used the applied research method.
Consequently, the emphasis in data analysis is that the study employs secondary sources of information (data). This means that the opinions and works of several writers and experts will be referred to and critically examined in order to reach some sort of contemporary consensus.
includes:
Controls to be incorporated within the program (for example, data validation)
Control of access to the new system
Recovery procedures, in case processing is lost
The user requirements identified in the problem definition list and the audit, security and control requirements mentioned above are consolidated.
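The data-validation control listed above can be illustrated with a small sketch: malformed input is rejected before it reaches the system. The matriculation-number pattern below is inferred from the format on the title page (e.g. 15/68HE008) and is an assumption, not a specification:

```python
# Hypothetical data-validation control: accept only matriculation
# numbers matching an assumed two-digit/two-digit-two-letter-three-digit
# format such as 15/68HE008.

import re

MATRIC_PATTERN = re.compile(r"^\d{2}/\d{2}[A-Z]{2}\d{3}$")

def valid_matric(matric):
    """Return True only for inputs matching the assumed matric format."""
    return MATRIC_PATTERN.fullmatch(matric) is not None

print(valid_matric("15/68HE008"))  # True
print(valid_matric("15-68HE008"))  # False
```

Similar pattern checks would apply to other fields (course codes, card numbers) before records are written.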
1. Theft of ATM cards and easy access to cash once the password is available.
2. The existing system is time consuming, which is not good for security consciousness.
3. The quality assurance of the existing ATM withdrawal system is not consistent and reliable.
4. The effect of all these problems is conflict between banks and customers.
It was gathered during the interview that security personnel are meant to keep consistent and accurate records of security matters, including reports of ATM usage by clients. However, it was also gathered that these records are sometimes inaccurate and inconsistent, as personnel are not able to track ATM fraud and criminals. Inconsistent reports will always lead to variations that affect bank records negatively, thereby defeating the aim of setting up such a system. This leads to the need for a more accurate system that would help keep consistent and accurate records.
3.7 PROPOSED WORK