
Latest IEEE Projects

S.No IEEE Project Titles Technology


1. MINING FILE DOWNLOADING TIME IN STOCHASTIC PEER TO PEER NETWORKS .NET

Abstract: On-demand routing protocols use route caches to make routing


decisions. Due to mobility, cached routes easily become stale. To address the
cache staleness issue, prior work in DSR used heuristics with ad hoc parameters to
predict the lifetime of a link or a route. However, heuristics cannot accurately
estimate timeouts because topology changes are unpredictable. In this paper, we
propose proactively disseminating the broken link information to the nodes that
have that link in their caches. We define a new cache structure called a cache table
and present a distributed cache update algorithm. Each node maintains in its cache
table the information necessary for cache updates. When a link failure is detected,
the algorithm notifies all reachable nodes that have cached the link in a distributed
manner. The algorithm does not use any ad hoc parameters, thus making route
caches fully adaptive to topology changes. We show that the algorithm
outperforms DSR with path caches and with Link-Max Life, an adaptive timeout
mechanism for link caches. We conclude that proactive cache updating is key to
the adaptation of on-demand routing protocols to mobility.
2. QUIVER: CONSISTENT OBJECT SHARING FOR EDGE SERVICES JAVA

Abstract: We present Quiver, a system that coordinates service proxies placed


at the “edge” of the Internet to serve distributed clients accessing a service
involving mutable objects. Quiver enables these proxies to perform consistent
accesses to shared objects by migrating the objects to proxies performing
operations on those objects. These migrations dramatically improve performance
when operations involving an object exhibit geographic locality, since migrating
this object into the vicinity of proxies hosting these operations will benefit all such
operations. This system reduces the workload on the server by performing all
operations at the proxies themselves. Operations are processed in first-in,
first-out order, and the system handles two consistency levels, serializability and
strict serializability, for durable consistent object sharing. Other workloads
benefit from Quiver, dispersing the computation load across the proxies and saving
the costs of sending operation parameters over the wide area when these are
large. Quiver also supports optimizations for single-object reads that do not involve
migrating the object. We detail the protocols for implementing object operations
and for accommodating the addition, involuntary disconnection, and voluntary
departure of proxies. Finally, we discuss the use of Quiver to build an e-commerce
application and a distributed network traffic modeling service.

3. RATE & DELAY GUARANTEES PROVIDED BY CLOS PACKET SWITCHES WITH LOAD BALANCING JAVA

Abstract: In this paper, we consider an overarching problem that encompasses


both performance metrics. In particular, we study the network capacity problem
under a given network lifetime requirement. Specifically, for a wireless sensor
network where each node is provisioned with an initial energy, all nodes are
required to live up to a certain lifetime criterion. Since the objective of maximizing
the sum of rates of all the nodes in the network can lead to a severe bias in rate
allocation among the nodes, we advocate the use of lexicographical max-min
(LMM) rate allocation. To calculate the LMM rate allocation vector, we develop a
polynomial-time algorithm by exploiting the parametric analysis (PA) technique
from linear program (LP), which we call serial LP with Parametric Analysis (SLP-PA).
We show that the SLP-PA can be also employed to address the LMM node lifetime
problem much more efficiently than a state-of-the-art algorithm proposed in the
literature. More important, we show that there exists an elegant duality
relationship between the LMM rate allocation problem and the LMM node lifetime
problem. Therefore, it is sufficient to solve only one of the two problems. Important
insights can be obtained by inferring duality results for the other problem.
4. GEOMETRIC APPROACH TO IMPROVING ACTIVE PACKET LOSS JAVA
MEASUREMENT

Abstract: Measurement and estimation of packet loss characteristics are


challenging due to the relatively rare occurrence and typically short duration of
packet loss episodes. While active probe tools are commonly used to measure
packet loss on end-to-end paths, there has been little analysis of the accuracy of
these tools or their impact on the network. The objective of our study is to
understand how to measure packet loss episodes accurately with end-to-end
probes. We begin by testing the capability of standard Poisson-modulated end-to-
end measurements of loss in a controlled laboratory environment using IP routers
and commodity end hosts. Our tests show that loss characteristics reported from
such Poisson-modulated probe tools can be quite inaccurate over a range of traffic
conditions. Motivated by these observations, we introduce a new algorithm for
packet loss measurement that is designed to overcome the deficiencies in standard
Poisson-based tools. Specifically, our method entails probe experiments that follow
a geometric distribution to 1) enable an explicit trade-off between accuracy and
impact on the network, and 2) enable more accurate measurements than standard
Poisson probing at the same rate. We evaluate the capabilities of our methodology
experimentally by developing and implementing a prototype tool, called
BADABING. The experiments demonstrate the trade-offs between impact on the
network and measurement accuracy. We show that BADABING reports loss
characteristics far more accurately than traditional loss measurement tools.
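
For orientation only, the sketch below illustrates the geometric-probing idea in Java: probes are sent in discrete time slots with a fixed per-slot probability, so the gaps between probes follow a geometric distribution. It is a simplified illustration, not the authors' BADABING tool, and all names and parameter values are invented.

import java.util.Random;

// Illustrative sketch only: schedules probes at slot boundaries whose gaps follow a
// geometric distribution, mimicking the accuracy/impact trade-off described above.
// All names (GeometricProber, p, slots) are hypothetical, not from the paper's tool.
public class GeometricProber {
    public static void main(String[] args) {
        double p = 0.1;              // per-slot probing probability (assumed parameter)
        int slots = 100;             // number of discrete time slots to simulate
        Random rng = new Random(42);
        for (int slot = 0; slot < slots; slot++) {
            // In each slot, send a probe pair with probability p; the resulting
            // gaps between probes are geometrically distributed.
            if (rng.nextDouble() < p) {
                System.out.println("send probe pair at slot " + slot);
            }
        }
    }
}
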
5. A PRECISE TERMINATION CONDITION OF THE PROBABILISTIC PACKET MARKING ALGORITHM JAVA

Abstract: The probabilistic packet marking (PPM) algorithm is a promising way


to discover the Internet map or an attack graph that the attack packets traversed
during a distributed denial-of-service attack. However, the PPM algorithm is not
perfect, as its termination condition is not well defined in the literature. More
importantly, without a proper termination condition, the attack graph constructed
by the PPM algorithm would be wrong. In this work, we provide a precise
termination condition for the PPM algorithm and name the new algorithm the
Rectified PPM (RPPM) algorithm. The most significant merit of the RPPM algorithm
is that when the algorithm terminates, the algorithm guarantees that the
constructed attack graph is correct, with a specified level of confidence. We carry
out simulations on the RPPM algorithm and show that the RPPM algorithm can
guarantee the correctness of the constructed attack graph under 1) different
probabilities that a router marks the attack packets and 2) different structures of
the network graph. The RPPM algorithm provides an autonomous way for the
original PPM algorithm to determine its termination, and it is a promising means of
enhancing the reliability of the PPM algorithm.
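
The marking step at the heart of PPM-style schemes (each router on the path overwrites the packet's marking field with some small probability) can be sketched as follows; this is a hypothetical Java illustration with invented names, not the RPPM implementation.

import java.util.Random;

// Hypothetical sketch of the per-router marking step used by PPM-style schemes:
// each router on the path marks the packet with probability p, so the victim
// eventually collects marks from all routers and can rebuild the attack graph.
public class PpmRouterSketch {
    private static final Random RNG = new Random();

    static String markAtRouter(String currentMark, String routerId, double p) {
        // With probability p, overwrite the mark with this router's identity;
        // otherwise leave the existing mark untouched.
        return (RNG.nextDouble() < p) ? routerId : currentMark;
    }

    public static void main(String[] args) {
        String mark = "";                         // empty marking field
        String[] path = {"R1", "R2", "R3", "R4"}; // routers the packet traverses
        for (String router : path) {
            mark = markAtRouter(mark, router, 0.04); // small marking probability (assumed)
        }
        System.out.println("mark carried to victim: " + mark);
    }
}
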
6. INTRUSION DETECTION IN HOMOGENEOUS & HETEROGENEOUS WIRELESS SENSOR NETWORKS JAVA

Abstract: Intrusion detection in Wireless Sensor Network (WSN) is of practical


interest in many applications such as detecting an intruder in a battlefield. The
intrusion detection is defined as a mechanism for a WSN to detect the existence of
inappropriate, incorrect, or anomalous moving attackers. In this paper, we consider
this issue according to heterogeneous WSN models. Furthermore, we consider two
sensing detection models: single-sensing detection and multiple-sensing
detection... Our simulation results show the advantage of multiple sensor
heterogeneous WSNs.
7. A DISTRIBUTED AND SCALABLE ROUTING TABLE MANAGER FOR THE NEXT GENERATION OF IP ROUTERS .NET

In recent years, the exponential growth of Internet users with increased bandwidth
requirements has led to the emergence of the next generation of IP routers.
Distributed architecture is one of the promising trends providing petabit routers
with a large switching capacity and high-speed interfaces. Distributed routers are
designed with an optical switch fabric interconnecting line and control cards.
Computing and memory resources are available on both control and line cards to
perform routing and forwarding tasks. This new hardware architecture is not
efficiently utilized by the traditional software models where a single control card is
responsible for all routing and management operations. The routing table manager
plays an extremely critical role by managing routing information and in particular,
a forwarding information table. This article presents a distributed architecture set
up around a distributed and scalable routing table manager. This architecture also
provides improvements in robustness and resiliency.

8. WATERMARKING RELATIONAL DATABASES USING OPTIMIZATION-BASED TECHNIQUES .NET

Abstract: Proving ownership rights on outsourced relational databases is a


crucial issue in today's internet based application environments and in many
content distribution applications. In this paper, we present a mechanism for proof
of ownership based on the secure embedding of a robust imperceptible watermark
in relational data. We formulate the watermarking of relational databases as a
constrained optimization problem and discuss efficient techniques to solve the
optimization problem and to handle the constraints. Our watermarking technique is
resilient to watermark synchronization errors because it uses a partitioning approach
that does not require marker tuples. Our approach overcomes a major weakness in
previously proposed watermarking techniques. Watermark decoding is based on a
threshold-based technique characterized by an optimal threshold that minimizes
the probability of decoding errors. We implemented a proof-of-concept
prototype of our watermarking technique and showed by experimental
results that our technique is resilient to tuple deletion, alteration, and insertion
attacks.

9. PERFORMANCE OF A SPECULATIVE TRANSMISSION SCHEME FOR SCHEDULING LATENCY REDUCTION JAVA

Abstract: This work was motivated by the need to achieve low latency in an
input-queued, centrally scheduled cell switch for high-performance computing applications;
specifically, the aim is to reduce the latency incurred between issuance of a
request and arrival of the corresponding grant. We introduce a speculative
transmission scheme to significantly reduce the average latency by allowing cells
to proceed without waiting for a grant. It operates in conjunction with any
centralized matching algorithm to achieve a high maximum utilization. An
analytical model is presented to investigate the efficiency of the speculative
transmission scheme employed in a non-blocking N×N input-queued crossbar
switch with R receivers per output. The results demonstrate that the latency can be almost
entirely eliminated for loads up to 50%. Our simulations confirm the analytical
results.

10. TWO TECHNIQUES FOR FAST COMPUTATION OF CONSTRAINED JAVA


SHORTEST PATH

Abstract: Computing constrained shortest paths is fundamental to some


important network functions such as QoS routing, MPLS path selection, ATM circuit
routing, and traffic engineering. The problem is to find the cheapest path that
satisfies certain constraints. In particular, finding the cheapest delay-constrained
path is critical for real-time data flows such as voice/video calls. Because it is NP-
complete, much research has been done on designing heuristic algorithms that solve the
ε-approximation of the problem with an adjustable accuracy. A common approach is
to discretize (i.e., scale and round) the link delay or link cost, which transforms the
original problem to a simpler one solvable in polynomial time. The efficiency of the
algorithms directly relates to the magnitude of the errors introduced during
discretization. In this paper, we propose two techniques that reduce the
discretization errors, which allow faster algorithms to be designed. Reducing the
overhead of computing constrained shortest paths is practically important for the
successful design of a high-throughput QoS router, which is limited at both
processing power and memory space. Our simulations show that the new
algorithms reduce the execution time by an order of magnitude on power-law
topologies with 1000 nodes.
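
The discretization step referred to above can be illustrated with a small, hedged Java sketch: real-valued link delays are scaled and rounded to integers, and the rounding error is exactly what the proposed techniques aim to reduce. The scaling factor and delay values are made up for illustration.

// Illustrative sketch of discretizing link delays for constrained shortest path
// algorithms: each real-valued delay is scaled and rounded to an integer, and the
// error introduced by rounding is what the paper's techniques try to reduce.
public class DelayDiscretization {
    public static void main(String[] args) {
        double[] linkDelays = {0.73, 1.28, 2.05, 0.41}; // made-up delays in ms
        double scale = 10.0;                            // assumed scaling factor
        for (double d : linkDelays) {
            int discretized = (int) Math.ceil(d * scale); // rounding up keeps delay estimates conservative
            double error = discretized / scale - d;       // discretization error
            System.out.printf("delay %.2f -> %d (error %.3f)%n", d, discretized, error);
        }
    }
}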

11. A NEW MODEL FOR DISSEMINATION OF XML CONTENT .NET

Abstract: The paper proposes an approach to content dissemination that


exploits the structural properties of an Extensible Markup Language (XML)
document object model in order to provide an efficient dissemination and at the
same time assuring content integrity and confidentiality. Our approach is based on
the notion of encrypted post order numbers that support the integrity and
confidentiality requirements of XML content as well as facilitate efficient
identification, extraction, and distribution of selected content portions. By using
such notion, we develop a structure based routing scheme that prevents
information leaks in the XML data dissemination, and assures that content is
delivered to users according to the access control policies, that is, policies
specifying which users can receive which portions of the contents. Our proposed
dissemination approach further enhances such structure based, policy-based
routing by combining it with multicast in order to achieve high efficiency in terms
of bandwidth usage and speed of data delivery, thereby enhancing scalability. Our
dissemination approach thus represents an efficient and secure mechanism for use
in applications such as publish–subscribe systems for XML Documents. The publish–
subscribe model restricts the consumer and document source information to the
routers to which they register with. Our framework facilitates dissemination of
contents with varying degrees of confidentiality and integrity requirements in a mix
of trusted and untrusted networks, which is prevalent in current settings across
enterprise networks and the web. Also, it does not require the routers to be aware
of any security policy in the sense that the routers do not need to implement any
policy related to access control.
12. EFFICIENT 2-D GRAYSCALE MORPHOLOGICAL TRANSFORMATIONS WITH ARBITRARY FLAT STRUCTURING ELEMENTS .NET

Abstract: An efficient algorithm is presented for the computation of grayscale


morphological operations with arbitrary 2-D flat structuring elements (S.E.). The
required computing time is independent of the image content and of the number of
gray levels used. It always outperforms the only existing comparable method, which
was proposed in the work by Van Droogenbroeck and Talbot, by a factor between
3.5 and 35.1, depending on the image type and shape of S.E. So far, filtering using
multiple S.E.s is always done by performing the operator for each size and shape of
the S.E. separately. With our method, filtering with multiple S.E.s can be performed by
a single operator for a slightly reduced computational cost per size or shape, which
makes this method more suitable for use in granulometries, dilation-erosion scale
spaces, and template matching using the hit-or-miss transform. The discussion
focuses on erosions and dilations, from which other transformations can be
derived.
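
For reference, a naive Java sketch of grayscale erosion with an arbitrary flat structuring element is shown below; it computes the operation the abstract talks about, but it is deliberately the slow textbook version, not the content-independent algorithm proposed in the paper.

// Naive reference implementation of grayscale erosion with an arbitrary flat
// structuring element (S.E.). The paper's algorithm computes the same result far
// faster; this sketch only illustrates the operation itself.
public class FlatErosion {
    // image[y][x] holds gray levels; se is a boolean mask centred at (cy, cx).
    static int[][] erode(int[][] image, boolean[][] se, int cy, int cx) {
        int h = image.length, w = image[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int min = Integer.MAX_VALUE;
                for (int j = 0; j < se.length; j++) {
                    for (int i = 0; i < se[0].length; i++) {
                        if (!se[j][i]) continue;
                        int yy = y + j - cy, xx = x + i - cx;
                        if (yy < 0 || yy >= h || xx < 0 || xx >= w) continue; // ignore pixels outside the image
                        min = Math.min(min, image[yy][xx]);
                    }
                }
                out[y][x] = min;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] img = { {9, 7, 7}, {8, 1, 7}, {9, 9, 9} };
        boolean[][] cross = { {false, true, false}, {true, true, true}, {false, true, false} };
        int[][] eroded = erode(img, cross, 1, 1);
        System.out.println("eroded centre pixel: " + eroded[1][1]); // minimum over the cross-shaped S.E.
    }
}
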
13. RATE ALLOCATION & NETWORK LIFETIME PROBLEM FOR WIRELESS SENSOR NETWORKS .NET

Abstract: In this paper, we consider an overarching problem that encompasses


both performance metrics. In particular, we study the network capacity problem
under a given network lifetime requirement. Specifically, for a wireless sensor
network where each node is provisioned with an initial energy, all nodes are
required to live up to a certain lifetime criterion. Since the objective of maximizing
the sum of rates of all the nodes in the network can lead to a severe bias in rate
allocation among the nodes, we advocate the use of lexicographical max-min
(LMM) rate allocation. To calculate the LMM rate allocation vector, we develop a
polynomial-time algorithm by exploiting the parametric analysis (PA) technique
from linear program (LP), which we call serial LP with Parametric Analysis (SLP-PA).
We show that the SLP-PA can be also employed to address the LMM node lifetime
problem much more efficiently than a state-of-the-art algorithm proposed in the
literature. More important, we show that there exists an elegant duality
relationship between the LMM rate allocation problem and the LMM node lifetime
problem. Therefore, it is sufficient to solve only one of the two problems. Important
insights can be obtained by inferring duality results for the other problem.

14. VISION BASED PROCESSING FOR REAL TIME 3-D DATA ACQUISITION BASED ON CODED STRUCTURED LIGHT .NET

Abstract: Structured light vision systems are successfully used for the
measurement of 3-D surfaces in vision applications. A limitation of existing
schemes is that tens of pictures must be captured to recover a 3-D scene. This paper
presents an idea for real-time Acquisition of 3-D surface data by a specially coded
vision system. To achieve 3-D measurement for a dynamic scene, the data
acquisition must be performed with only a single image. A principle of uniquely
color-encoded pattern projection is proposed to design a color matrix for improving
the reconstruction efficiency. The matrix is produced by a special code sequence
and a number of state transitions. A color projector is controlled by a computer to
generate the desired color patterns in the scene. The unique indexing of the light
codes is crucial here for color projection since it is essential that each light grid be
uniquely identified by incorporating local neighborhoods so that 3-D reconstruction
can be performed with only local analysis of a single image. A scheme is presented
to describe such a vision processing method for fast 3-D data acquisition. Practical
experimental performance is provided to analyze the efficiency of the proposed
methods.

15. A SIGNATURE BASED INDEXING METHOD FOR EFFICIENT CONTENT BASED RETRIEVAL OF RELATIVE TEMPORAL PATTERNS JAVA/J2EE

Abstract: Project aims for efficient content based retrieval process of relative
temporal pattern using signature based indexing method. Rule discovery
algorithms in data mining generate a large number of patterns/rules, sometimes
even exceeding the size of the underlying database, with only a small fraction
being of interest to the user. It is generally understood that interpreting the
discovered patterns/rules to gain insight into the domain is an important phase in
the knowledge discovery process. However, when there are a large number of
generated rules, identifying and analyzing those that are interesting becomes
difficult. We address the problem of efficiently retrieving subsets of a large
collection of previously discovered temporal patterns. When processing queries on
a small database of temporal patterns, sequential scanning of the patterns
followed by straightforward computations of query conditions is sufficient.
However, as the database grows, this procedure can be too slow, and indexes
should be built to speed up the queries. The problem is to determine what types of
indexes are suitable for improving the speed of queries involving the content of
temporal patterns. We propose a system with a signature-based indexing method to
speed up content-based queries on temporal patterns and to optimize the
storage and retrieval of a large collection of relative temporal patterns. The use of
signature files improves the performance of temporal pattern retrieval. This
retrieval system is currently being combined with visualization techniques for
monitoring the behavior of a single pattern or a group of patterns over time.

16. USING THE CONCEPTUAL COHESION OF CLASSES FOR FAULT JAVA


PREDICTION IN OBJECT ORIENTED SYSTEMS

Abstract: High cohesion is a desirable property in software systems, helping to achieve


reusability and maintainability. In this project, measures for cohesion in
Object-Oriented (OO) software reflect particular interpretations of cohesion and
capture different aspects of it. In existing approaches, cohesion is calculated
from structural information, for example method attributes and references.
For the conceptual cohesion of classes, our project instead uses the
unstructured information in the source code, such as comments and
identifiers. This unstructured information is embedded in the source code; to retrieve
it, Latent Semantic Indexing is
used. A large case study on three open source software systems is presented
which compares the new measure with an extensive set of existing metrics and
uses them to construct models that predict software faults. In this way, our project
measures conceptual cohesion and predicts faults in Object-Oriented
systems.
17. TRUTH DISCOVERY WITH MULTIPLE CONFLICTING INFORMATION PROVIDERS ON THE WEB JAVA/J2EE
Abstract: The world-wide web has become the most important information
source for most of us. Unfortunately, there is no guarantee for the correctness of
information on the web. Moreover, different web sites often provide conflicting
information on a subject, such as different specifications for the same product. In this
paper we propose a new problem called Veracity that is conformity to truth, which
studies how to find true facts from a large amount of conflicting information on
many subjects that is provided by various web sites. We design a general
framework for the Veracity problem, and invent an algorithm called Truth Finder,
which utilizes the relationships between web sites and their information, i.e., a web
site is trustworthy if it provides many pieces of true information, and a piece of
information is likely to be true if it is provided by many trustworthy web sites. Our
experiments show that Truth Finder successfully finds true facts among conflicting
information, and identifies trustworthy web sites better than the popular search
engines.
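
The mutual reinforcement described above (trustworthy sites provide true facts, and facts backed by trustworthy sites gain confidence) can be pictured as a simple fixed-point iteration. The Java sketch below uses simplified update rules and made-up data; the actual Truth Finder formulas differ.

import java.util.*;

// Simplified sketch of the mutual reinforcement between web-site trustworthiness
// and fact confidence described above. The exact update rules of Truth Finder
// differ; this only shows the iterative structure.
public class TruthFinderSketch {
    public static void main(String[] args) {
        // claims[siteIndex] = ids of the facts that the site provides (made-up data)
        int[][] claims = { {0, 1}, {0, 2}, {1, 2}, {2} };
        int numFacts = 3;
        double[] trust = new double[claims.length];
        Arrays.fill(trust, 0.5);                      // start with uniform trust

        for (int iter = 0; iter < 20; iter++) {
            // Fact confidence = average trust of the sites providing it.
            double[] conf = new double[numFacts];
            int[] providers = new int[numFacts];
            for (int s = 0; s < claims.length; s++) {
                for (int f : claims[s]) { conf[f] += trust[s]; providers[f]++; }
            }
            for (int f = 0; f < numFacts; f++) conf[f] /= Math.max(providers[f], 1);
            // Site trust = average confidence of the facts it provides.
            for (int s = 0; s < claims.length; s++) {
                double sum = 0;
                for (int f : claims[s]) sum += conf[f];
                trust[s] = sum / claims[s].length;
            }
        }
        System.out.println("site trust scores: " + Arrays.toString(trust));
    }
}
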
18. LOCATION BASED SPATIAL QUERY PROCESSING IN WIRELESS JAVA
BROADCAST ENVIRONMENTS
Abstract: Location-based spatial queries (LBSQs) refer to spatial queries whose
answers rely on the location of the inquirer. Efficient processing of LBSQs is of
critical importance with the ever-increasing deployment and use of mobile
technologies. We show that LBSQs have certain unique characteristics that the
traditional spatial query processing in centralized databases does not address. For
example, a significant challenge is presented by wireless broadcasting
environments, which have excellent scalability but often exhibit high-latency
database access. In this paper, we present a novel query processing technique
that, though maintaining high scalability and accuracy, manages to reduce the
latency considerably in answering LBSQs. Our approach is based on peer-to-peer
sharing, which enables us to process queries without delay at a mobile host by
using query results cached in its neighboring mobile peers. We demonstrate the
feasibility of our approach through a probabilistic analysis, and we illustrate the
appeal of our technique through extensive simulation results.

19. BANDWIDTH ESTIMATION FOR IEEE 802.11 BASED AD HOC NETWORKS JAVA
Abstract: Since 2005, IEEE 802.11-based networks have been able to provide a
certain level of quality of service (QoS) by the means of service differentiation, due
to the IEEE 802.11e amendment. However, no mechanism or method has been
standardized to accurately evaluate the amount of resources remaining on a given
channel. Such an evaluation would, however, be a good asset for bandwidth-
constrained applications. In multihop ad hoc networks, such evaluation becomes
even more difficult. Consequently, despite the various contributions around this
research topic, the estimation of the available bandwidth still represents one of the
main issues in this field. In this paper, we propose an improved mechanism to
estimate the available bandwidth in IEEE 802.11-based ad hoc networks. Through
simulations, we compare the accuracy of the estimation we propose to the
estimation performed by other state-of-the-art QoS protocols, BRuIT, AAC, and
QoS-AODV.
20. MODELING & AUTOMATED CONTAINMENT OF WORMS JAVA
Abstract: Self-propagating codes, called worms, such as Code Red, Nimda,
and Slammer, have drawn significant attention due to their enormously adverse
impact on the Internet. Thus, there is great interest in the research community in
modeling the spread of worms and in providing adequate defense mechanisms
against them. In this paper, we present a (stochastic) branching process model
for characterizing the propagation of Internet worms. The model is developed for
uniform scanning worms and then extended to preference scanning worms. This
model leads to the development of an automatic worm containment strategy that
prevents the spread of a worm beyond its early stage. Specifically, for uniform
scanning worms, we are able to determine whether the worm spread will
eventually stop. We then extend our results to contain uniform scanning worms.
Our automatic worm containment schemes effectively contain both uniform
scanning worms and local preference scanning worms, and they are validated through
simulations and real trace data to be nonintrusive.
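
As a toy illustration of the branching-process view (not the paper's exact model), the following Java sketch simulates worm generations: if containment keeps the mean number of new infections per host below one, the outbreak dies out. All parameters are invented.

import java.util.Random;

// Toy branching-process simulation of worm spread: each infected host produces a
// random number of new infections per generation. If the mean offspring count is
// pushed below 1 (e.g., by limiting scans), the outbreak dies out; otherwise it grows.
// Parameters are invented; the paper's model is more detailed.
public class WormBranchingSketch {
    public static void main(String[] args) {
        Random rng = new Random(7);
        double meanOffspring = 0.8;   // < 1.0 models an effective containment policy
        long infected = 10;
        for (int gen = 0; gen < 15 && infected > 0; gen++) {
            long next = 0;
            for (long i = 0; i < infected; i++) {
                // crude integer draw with the desired mean: integer part + Bernoulli remainder
                next += (long) meanOffspring + (rng.nextDouble() < meanOffspring % 1 ? 1 : 0);
            }
            System.out.println("generation " + gen + ": " + infected + " infected");
            infected = next;
        }
    }
}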

21. TRUSTWORTHY COMPUTING UNDER RESOURCE CONSTRAINTS WITH THE DOWN POLICY .NET

Abstract: In this project we present a simple way to address a complicated


network security problem. This is done in the following two ways. The first
is the decrypt-only-when-necessary (DOWN) policy, which can substantially
improve the ability of low-cost devices to protect their secrets. The DOWN policy relies on
the ability to operate with fractional parts of secrets. We discuss the feasibility of
extending the DOWN policy to various asymmetric and symmetric cryptographic
primitives. The second is cryptographic authentication strategies which employ
only symmetric cryptographic primitives, based on novel ID-based key pre-
distribution schemes that demand very low complexity of operations to be
performed by the secure coprocessors (ScP) and can take good advantage of the
DOWN policy.
22. BENEFIT-BASED DATA CACHING IN AD HOC NETWORKS JAVA
Data caching can significantly improve the efficiency of information access in a
wireless ad hoc network by reducing the access latency and bandwidth usage.
However, designing efficient distributed caching algorithms is non-trivial when
network nodes have limited memory. In this article, we consider the cache
placement problem of minimizing total data access cost in ad hoc networks with
multiple data items and nodes with limited memory capacity. The above
optimization problem is known to be NP-hard. Defining benefit as the reduction in
total access cost, we present a polynomial-time centralized approximation
algorithm that provably delivers a solution whose benefit is at least one-fourth
(one-half for uniform-size data items) of the optimal benefit. The approximation
algorithm is amenable to localized distributed implementation, which is shown via
simulations to perform close to the approximation algorithm. Our distributed
algorithm naturally extends to networks with mobile nodes. We simulate our
distributed algorithm using a network simulator (ns2), and demonstrate that it
significantly outperforms another existing caching technique (by Yin and Cao [30])
in all important performance metrics. The performance differential is particularly
large in more challenging scenarios, such as higher access frequency and smaller
memory.
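
The "benefit" notion above (the reduction in total access cost obtained by caching an item at a node) can be shown with a small greedy Java sketch: repeatedly cache the item/node pair with the largest benefit until memory runs out. This simplified greedy is only an illustration and does not carry the approximation guarantee of the paper's algorithm; the topology and costs are made up.

// Illustrative greedy sketch of benefit-based cache placement: repeatedly cache the
// (node, item) pair whose placement reduces total access cost the most, until node
// memories are full. All data below is invented.
public class BenefitGreedySketch {
    public static void main(String[] args) {
        int nodes = 4, items = 2, memory = 1;
        // dist[a][b]: hop distance between nodes; the server for every item is node 0.
        int[][] dist = { {0,1,2,3}, {1,0,1,2}, {2,1,0,1}, {3,2,1,0} };
        // cost[n][i]: current cheapest access cost of item i for node n (initially to the server).
        int[][] cost = new int[nodes][items];
        for (int n = 0; n < nodes; n++) for (int i = 0; i < items; i++) cost[n][i] = dist[n][0];
        int[] used = new int[nodes];

        while (true) {
            int bestN = -1, bestI = -1, bestBenefit = 0;
            for (int n = 0; n < nodes; n++) {
                if (used[n] >= memory) continue;
                for (int i = 0; i < items; i++) {
                    int benefit = 0;
                    for (int m = 0; m < nodes; m++)
                        benefit += Math.max(0, cost[m][i] - dist[m][n]); // saving for node m
                    if (benefit > bestBenefit) { bestBenefit = benefit; bestN = n; bestI = i; }
                }
            }
            if (bestN < 0) break; // no further placement reduces total cost
            used[bestN]++;
            for (int m = 0; m < nodes; m++)
                cost[m][bestI] = Math.min(cost[m][bestI], dist[m][bestN]);
            System.out.println("cache item " + bestI + " at node " + bestN + " (benefit " + bestBenefit + ")");
        }
    }
}
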
23. STATISTICAL TECHNIQUES FOR DETECTING TRAFFIC .NET
ANOMALIES THROUGH PACKET HEADER DATA
Abstract: The frequent attacks on network infrastructure, using various forms
of denial of service (DoS) attacks and worms, have led to an increased need for
developing techniques for analyzing and monitoring network traffic. If efficient
analysis tools were available, it could become possible to detect attacks and
anomalies and take action to suppress them before they have had much time to
propagate across the network. In this paper, we study the possibilities of traffic-
analysis based mechanisms for attack and anomaly detection. The motivation for
this work came from a need to reduce the likelihood that an attacker may hijack
the campus machines to stage an attack on a third party. A campus may want to
prevent or limit misuse of its machines in staging attacks, and possibly limit the
liability from such attacks. In particular, we study the utility of observing packet
header data of outgoing traffic, such as destination addresses, port numbers and
the number of flows, in order to detect attacks/anomalies originating from the
campus at the edge of a campus. Detecting anomalies/attacks close to the source
allows us to limit the potential damage close to the attacking machines. Traffic
monitoring close to the source may enable quicker
identification of potential anomalies by the network operator and allow better control of the
administrative domain's resources. Attack propagation could be slowed through early detection.
Our approach passively monitors network traffic at regular intervals and analyzes it
to find any abnormalities in the aggregated traffic. By observing the traffic and
correlating it to previous states of traffic, it may be possible to see whether the
current traffic is behaving in a similar (i.e., correlated) manner. The network traffic
could look different because of flash crowds, changing access patterns,
infrastructure problems such as router failures, and DoS attacks. In the case of
bandwidth attacks, the usage of network may be increased and abnormalities may
show up in traffic volume. Flash crowds could be observed through sudden
increase in traffic volume to a single destination. Sudden increase of traffic on a
certain port could signify the onset of an anomaly such as worm propagation. Our
approach relies on analyzing packet header data in order to provide indications of
possible abnormalities in the traffic.

24. HBA DISTRIBUTED METADATA MANAGEMENT FOR LARGE .NET


SCALE CLUSTER BASED STORAGE SYSTEM

Abstract: An efficient and distributed scheme for file mapping or file lookup is
critical in decentralizing metadata management within a group of metadata
servers. Here, a technique called Hierarchical Bloom Filter Arrays
(HBA) is used to map filenames to the metadata servers holding their metadata.
Bloom filter arrays with different levels of accuracy are used on each metadata
server. The first, with lower accuracy, captures the destination
metadata server information of frequently accessed files. The other array is used to
maintain the destination metadata information of all files. Simulation results show
our HBA design to be highly effective and efficient in improving the performance
and scalability of file systems in clusters with 1,000 to 10,000 nodes (or super
clusters) and with the amount of data in the petabyte scale or higher. HBA
reduces metadata operations by using a single metadata architecture instead of
16 metadata servers.
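
The two-level lookup sketched below is a rough Java illustration of the idea: each metadata server has a small Bloom filter for hot files and a larger one for all files, and a filename is routed to the first server whose filters report a hit. The filter sizes, hash function, and class names are assumptions for illustration, not the HBA implementation.

import java.util.BitSet;

// Minimal sketch of a Bloom-filter-based filename -> metadata-server lookup in the
// spirit of HBA: each metadata server owns one Bloom filter per array level, and a
// filename is routed to a server whose filter reports a hit. Sizes and hash choices
// below are arbitrary illustrative values.
public class HbaLookupSketch {
    static class Bloom {
        final BitSet bits; final int m; final int k;
        Bloom(int m, int k) { this.bits = new BitSet(m); this.m = m; this.k = k; }
        void add(String s) { for (int i = 0; i < k; i++) bits.set(hash(s, i)); }
        boolean mightContain(String s) {
            for (int i = 0; i < k; i++) if (!bits.get(hash(s, i))) return false;
            return true;
        }
        private int hash(String s, int i) { return Math.floorMod(s.hashCode() * 31 + i * 0x9E3779B9, m); }
    }

    public static void main(String[] args) {
        int servers = 4;
        Bloom[] hot = new Bloom[servers];      // small, less accurate filters: frequently accessed files
        Bloom[] all = new Bloom[servers];      // large filters: every file
        for (int s = 0; s < servers; s++) { hot[s] = new Bloom(1 << 10, 3); all[s] = new Bloom(1 << 16, 5); }
        all[2].add("/home/u/report.txt");      // server 2 holds this file's metadata
        hot[2].add("/home/u/report.txt");

        String name = "/home/u/report.txt";
        for (int s = 0; s < servers; s++) {
            if (hot[s].mightContain(name) || all[s].mightContain(name)) {
                System.out.println("route metadata request to server " + s);
                break;                         // a false positive would be resolved by that server
            }
        }
    }
}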

25. TEMPORAL PARTITIONING OF COMMUNICATION RESOURCES IN AN INTEGRATED ARCHITECTURE .NET
Abstract: Integrated architectures in the automotive and avionic domain
promise improved resource utilization and enable a better coordination of
application subsystems compared to federated systems. An integrated architecture
shares the system’s communication resources by using a single physical network
for exchanging messages of multiple application subsystems. Similarly, the
computational resources (for example, memory and CPU time) of each node
computer are available to multiple software components. In order to support a
seamless system integration without unintended side effects in such an integrated
architecture, it is important to ensure that the software components do not
interfere through the use of these shared resources. For this reason, the DECOS
integrated architecture encapsulates application subsystems and their constituting
software components. At the level of the communication system, virtual networks
on top of an underlying time-triggered physical network exhibit predefined
temporal properties (that is, bandwidth, latency, and latency jitter). Due to
encapsulation, the temporal properties of messages sent by a software component
are independent from the behavior of other software components, in particular
from those within other application subsystems.

26. GRID SERVICE DISCOVERY WITH ROUGH SETS .NET


Abstract: A rough set is a formal approximation of a crisp set (conventional
set) in terms of a pair of sets which give the lower and the upper approximation of
the original set. The lower and upper approximation sets themselves are crisp sets
in the standard version of rough set theory, but in other variations, the
approximating sets may be fuzzy sets as well. The computational grid is rapidly
evolving into a service-oriented computing infrastructure that facilitates resource
sharing and large-scale problem solving over the Internet. Service discovery
becomes an issue of vital importance in utilizing grid facilities. This paper presents
ROSSE, a Rough sets-based search engine for grid service discovery. Building on
the Rough sets theory, ROSSE is novel in its capability to deal with the uncertainty
of properties when matching services. In this way, ROSSE can discover the services
that are most relevant to a service query from a functional point of view. Since
functionally matched services may have distinct nonfunctional properties related to
the quality of service (QoS), ROSSE introduces a QoS model to further filter
matched services with their QoS values to maximize user satisfaction in service
discovery.

27. THE EFFECT OF PAIRS IN PROGRAM DESIGN TASKS .NET


In this project efficiency of pairs in program design tasks is identified by
using pair programming concept. Pair programming involves two developers
simultaneously collaborating with each other on the same programming task to
design and code a solution. Algorithm design and its implementation are normally
merged and it provides feedback to enhance the design. Previous controlled pair
programming experiments did not explore the efficacy of pairs against individuals
in program design-related tasks. Variations in programmer skills in a particular
language or an integrated development environment and the understanding of
programming instructions can cover the skill of subjects in program design-related
tasks. Programming aptitude tests (PATs) have been shown to correlate with
programming performance. PATs do not require understanding of programming
instructions and do not require skill in any specific computer language. We
conducted two controlled experiments, with full-time professional programmers
as the subjects, who worked on increasingly complex programming aptitude
tasks related to problem solving and algorithmic design. In both experiments, pairs
significantly outperformed individuals, providing evidence of the value of pairs in
program design-related tasks.
28. CONSTRUCTING INTER-DOMAIN PACKET FILTERS TO CONTROL JAVA
IP SPOOFING BASED ON BGP UPDATES

Abstract: The Distributed Denial-of-Service (DDoS) attack is a serious threat to
the legitimate use of the Internet. Prevention mechanisms are thwarted by the
ability of attackers to forge or spoof the source addresses in IP packets. By
employing IP spoofing, attackers can evade detection and put a substantial burden
on the destination network for policing attack packets. In this paper, we propose an
inter-domain packet filter (IDPF) architecture that can mitigate the level of IP
spoofing on the Internet. A key feature of our scheme is that it does not require
global routing information. IDPFs are constructed from the information implicit in
Border Gateway Protocol (BGP) route updates and are deployed in network border
routers. We establish the conditions under which the IDPF framework correctly
works in that it does not discard packets with valid source addresses. Based on
extensive simulation studies, we show that, even with partial deployment on the
Internet, IDPFs can proactively limit the spoofing capability of attackers. In
addition, they can help localize the origin of an attack packet to a small number of
candidate networks.

29. ORTHOGONAL DATA EMBEDDING FOR BINARY IMAGES IN J2EE


MORPHOLOGICAL TRANSFORM DOMAIN- A HIGH-CAPACITY
APPROACH

This paper proposes a data-hiding technique for binary images in morphological


transform domain for authentication purposes. To achieve blind watermark
extraction, it is difficult to use the detail coefficients directly as a location map to
determine the data-hiding locations. Hence, we view flipping an edge pixel in
binary images as shifting the edge location one pixel horizontally and vertically.
Based on this observation, we propose an interlaced morphological binary wavelet
transform to track the shifted edges, which thus facilitates blind watermark
extraction and incorporation of cryptographic signature. Unlike existing block-
based approach, in which the block size is constrained to 3×3 pixels or larger,
we process an image in 2×2 pixel blocks. This allows flexibility in tracking the
edges and also achieves low computational complexity. Two processing cases,
in which flipping the candidates of one does not affect the flippability conditions of
the other, are employed for orthogonal embedding, which allows more suitable
candidates to be identified so that a larger capacity can be achieved. A novel
effective Backward-Forward Minimization method is proposed, which considers
both backwardly those neighboring processed embeddable candidates and
forwardly those unprocessed flippable candidates that may be affected by flipping
the current pixel. In this way, the total visual distortion can be minimized.
Experimental results demonstrate the validity of our arguments.
30. PROTECTION OF DATABASE SECURITY VIA COLLABORATIVE INFERENCE DETECTION J2EE

Abstract: Malicious users can exploit the correlation among data to infer
sensitive information from a series of seemingly innocuous data accesses. Thus, we
develop an inference violation detection system to protect sensitive data content.
Based on data dependency, database schema, and semantic knowledge,
we constructed a semantic inference model (SIM) that represents the
possible inference channels from any attribute to the pre-assigned sensitive
attributes. The SIM is then instantiated to a semantic inference graph (SIG) for
query-time inference violation detection.
For a single user case, when a user poses a query, the detection system will
examine his/her past query log and calculate the probability of inferring sensitive
information. The query request will be denied if the inference probability exceeds
the prespecified threshold.
For multi-user cases, the users may share their query answers to increase
the inference probability. Therefore, we develop a model to evaluate collaborative
inference based on the query sequences of collaborators and their task-sensitive
collaboration levels.
Experimental studies reveal that information authoritativeness, communication
fidelity and honesty in collaboration are three key factors that affect the level of
achievable collaboration. An example is given to illustrate the use of the proposed
technique to prevent multiple collaborative users from deriving sensitive
information via inference.
31 SECURITY IN LARGE MEDIATOR PROTOCOLS JAVA
The combination of 3AQKDP (implicit) and 3AQKDPMA (explicit) quantum
cryptography is used to provide authenticated secure communication between
sender and receiver.
In quantum cryptography, quantum key distribution protocols (QKDPs) employ
quantum mechanisms to distribute session keys and public discussions to check for
eavesdroppers and verify the correctness of a session key. However, public
discussions require additional communication rounds between a sender and
receiver. The advantage of quantum cryptography easily resists replay and passive
attacks.
A 3AQKDP with implicit user authentication, which ensures that confidentiality is
only possible for legitimate users and mutual authentication is achieved only after
secure communication using the session key start.
The implicit quantum key distribution protocol (3AQKDP) has two phases, a
setup phase and a distribution phase, to provide three-party authentication with
secure session key distribution. In this system there is no mutual understanding
between sender and receiver. Both sender and receiver should communicate over
trusted center.
The explicit quantum key distribution protocol (3AQKDPMA) also has two phases, a
setup phase and a distribution phase, to provide three-party authentication with
secure session key distribution. Here there is mutual understanding between sender and
receiver, and both communicate directly with
authentication by the trusted center.
A disadvantage of running 3AQKDP and 3AQKDPMA as separate processes is that they provide
authentication only for the message, identifying security threats in the message
but not in the session key.

32 ESTIMATION OF DEFECTS BASED ON DEFECT DECAY MODEL: ED3M .NET
An accurate prediction of the number of defects in a software product
during system testing contributes not only to the management of the system
testing process but also to the estimation of the product’s required maintenance.
Here, a new approach, called Estimation of Defects based on Defect Decay Model
(ED3M), is presented that computes an estimate of the defects in an ongoing testing
process. ED3M is based on estimation theory. Unlike many existing approaches,
the technique presented here does not depend on historical data from previous
projects or any assumptions about the requirements and/or testers’ productivity. It
is a completely automated approach that relies only on the data collected during
an ongoing testing process. This is a key advantage of the ED3M approach as it
makes it widely applicable in different testing environments. Here, the ED3M
approach has been evaluated using five data sets from large industrial projects and
two data sets from the literature. In addition, a performance analysis has been
conducted using simulated data sets to explore its behavior using different models
for the input data. The results are very promising; they indicate the ED3M
approach provides accurate estimates with as fast or better convergence time in
comparison to well-known alternative techniques, while only using defect data as
the input.

33 ACTIVE LEARNING METHODS FOR INTERACTIVE IMAGE


RETRIEVAL
Active learning methods have been considered with increased interest in
the statistical learning community. Initially developed within a classification
framework, a lot of extensions are now being proposed to handle multimedia
applications. This paper provides algorithms within a statistical framework to
extend active learning for online content-based image retrieval (CBIR). The
classification framework is presented with experiments to compare several
powerful classification techniques in this information retrieval context. Focusing on
interactive methods, active learning strategy is then described. The limitations of
this approach for CBIR are emphasized before presenting our new active selection
process RETIN. First, as any active method is sensitive to the boundary estimation
between classes, the RETIN strategy carries out a boundary correction to make the
retrieval process more robust. Second, the criterion of generalization error to
optimize the active learning selection is modified to better represent the CBIR
objective of database ranking. Third, a batch processing of images is proposed. Our
strategy leads to a fast and efficient active learning scheme to retrieve sets of
online images (query concept). Experiments on large databases show that the
RETIN method performs well in comparison to several other active strategies.

34 LOCALIZED SENSOR AREA COVERAGE WITH LOW COMMUNICATION OVERHEAD .NET

We propose several localized sensor area coverage protocols for


heterogeneous sensors, each with arbitrary sensing and transmission radii. Each
sensor has a timeout period and listens to messages sent by respective nodes
before the time out expires. Sensor nodes whose sensing area is not fully covered
(or fully covered but with a disconnected set of active sensors) when the deadline
expires decide to remain active for the considered round and transmit an activity
message announcing it. In our approach, a sensor decides to sleep only if a neighbor
sensor is active or not covered. Covered nodes decide to sleep, with or without
transmitting a withdrawal message to inform neighbors about the status. After
hearing from more neighbors, inactive sensors may observe that they became
covered and may decide to alter their original decision and transmit a retreat
message.

35 HARDWARE ENHANCED ASSOCIATION RULE MINING WITH HASHING AND PIPELINING .NET
Data mining techniques have been widely used in various applications. One of the
most important data mining applications is association rule mining.
In Apriori-based association rule mining in hardware, one has to load candidate
itemsets and a database into the hardware.
Since the capacity of the hardware architecture is fixed, if the number of candidate
itemsets or the number of items in the database is larger than the hardware
capacity, the items are loaded into the hardware separately.
The time complexity of those steps that need to load candidate itemsets or
database items into the hardware is in proportion to the number of candidate
itemsets multiplied by the number of items in the database. Too many candidate
itemsets and a large database would create a performance bottleneck.
In this paper, we propose a HAsh-based and PiPelIned (abbreviated as HAPPI)
architecture for hardware enhanced association rule mining. Therefore, we can
effectively reduce the frequency of loading the database into the hardware.
HAPPI solves the bottleneck problem in Apriori-based hardware schemes.
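
Since HAPPI is a hardware architecture, the Java sketch below only conveys the hashing idea in software terms: 2-itemsets are hashed into buckets during one database pass, and candidates whose bucket counts fall below the support threshold are pruned without exact counting. The transactions, bucket count, and threshold are made up.

import java.util.*;

// Software illustration of hash-based candidate pruning for Apriori-style mining
// (the paper's HAPPI design implements hashing and pipelining in hardware; this
// sketch only conveys the idea). Transactions and thresholds are invented.
public class HashPruningSketch {
    public static void main(String[] args) {
        List<int[]> transactions = Arrays.asList(
            new int[]{1, 2, 3}, new int[]{1, 2}, new int[]{2, 3}, new int[]{1, 3}, new int[]{2, 3, 4});
        int buckets = 8, minSupport = 2;
        int[] bucketCount = new int[buckets];

        // Pass 1: hash every 2-itemset of every transaction into a bucket and count.
        for (int[] t : transactions)
            for (int i = 0; i < t.length; i++)
                for (int j = i + 1; j < t.length; j++)
                    bucketCount[hash(t[i], t[j], buckets)]++;

        // Pass 2: a candidate pair can be frequent only if its bucket count reaches
        // the support threshold; other candidates are pruned without exact counting.
        int[][] candidates = { {1, 2}, {1, 3}, {2, 3}, {3, 4} };
        for (int[] c : candidates) {
            boolean possible = bucketCount[hash(c[0], c[1], buckets)] >= minSupport;
            System.out.println(Arrays.toString(c) + (possible ? " kept" : " pruned"));
        }
    }
    static int hash(int a, int b, int buckets) { return (a * 31 + b) % buckets; }
}
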
36 DUAL-LINK FAILURE RESILIENCY THROUGH BACKUP LINK MUTUAL JAVA
EXCLUSION

Networks employ link protection to achieve fast recovery from link failures. While
the first link failure can be protected using link protection, there are several
alternatives for protecting against the second failure. This paper formally classifies
the approaches to dual-link failure resiliency. One of the strategies to recover from
dual-link failures is to employ link protection for the two failed links independently,
which requires that two links may not use each other in their backup paths if they
may fail simultaneously. Such a requirement is referred to as backup link mutual
exclusion (BLME) constraint and the problem of identifying a backup path for every
link that satisfies the above requirement is referred to as the BLME problem. This
paper develops the necessary theory to establish the sufficient conditions for
existence of a solution to the BLME problem. Solution methodologies for the BLME
problem are developed using two approaches: 1) formulating the backup path
selection as an integer linear program; 2) developing a polynomial time heuristic
based on minimum cost path routing.

The ILP formulation and heuristic are applied to six networks and their performance
is compared with approaches that assume precise knowledge of dual- link failure. It
is observed that a solution exists for all of the six networks considered. The
heuristic approach is shown to obtain feasible solutions that are resilient to most
dual-link failures, although the backup path lengths may be significantly higher
than optimal. In addition, the paper illustrates the significance of the knowledge of
failure location by showing that a network with higher connectivity may require
less capacity than one with lower connectivity to recover from dual-link failures.

37 A NOVEL FRAMEWORK FOR SEMANTIC ANNOTATION AND PERSONALIZED RETRIEVAL OF SPORTS VIDEO .NET
Sports video annotation is important for sports video semantic analysis such as
event detection and personalization.
We propose a novel approach for sports video semantic annotation and
personalized retrieval. Different from the state of the art sports video analysis
methods which heavily rely on audio/visual features, the proposed approach
incorporates web-casting text into sports video analysis.

Compared with previous approaches, the contributions of our approach include the
following.
1) The event detection accuracy is significantly improved due to the
incorporation of web-casting text analysis.
2) The proposed approach is able to detect exact event boundaries
and extract event semantics that are very difficult or impossible
to handle with previous approaches.
3) The proposed method is able to create personalized summaries
from both a general and a specific point of view related to a particular
game, event, player, or team according to the user's preference.

We present the framework of our approach and details of text analysis, video
analysis, text/video alignment, and personalized retrieval. The experimental results
on event boundary detection in sports video are encouraging and comparable to
the manually selected events. The evaluation on personalized retrieval is effective
in helping meet users’ expectations.

38 EFFICIENT RESOURCE ALLOCATION FOR WIRELESS MULTICAST .NET


In this paper, we propose a bandwidth-efficient multicast mechanism for
heterogeneous wireless networks. We reduce the bandwidth cost of an IP multicast
tree by adaptively selecting the cell and the wireless technology for each mobile
host to join the multicast group. Our mechanism enables more mobile hosts to
cluster together and leads to the use of fewer cells to save the scarce wireless
bandwidth. Besides, the paths in the multicast tree connecting to the selected cells
share more common links to save the wireline bandwidth. Our mechanism supports
the dynamic group membership and offers mobility of group members. Moreover,
our mechanism requires no modification on the current IP multicast routing
protocols. We formulate the selection of the cell and the wireless technology for
each mobile host in the heterogeneous wireless networks as an optimization
problem. We use Integer Linear Programming to model the problem and show that
the problem is NP-hard. To solve the problem, we propose a distributed algorithm
based on Lagrangean relaxation and a network protocol based on the algorithm.
The simulation results show that our mechanism can effectively save the wireless
and wireline bandwidth as compared to the traditional IP multicast.

39 EFFICIENT ROUTING IN INTERMITTENTLY CONNECTED MOBILE NETWORKS: THE MULTIPLE COPY CASE .NET
Intermittently connected mobile networks are wireless networks where most of the
time there does not exist a complete path from the source to the destination.
There are many real networks that follow this model, for example, wildlife tracking
sensor networks, military networks, vehicular ad hoc networks, etc. In this context,
conventional routing schemes fail, because they try to establish complete end-to-
end paths, before any data is sent. To deal with such networks researchers have
suggested to use flooding-based routing schemes. While flooding-based schemes
have a high probability of delivery, they waste a lot of energy and suffer from
severe contention which can significantly degrade their performance. Furthermore,
proposed efforts to reduce the overhead of flooding-based schemes have often
been plagued by large delays. With this in mind, we introduce a new family of
routing schemes that “spray” a few message copies into the network, and then
route each copy independently towards the destination. We show that, if carefully
designed, spray routing

40 FUZZY CONTROL MODEL OPTIMIZATION FOR BEHAVIOR-


CONSISTENT TRAFFIC ROUTING UNDER INFORMATION
PROVISION

This paper presents an H-infinity filtering approach to optimize a fuzzy control


model used to determine behavior-consistent (BC) information-based control
strategies to improve the performance of congested dynamic traffic networks. By
adjusting the associated membership function parameters to better respond to
nonlinearities and modeling errors, the approach is able to enhance the
computational performance of the fuzzy control model. Computational efficiency is
an important aspect in this problem context, because the information strategies
are required in sub real time to be real-time deployable. Experiments are
performed to evaluate the effectiveness of the approach. The results indicate that
the optimized fuzzy control model contributes in determining the BC information-
based control strategies in significantly less computational time than when the
default controller is used. Hence, the proposed H-infinity approach contributes to
the development of an efficient and robust information-based control strategy.

Previous Years IEEE Projects

S.No. IEEE Project Titles Year


41. Distributed cache updating for the Dynamic source routing protocol 2006/Java

Abstract: On-demand routing protocols use route caches to make routing decisions. Due to
mobility, cached routes easily become stale. To address the cache staleness issue, prior work in
DSR used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However,
heuristics cannot accurately estimate timeouts because topology changes are unpredictable. In
this paper, we propose proactively disseminating the broken link information to the nodes that
have that link in their caches. We define a new cache structure called a cache table and present a
distributed cache update algorithm. Each node maintains in its cache table the information
necessary for cache updates. When a link failure is detected, the algorithm notifies all reachable
nodes that have cached the link in a distributed manner. The algorithm does not use any ad hoc
parameters, thus making route caches fully adaptive to topology changes. We show that the
algorithm outperforms DSR with path caches and with Link-Max Life, an adaptive timeout
mechanism for link caches. We conclude that proactive cache updating is key to the adaptation of
on-demand routing protocols to mobility.
42. An Adaptive Programming Model for Fault-Tolerant Distributed Computing 2007/Java

Abstract: The capability of dynamically adapting to distinct runtime conditions is an important


issue when designing distributed systems where negotiated quality of service (QOS) cannot
always be delivered between processes. Providing fault tolerance for such dynamic environments
is a challenging task. Considering such a context, this paper proposes an adaptive programming
model for fault-tolerant distributed computing, which provides upper-layer applications with
process state information according to the current system synchrony (or QOS). The underlying
system model is hybrid, composed by a synchronous part (where there are time bounds on
processing speed and message delay) and an asynchronous part (where there is no time bound).
However, such a composition can vary over time, and, in particular, the system may become
totally asynchronous (e.g., when the underlying system QOS degrade) or totally synchronous.
Moreover, processes are not required to share the same view of the system synchrony at a given
time. To illustrate what can be done in this programming model and how to use it, the consensus
problem is taken as a benchmark problem. This paper also presents an implementation of the
model that relies on a negotiated quality of service (QOS) for communication channels.
43. Face Recognition Using Laplacianfaces 2005/Java

Abstract: The face recognition is a fairly controversial subject right now. A system such as this
can recognize and track dangerous criminals and terrorists in a crowd, but some contend that it is
an extreme invasion of privacy. The proponents of large-scale face recognition feel that it is a
necessary evil to make our country safer. It could benefit the visually impaired and allow them to
interact more easily with the environment. Also, a computer vision-based authentication system
could be put in place to allow computer access or access to a specific room using face
recognition. Another possible application would be to integrate this technology into an artificial
intelligence system for more realistic interaction with humans.

We propose an appearance-based face recognition method called the Laplacianface approach.
By using Locality Preserving Projections (LPP), the face images are mapped into a face subspace
for analysis. Different from Principal Component Analysis (PCA) and Linear Discriminant Analysis
(LDA), which effectively see only the Euclidean structure of face space, LPP finds an embedding
that preserves local information, and obtains a face subspace that best detects the essential face
manifold structure. The Laplacian faces are the optimal linear approximations to the
eigenfunctions of the Laplace-Beltrami operator on the face manifold. In this way, the unwanted
variations resulting from changes in lighting, facial expression, and pose may be eliminated or
reduced.

Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models.
We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on
three different face data sets. Experimental results suggest that the proposed Laplacianface
approach provides a better representation and achieves lower error rates in face recognition.
Principal Component Analysis (PCA) is a statistical method under the broad title of factor analysis.
The purpose of PCA is to reduce the large dimensionality of the data space (observed variables)
to the smaller intrinsic dimensionality of feature space (independent variables), which are needed
to describe the data economically. This is the case when there is a strong correlation between
observed variables. The jobs which PCA can do are prediction, redundancy removal, feature
extraction, data compression, etc. Because PCA is a known powerful technique which can do
something in the linear domain, applications having linear models are suitable, such as signal
processing, image processing, system and control theory, communications, etc.

The main idea of using PCA for face recognition is to express the large 1-D vector of pixels
constructed from 2-D face image into the compact principal components of the feature space. This
is called eigenspace projection. Eigenspace is calculated by identifying the eigenvectors of the
covariance matrix derived from a set of face images (vectors).
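As a rough illustration of the eigenspace projection step described above (assuming the mean face and the leading eigenvectors have already been computed offline; the names and toy numbers below are ours, not the paper's), a face vector is mean-centered and projected onto each eigenface:

// Minimal sketch of eigenspace projection: a face image, flattened to a 1-D
// pixel vector, is mean-centered and projected onto precomputed eigenfaces.
class EigenProjection {
    // Project 'image' onto each eigenvector after subtracting the mean face.
    static double[] project(double[] image, double[] mean, double[][] eigenfaces) {
        double[] centered = new double[image.length];
        for (int i = 0; i < image.length; i++) centered[i] = image[i] - mean[i];

        double[] coeffs = new double[eigenfaces.length];
        for (int k = 0; k < eigenfaces.length; k++) {
            double dot = 0.0;
            for (int i = 0; i < centered.length; i++) dot += eigenfaces[k][i] * centered[i];
            coeffs[k] = dot;              // k-th principal component of the face
        }
        return coeffs;
    }

    public static void main(String[] args) {
        double[] mean = {0.5, 0.5, 0.5, 0.5};
        double[][] eig = {{0.5, 0.5, 0.5, 0.5}, {0.5, -0.5, 0.5, -0.5}}; // toy eigenfaces
        double[] face = {0.9, 0.1, 0.8, 0.2};
        System.out.println(java.util.Arrays.toString(project(face, mean, eig)));
    }
}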
44. Predictive Job Scheduling in a Connection Limited System using Parallel
Genetic Algorithm
2005/Java

Abstract: Job scheduling is the key feature of any computing environment and the efficiency of
computing depends largely on the scheduling technique used. Intelligence is the key factor which
is lacking in the job scheduling techniques of today. Genetic algorithms are powerful search
techniques based on the mechanisms of natural selection and natural genetics.

Multiple jobs are handled by the scheduler, and the resources a job needs are in remote
locations. Here we assume that the resources a job needs are in a single location, not split over nodes,
and that each node that has a resource runs a fixed number of jobs. The existing algorithms are
non-predictive and employ greedy algorithms or variants of them. The efficiency of the job
scheduling process would increase if previous experience and the genetic algorithms are used. In
this paper, we propose a model of the scheduling algorithm where the scheduler can learn from
previous experiences and an effective job scheduling is achieved as time progresses.
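A compact sketch of the genetic-algorithm loop referred to above is shown below. The fitness function, population size, and operators are deliberately simplified placeholders rather than the scheduler proposed in this project; the chromosome simply assigns each job to a node:

import java.util.*;

// Toy genetic algorithm: a chromosome assigns each job to a node, and fitness
// penalizes load imbalance. Selection, crossover, and mutation are simplified.
class GaScheduler {
    static final int JOBS = 12, NODES = 4, POP = 30, GENERATIONS = 200;
    static final Random RNG = new Random(42);

    // Lower cost = better: here, the load of the most loaded node (makespan-like).
    static int cost(int[] assign) {
        int[] load = new int[NODES];
        for (int node : assign) load[node]++;
        return Arrays.stream(load).max().getAsInt();
    }

    // One-point crossover followed by an occasional single-gene mutation.
    static int[] crossover(int[] a, int[] b) {
        int cut = RNG.nextInt(JOBS);
        int[] child = new int[JOBS];
        for (int i = 0; i < JOBS; i++) child[i] = (i < cut) ? a[i] : b[i];
        if (RNG.nextDouble() < 0.1) child[RNG.nextInt(JOBS)] = RNG.nextInt(NODES);
        return child;
    }

    public static void main(String[] args) {
        List<int[]> pop = new ArrayList<>();
        for (int i = 0; i < POP; i++) {
            int[] c = new int[JOBS];
            for (int j = 0; j < JOBS; j++) c[j] = RNG.nextInt(NODES);
            pop.add(c);
        }
        for (int g = 0; g < GENERATIONS; g++) {
            pop.sort(Comparator.comparingInt(GaScheduler::cost));   // elitist ranking
            List<int[]> next = new ArrayList<>(pop.subList(0, POP / 2));
            while (next.size() < POP) {
                int[] p1 = pop.get(RNG.nextInt(POP / 2)), p2 = pop.get(RNG.nextInt(POP / 2));
                next.add(crossover(p1, p2));
            }
            pop = next;
        }
        pop.sort(Comparator.comparingInt(GaScheduler::cost));
        System.out.println("Best makespan: " + cost(pop.get(0)));
    }
}

The "learning from previous experience" described in the abstract would enter through the initial population and the fitness function, neither of which is modeled in this toy loop.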
45. Digital Image Processing Techniques for the Detection and Removal of
Cracks in Digitized Paintings
2006/.Net
Abstract: An integrated methodology for the detection and removal of cracks on digitized
paintings is presented in this project. The cracks are detected by thresholding the output of the
morphological top-hat transform. Afterward, the thin dark brush strokes which have been
misidentified as cracks are removed using either a median radial basis function neural network on
hue and saturation data or a semi-automatic procedure based on region growing. Finally, crack
filling using order statistics filters or controlled anisotropic diffusion is performed. The methodology
has been shown to perform very well on digitized paintings suffering from cracks.
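A simplified sketch of the detection stage only follows: for dark cracks, a grayscale black top-hat (morphological closing minus the original image) is thresholded. The structuring-element radius and threshold are arbitrary assumptions, and the later stages (brush-stroke rejection and crack filling) are not shown:

// Sketch of crack detection: black top-hat (closing minus original), thresholded.
class CrackDetect {
    // Grayscale dilation (local max) or erosion (local min) with a (2r+1)x(2r+1) window.
    static int[][] morph(int[][] img, int r, boolean dilate) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int best = dilate ? 0 : 255;
                for (int dy = -r; dy <= r; dy++)
                    for (int dx = -r; dx <= r; dx++) {
                        int yy = Math.min(h - 1, Math.max(0, y + dy));
                        int xx = Math.min(w - 1, Math.max(0, x + dx));
                        best = dilate ? Math.max(best, img[yy][xx]) : Math.min(best, img[yy][xx]);
                    }
                out[y][x] = best;
            }
        return out;
    }

    // Dark cracks respond strongly to closing(img) - img; thresholding gives a crack mask.
    static boolean[][] detect(int[][] img, int radius, int threshold) {
        int[][] closed = morph(morph(img, radius, true), radius, false);
        boolean[][] mask = new boolean[img.length][img[0].length];
        for (int y = 0; y < img.length; y++)
            for (int x = 0; x < img[0].length; x++)
                mask[y][x] = (closed[y][x] - img[y][x]) > threshold;
        return mask;
    }

    public static void main(String[] args) {
        int[][] img = {{200, 200, 40, 200, 200}};     // a dark one-pixel "crack" at column 2
        System.out.println(detect(img, 1, 50)[0][2]); // true
    }
}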
46. A Distributed Database Architecture for Global Roaming in Next-Generation
Mobile Networks
2004/Java

Abstract: The next-generation mobile network will support terminal mobility, personal mobility,
and service provider portability, making global roaming seamless. A location-independent
personal telecommunication number (PTN) scheme is conducive to implementing such a global
mobile system. However, the non-geographic PTNs coupled with the anticipated large number of
mobile users in future mobile networks may introduce very large centralized databases. This
necessitates research into the design and performance of high-throughput database technologies
used in mobile systems to ensure that future systems will be able to carry efficiently the
anticipated loads. This paper proposes a scalable, robust, efficient location database architecture
based on the location-independent PTNs. The proposed multi tree database architecture consists
of a number of database subsystems, each of which is a three-level tree structure and is
connected to the others only through its root. By exploiting the localized nature of calling and
mobility patterns, the proposed architecture effectively reduces the database loads as well as the
signaling traffic incurred by the location registration and call delivery procedures. In addition, two
memory-resident database indices, memory-resident direct file and T-tree, are proposed for the
location databases to further improve their throughput. An analytical model and numerical results are
presented to evaluate the efficiency of the proposed database architecture. Results have revealed
that the proposed database architecture for location management can effectively support the
anticipated high user density in the future mobile networks.
47. Noise Reduction by Fuzzy Image Filtering
2006/Java

Abstract: A new fuzzy filter is presented for the noise reduction of images corrupted with
additive noise. The filter consists of two stages. The first stage computes a fuzzy derivative for
eight different directions. The second stage uses these fuzzy derivatives to perform fuzzy
smoothing by weighting the contributions of neighboring pixel values. Both stages are based on
fuzzy rules which make use of membership functions. The filter can be applied iteratively to
effectively reduce heavy noise. In particular, the shape of the membership functions is adapted
according to the remaining noise level after each iteration, making use of the distribution of the
homogeneity in the image. A statistical model for the noise distribution can be incorporated to
relate the homogeneity to the adaptation scheme of the membership functions. Experimental
results are obtained to show the feasibility of the proposed approach. These results are also
compared to other filters by numerical measures and visual inspection.
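The sketch below compresses the two stages into a single pass with one triangular "small difference" membership function, a heavy simplification of the paper's full fuzzy rule base; the 3x3 window, membership width, and single iteration are our assumptions:

// Simplified fuzzy smoothing: derivatives toward the 8 neighbors are weighted
// by a triangular "small difference" membership and averaged into a correction.
class FuzzyFilter {
    static final int[] DX = {-1, 0, 1, -1, 1, -1, 0, 1};
    static final int[] DY = {-1, -1, -1, 0, 0, 1, 1, 1};

    // Membership of |d| in "small": 1 at 0, falling linearly to 0 at 'width'.
    static double small(double d, double width) {
        return Math.max(0.0, 1.0 - Math.abs(d) / width);
    }

    static int[][] filter(int[][] img, double width) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                double num = 0, den = 0;
                for (int k = 0; k < 8; k++) {
                    int yy = Math.min(h - 1, Math.max(0, y + DY[k]));
                    int xx = Math.min(w - 1, Math.max(0, x + DX[k]));
                    double d = img[yy][xx] - img[y][x];   // stage 1: directional derivative
                    double mu = small(d, width);          // fuzzy degree that d is "small"
                    num += mu * d;                        // stage 2: weighted correction
                    den += mu;
                }
                out[y][x] = (int) Math.round(img[y][x] + (den > 0 ? num / den : 0));
            }
        return out;
    }

    public static void main(String[] args) {
        int[][] noisy = {{10, 10, 10}, {10, 60, 10}, {10, 10, 10}};
        System.out.println(filter(noisy, 80)[1][1]); // impulse at the center is pulled toward 10
    }
}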
48. Online Handwritten Script Recognition
2004/Java

Abstract: Automatic identification of handwritten script facilitates many important applications
such as automatic transcription of multilingual documents and search for documents on the Web
containing a particular script. The increase in usage of handheld devices which accept
handwritten input has created a growing demand for algorithms that can efficiently analyze and
retrieve handwritten data. This project proposes a method to classify words and lines in an online
handwritten document into one of the six major scripts: Arabic, Cyrillic, Devnagari, Han, Hebrew,
or Roman. The classification is based on 11 different spatial and temporal features extracted from
the strokes of the words. The proposed system attains an overall classification accuracy of 87.1
percent at the word level with 5-fold cross validation on a data set containing 13,379 words. The
classification accuracy improves to 95 percent as the number of words in the test sample is
increased to five, and to 95.5 percent for complete text lines consisting of an average of seven
words.
49. ODAM: An Optimized Distributed Association Rule Mining Algorithm
2004/Java

Abstract: Association rule mining is an active data mining research area. However, most ARM
algorithms cater to a centralized environment. In contrast to previous ARM algorithms, ODAM is a
distributed algorithm for geographically distributed data sets that reduces communication costs.
Modern organizations are geographically distributed. Typically, each site locally stores its ever-
increasing amount of day-to-day data. Using centralized data mining to discover useful patterns in
such organizations' data isn't always feasible because merging data sets from different sites into a
centralized site incurs huge network communication costs. Data from these organizations are not
only distributed over various locations but also vertically fragmented, making it difficult if not
impossible to combine them in a central location. Distributed data mining has thus emerged as an
active sub-area of data mining research.

A significant area of data mining research is association rule mining. Unfortunately, most ARM
algorithms focus on a sequential or centralized environment where no external communication is
required. Distributed ARM algorithms, on the other hand, aim to generate rules from different data
sets spread over various geographical sites; hence, they require external communications
throughout the entire process. DARM algorithms must reduce communication costs so that
generating global association rules costs less than combining the participating sites' data sets into
a centralized site. However, most DARM algorithms don't have an efficient message optimization
technique, so they exchange numerous messages during the mining process. We have developed
a distributed algorithm, called Optimized Distributed Association Mining, for geographically
distributed data sets. ODAM generates support counts of candidate item sets quicker than other
DARM algorithms and reduces the size of average transactions, data sets, and message
exchanges.
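The message-reduction idea can be illustrated as follows: each site ships only its local support counts for the candidate itemsets, and a coordinator merges them and prunes by a global minimum support. This is the general distributed-ARM flavor, not ODAM's actual optimization or message format:

import java.util.*;

// Sketch of the count-exchange idea: each site sends only its local support
// counts for candidate itemsets; a coordinator merges them and keeps the
// globally frequent ones.
class SupportMerge {
    static Map<String, Integer> mergeAndPrune(List<Map<String, Integer>> siteCounts,
                                              int globalMinSupport) {
        Map<String, Integer> global = new HashMap<>();
        for (Map<String, Integer> site : siteCounts)
            site.forEach((itemset, count) -> global.merge(itemset, count, Integer::sum));
        global.values().removeIf(total -> total < globalMinSupport); // prune infrequent itemsets
        return global;
    }

    public static void main(String[] args) {
        Map<String, Integer> siteA = Map.of("bread,milk", 40, "bread,eggs", 5);
        Map<String, Integer> siteB = Map.of("bread,milk", 25, "bread,eggs", 8);
        System.out.println(mergeAndPrune(List.of(siteA, siteB), 30));
        // -> {bread,milk=65}; "bread,eggs" (13 in total) is pruned
    }
}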
50. Protocol Scrubbing: Network Security Through Transparent Flow
Modification
2004/Java
Abstract: This paper describes the design and implementation of protocol scrubbers. Protocol
scrubbers are transparent, interposed mechanisms for explicitly removing network scans and
attacks at various protocol layers. The transport scrubber supports downstream passive network-
based intrusion detection systems by converting ambiguous network flows into well-behaved flows
that are unequivocally interpreted by all downstream endpoints. The fingerprint scrubber restricts
an attacker’s ability to determine the operating system of a protected host. As an example, this
paper presents the implementation of a TCP scrubber that eliminates insertion and evasion
attacks—attacks that use ambiguities to subvert detection—on passive network-based intrusion
detection systems, while preserving high performance. The TCP scrubber is based on a novel,
simplified state machine that performs in a fast and scalable manner. The fingerprint scrubber is
built upon the TCP scrubber and removes additional ambiguities from flows that can reveal
implementation-specific details about a host’s operating system.
51. Structure and Texture Filling-In of Missing Image Blocks in Wireless
Transmission and Compression Applications
2004/Java

Abstract: An approach for filling-in blocks of missing data in wireless image
transmission is presented in this paper. When compression algorithms such as
JPEG are used as part of the wireless transmission process, images are first tiled
into blocks of 8 x 8 pixels. When such images are transmitted over fading
channels, the effects of noise can destroy entire blocks of the image. Instead of
using common retransmission query protocols, we aim to reconstruct the lost data
using correlation between the lost block and its neighbors. If the lost block
contained structure, it is reconstructed using an image inpainting algorithm, while
texture synthesis is used for the textured blocks. The switch between the two
schemes is done in a fully automatic fashion based on the surrounding available
blocks. The performance of this method is tested for various images and
combinations of lost blocks. The viability of this method for image compression, in
association with lossy JPEG, is also discussed.
52. Workflow Mining: Discovering Process Models from Event Logs
2004/.Net

Abstract: Contemporary workflow management systems are driven by explicit process models,
i.e., a completely specified workflow design is required in order to enact a given workflow process.
Creating a workflow design is a complicated time-consuming process and, typically, there are
discrepancies between the actual workflow processes and the processes as perceived by the
management. Therefore, we have developed techniques for discovering workflow models. The
starting point for such techniques is a so-called “workflow log” containing information about the
workflow process as it is actually being executed. We present a new algorithm to extract a
process model from such a log and represent it in terms of a Petri net. However, we will also
demonstrate that it is not possible to discover arbitrary workflow processes. In this paper, we
explore a class of workflow processes that can be discovered. We show that the α-algorithm can
successfully mine any workflow represented by a so-called SWF-net.
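The first step of this kind of mining is extracting the "directly follows" relation from the log; the sketch below performs only that step (the log format and task names are invented, and the remainder of the α-algorithm, which derives the Petri net from these relations, is not shown):

import java.util.*;

// Sketch: derive the "directly follows" relation a > b from an event log,
// where each trace is the ordered list of task names of one process instance.
class DirectlyFollows {
    static Map<String, Set<String>> mine(List<List<String>> log) {
        Map<String, Set<String>> follows = new HashMap<>();
        for (List<String> trace : log)
            for (int i = 0; i + 1 < trace.size(); i++)
                follows.computeIfAbsent(trace.get(i), k -> new TreeSet<>())
                       .add(trace.get(i + 1));
        return follows;
    }

    public static void main(String[] args) {
        List<List<String>> log = List.of(
            List.of("register", "check", "pay", "archive"),
            List.of("register", "pay", "check", "archive"));
        System.out.println(mine(log));
        // e.g. register -> {check, pay}, check -> {archive, pay}, pay -> {archive, check}
    }
}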
53. An Agent Based Intrusion Detection, Response and Blocking using
signature method in Active Networks
2006/Java

Abstract: As attackers use automated methods to inflict widespread damage on vulnerable
systems connected to the network, it has become painfully clear that traditional manual methods
of protection do not suffice. This paper discusses an intrusion prevention approach based on
active networks, combining intrusion detection and response to provide rapid response to
vulnerability advisories. It includes an intrusion detector and blocker that can provide interim
protection against a limited and changing set of high-likelihood or high-priority threats. It is
expected that this mechanism can be easily and adaptively configured and deployed to keep pace
with the ever-evolving threats on the network. Intrusion detection and response are based on an
agent system, and digital signatures are used to provide security.

54. A Novel Secure Communication Protocol for Ad Hoc networks [SCP]
2006/Java

Abstract: An ad hoc network is a self-organized collection of mobile nodes without any
centralized access point; it also faces a topology control problem that leads to high power
consumption and a lack of security while routing packets between mobile hosts.
Authentication is one of the important security requirements of a communication network. The
common authentication schemes are not applicable in ad hoc networks. In this paper, we propose
a secure communication protocol for communication between two nodes in ad hoc networks. This
is achieved by using clustering techniques. We present a novel secure communication framework
for ad hoc networks (SCP); which describes authentication and confidentiality when packets are
distributed between hosts within the cluster and between clusters. The cluster head nodes
execute administrative functions and hold the network key used for certification. The cluster head
nodes (CHs) perform the major operations of our SCP framework with the help of Kerberos
authentication and symmetric-key cryptography, making it secure, reliable, transparent, and
scalable, with low overhead.
55. ITP: An Image Transport Protocol for the Internet
2002/Java
Abstract: Images account for a significant and growing fraction of Web downloads. The
traditional approach to transporting images uses TCP, which provides a generic reliable in-order
byte stream abstraction, but which is overly restrictive for image data. We analyze the progression
of image quality at the receiver with time, and show that the in-order delivery abstraction provided
by a TCP-based approach prevents the receiver application from processing and rendering
portions of an image when they actually arrive. The end result is that an image is rendered in
bursts interspersed with long idle times rather than smoothly. This paper describes the design,
implementation, and evaluation of the image transport protocol (ITP) for image transmission over
loss-prone congested or wireless networks. ITP improves user-perceived latency using
application-level framing (ALF) and out-of-order application data unit (ADU) delivery, achieving
significantly better interactive performance as measured by the evolution of peak signal-to-noise
ratio (PSNR) with time at the receiver. ITP runs over UDP, incorporates receiver-driven selective
reliability, uses the congestion manager (CM) to adapt to network congestion, and is customizable
for specific image formats (e.g., JPEG and JPEG2000). ITP enables a variety of new receiver
post-processing algorithms such as error concealment that further improve the interactivity and
responsiveness of reconstructed images. Performance experiments using our implementation
across a variety of loss conditions demonstrate the benefits of ITP in improving the interactivity of
image downloads at the receiver.
56. Hybrid Intrusion Detection with Weighted Signature Generation over
Anomalous Internet Episodes (HIDS)
2007/J2EE

Abstract: This paper reports the design principles and evaluation results of a new experimental
hybrid intrusion detection system (HIDS). This hybrid system combines the advantages of low
false-positive rate of signature-based intrusion detection system (IDS) and the ability of anomaly
detection system (ADS) to detect novel unknown attacks. By mining anomalous traffic episodes
from Internet connections, we build an ADS that detects anomalies beyond the capabilities of
signature-based SNORT or Bro systems. A weighted signature generation scheme is developed
to integrate ADS with SNORT by extracting signatures from anomalies detected. HIDS extracts
signatures from the output of ADS and adds them into the SNORT signature database for fast and
accurate intrusion detection. By testing our HIDS scheme over real-life Internet trace data mixed
with 10 days of Massachusetts Institute of Technology/ Lincoln Laboratory (MIT/LL) attack data
set, our experimental results show a 60 percent detection rate of the HIDS, compared with 30
percent and 22 percent in using the SNORT and Bro systems, respectively. This sharp increase in
detection rate is obtained with less than 3 percent false alarms. The signatures generated by ADS
upgrade the SNORT performance by 33 percent. The HIDS approach proves the vitality of
detecting intrusions and anomalies, simultaneously, by automated data mining and signature
generation over Internet connection episodes.
57. Incremental deployment service of Hop by hop multicast routing protocol
2006/Java

Abstract: IP multicast is facing a slow take-off although it has been a hotly debated topic for
more than a decade. Many reasons are responsible for this status. Hence, the Internet is likely to
be organized with both unicast and multicast enabled networks. Thus, it is of utmost importance to
design protocols that allow the progressive deployment of the multicast service by supporting
unicast clouds. This paper presents HBH (hop-by-hop multicast routing protocol). HBH adopts the
source-specific channel abstraction to simplify address allocation and implements data distribution
using recursive unicast trees, which allow the transparent support of unicast-only routers. An
important original feature of HBH is its tree construction algorithm that takes into account the
unicast routing asymmetries. Since most multicast routing protocols rely on the unicast
infrastructure, the unicast asymmetries impact the structure of the multicast trees. We show
through simulation that HBH outperforms other multicast routing protocols in terms of the delay
experienced by the receivers and the bandwidth consumption of the multicast trees. Additionally,
we show that HBH can be incrementally deployed and that with a small fraction of HBH-enabled
routers in the network HBH outperforms application-layer multicast.
58. Network border patrol: preventing congestion collapse and promoting
fairness in the Internet
2004/Java

Abstract: The Internet's excellent scalability and robustness result in part from
the end-to-end nature of Internet congestion control. End-to-end congestion
control algorithms alone, however, are unable to prevent the congestion collapse
and unfairness created by applications that are unresponsive to network
congestion. To address these maladies, we propose and investigate a novel
congestion-avoidance mechanism called network border patrol (NBP). NBP
entails the exchange of feedback between routers at the borders of a network in
order to detect and restrict unresponsive traffic flows before they enter the
network, thereby preventing congestion within the network. Moreover, NBP is
complemented with the proposed enhanced core-stateless fair queueing
(ECSFQ) mechanism, which provides fair bandwidth allocations to competing
flows. Both NBP and ECSFQ are compliant with the Internet philosophy of
pushing complexity toward the edges of the network whenever possible.
Simulation results show that NBP effectively eliminates congestion collapse and
that, when combined with ECSFQ, approximately max-min fair bandwidth
allocations can be achieved for competing flows.
59. Application of BPCS steganography to wavelet compressed video
2004/Java

Abstract: This paper presents a steganography method using lossy compressed
video which provides a natural way to send a large amount of secret data. The
proposed method is based on wavelet compression for video data and bit-plane
complexity segmentation (BPCS) steganography. In wavelet-based video
compression methods such as 3-D set partitioning in hierarchical trees (SPIHT)
algorithm and motion-JPEG2000, wavelet coefficients in discrete wavelet
transformed video are quantized into a bit-plane structure and therefore BPCS
steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS
steganography and motion-JPEG2000-BPCS steganography are presented and
tested, which are the integration of 3-D SPIHT video coding and BPCS
steganography and that of motion-JPEG2000 and BPCS, respectively.
Experimental results show that 3-D SPIHT-BPCS is superior to motion-
JPEG2000-BPCS with regard to embedding performance.
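At the heart of BPCS is a complexity measure on small bit-plane blocks that decides whether a block is noise-like enough to be replaced with secret data. The sketch below uses the common border-count measure on an 8x8 block; the 0.3 threshold is a typical value assumed here, and the wavelet/SPIHT integration is not shown:

// Sketch of the BPCS embeddability test: a bit-plane block is "noise-like"
// (and may carry secret data) if its black/white border count is high enough.
class BpcsComplexity {
    static final int N = 8;                         // block size used by BPCS
    static final int MAX_BORDERS = 2 * N * (N - 1); // 112 for an 8x8 block

    static double complexity(int[][] block) {
        int borders = 0;
        for (int y = 0; y < N; y++)
            for (int x = 0; x < N; x++) {
                if (x + 1 < N && block[y][x] != block[y][x + 1]) borders++; // horizontal border
                if (y + 1 < N && block[y][x] != block[y + 1][x]) borders++; // vertical border
            }
        return (double) borders / MAX_BORDERS;
    }

    // A block may be replaced by secret data when its complexity exceeds the threshold.
    static boolean embeddable(int[][] block, double threshold) {
        return complexity(block) > threshold;
    }

    public static void main(String[] args) {
        int[][] checker = new int[N][N];
        for (int y = 0; y < N; y++)
            for (int x = 0; x < N; x++) checker[y][x] = (x + y) % 2;  // maximally complex block
        System.out.println(complexity(checker) + " " + embeddable(checker, 0.3));
    }
}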
60. Neural Networks for Handwritten Characters and Digits
2003/VC++

Abstract: This article chronicles the development of an artificial neural network
designed to recognize handwritten digits. Although some theory of neural
networks is given here, it would be better if you already understood some neural
network concepts, like neurons, layers, weights, and backpropagation. The neural
network described here is not a general-purpose neural network, and it's not
some kind of a neural network workbench. Rather, we will focus on one very
specific neural network (a five-layer convolutional neural network) built for one
very specific purpose (to recognize handwritten digits).

The idea of using neural networks for the purpose of recognizing handwritten digits is not a new
one. The inspiration for the architecture described here comes from articles written by two
separate authors. The first is Dr. Yann LeCun, who was an independent discoverer of the basic
backpropagation algorithm. Dr. LeCun hosts an excellent site on his research into neural
networks. In particular, you should view his "Learning and Visual Perception" section, which uses
animated GIFs to show results of his research. The MNIST database (which provides the
database of handwritten digits) was developed by him. I used two of his publications as primary
source materials for much of my work, and I highly recommend reading his other publications too
(they're posted at his site). Unlike many other publications on neural networks, Dr. LeCun's
publications are not inordinately theoretical and math-intensive; rather, they are extremely
readable, and provide practical insights and explanations.
61. Selective Encryption of Still Image
2000/VB, C, Java

Abstract: In some applications, it is relevant to hide the content of a message when it enters an
insecure channel. The accepted view among professional cryptographers is that the encryption
algorithm should be published, whereas the key must be kept secret. In the field of image
cryptography, the focus has been put on steganography, and in particular on watermarking during
the last years. Watermarking, as opposed to steganography, has the additional requirement of
robustness against possible image transformations. Watermarks are usually made invisible and
should not be detectable. In applications requiring transmission, the image is first compressed,
because it saves bandwidth. Then the image is encrypted. There is a need for a technique called
selective encryption of compressed images with messages. The work initially covers the aims of
image encryption and the various existing methods. Usually, during encryption, all the information
is encrypted, but this is not mandatory: only a part of the image content is encrypted with
messages, so that the encrypted images can still be visualized, although not with full precision.
This concept leads to techniques that can simultaneously provide security functions and an overall
visual check, which might be suitable in applications such as searching through a shared image
database or a distributed database for image storage. The principle of selective encryption is first
applied to compressed images with messages. This technique is proven not to interfere with the
decoding process, in the sense that it achieves a constant bit rate and that bit streams remain
compliant with the JPEG specifications.
62. An Acknowledgment-Based Approach For The Detection Of Routing
Misbehavior In MANETs
2007/Java

Abstract: We study routing misbehavior in MANETs (Mobile Ad Hoc Networks) in this paper. In
general, routing protocols for MANETs are designed based on the assumption that all participating
nodes are fully cooperative. However, due to the open structure and scarcely available battery-
based energy, node misbehaviors may exist. One such routing misbehavior is that some selfish
nodes will participate in the route discovery and maintenance processes but refuse to forward
data packets. In this paper, we propose the 2ACK scheme that serves as an add-on technique for
routing schemes to detect routing misbehavior and to mitigate their adverse effect. The main idea
of the 2ACK scheme is to send two-hop acknowledgment packets in the opposite direction of the
routing path. In order to reduce additional routing overhead, only a fraction of the received data
packets are acknowledged in the 2ACK scheme. Analytical and simulation results are presented
to evaluate the performance of the proposed scheme.
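A schematic sketch of the bookkeeping a 2ACK-like scheme needs at the observing node follows: only a fraction of forwarded packets is expected to be acknowledged from two hops away, and the link is flagged when the ratio of missing acknowledgments crosses a threshold. The class, parameters, and thresholds below are illustrative, not the paper's:

import java.util.*;

// Sketch of 2ACK-style misbehavior detection at the sending node of a link:
// for a fraction p of data packets it expects a two-hop acknowledgment, and it
// flags the next-hop link as misbehaving when too many of those go missing.
class TwoAckMonitor {
    final double ackFraction;      // fraction of packets that require a 2ACK
    final double missThreshold;    // tolerated ratio of missing 2ACKs
    final Set<Integer> awaiting = new HashSet<>();
    int expected = 0, missed = 0;
    final Random rng = new Random(7);

    TwoAckMonitor(double ackFraction, double missThreshold) {
        this.ackFraction = ackFraction;
        this.missThreshold = missThreshold;
    }

    void onSend(int packetId) {
        if (rng.nextDouble() < ackFraction) { awaiting.add(packetId); expected++; }
    }

    void onTwoAck(int packetId) { awaiting.remove(packetId); }

    void onTimeout(int packetId) {
        if (awaiting.remove(packetId)) missed++;   // a required 2ACK never arrived
    }

    boolean linkMisbehaving() {
        return expected > 0 && (double) missed / expected > missThreshold;
    }

    public static void main(String[] args) {
        TwoAckMonitor m = new TwoAckMonitor(0.2, 0.5);
        for (int id = 0; id < 100; id++) { m.onSend(id); m.onTimeout(id); } // no 2ACKs arrive
        System.out.println("misbehaving = " + m.linkMisbehaving());
    }
}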
63. Neural Network-Based Face Detection
1998/VC++

Abstract: We present a neural network-based upright frontal face detection system. A retinal
connected neural network examines small windows of an image, and decides whether each
window contains a face. The system arbitrates between multiple networks to improve performance
over a single network. We present a straightforward procedure for aligning positive face examples
for training. To collect negative examples, we use a bootstrap algorithm, which adds false
detections into the training set as training progresses. This eliminates the difficult task of manually
selecting non face training examples, which must be chosen to span the entire space of non face
images. Simple heuristics, such as using the fact that faces rarely overlap in images, can further
improve the accuracy. Comparisons with several other state-of-the-art face detection systems are
presented, showing that our system has comparable performance in terms of detection and false-
positive rates.
64. Homogenous Network Control and Implementation
2005/Java

Abstract: This project, titled “Homogenous Network Control and Implementation”, presents a
way of developing integrity-preserved computer networks. The proposed generic network is based
on a detailed review and comparative analysis of ongoing research work in the field of
homogenous distributed systems and fault-tolerant systems. The presented network facilitates
easy sharing of information among the systems in the network by establishing a peer to peer
network connection among all the systems.

Homogenous Networks of Workstations (HNOW systems) comprise similar kinds of
PCs and workstations connected over a single network. In a homogenous network, each machine
has the ability to send data to another machine, irrespective of the working conditions of the
server. In general, a set of networks is classified as homogenous if the networks are the “same” (e.g.,
using the same basic technology, frame format and addressing); a set of networks is classified as
heterogeneous if the set contains networks that differ. The theme of the project is centered on the
development of a homogenous network and establishment of process continuation module, which
plays an imperative part in maintaining the network integrity.
65. Retrieving Files Using Content Based Searching and presenting it in
Carousel view
2005/Java

Abstract: The current project is divided into four inter-dependent phases.

Phase 1 deals with designing algorithms for summarizing and indexing text files. In case of
multimedia files the meta data files are created manually by the programmers. This phase also
involves algorithms for converting .doc and .pdf files to .txt format. In this system the searching is
not done at run time, as indexing is done beforehand.

In Phase 2 folders would be replaced by a new construct called a library. A library is a virtual
folder that intelligently gathers information about files on the system and presents them to the
users. The concept of folders ceases to exist. Instead, the users are privileged enough to view
similar files together irrespective of their location in the physical memory. This enables retrieval of
files based on various parameters. This concept is named as CAROUSEL VIEW after the
proposed system with the same name to be launched by the Microsoft’s Windows Longhorn which
is a complete revolution in itself.

Phase 3 establishes a common peer to peer (P2P) protocol that enables remote querying over
other terminals in the network. This module allows this software to be used across the internet and
also over various LANs. In a nutshell, this project aims at creating a system which is highly
enhanced over the existing traditional ones and providing a user friendly environment.
66. XTC: A Practical Topology Control Algorithm for Ad-Hoc Networks
2004/Java

Abstract: The XTC ad-hoc network topology control algorithm introduced here shows three main
advantages over previously proposed algorithms. First, it is extremely simple and strictly local.
Second, it does not assume the network graph to be a unit disk graph; XTC proves correct also on
general weighted network graphs. Third, the algorithm does not require availability of node
position information. Instead, XTC operates with a general notion of order over the neighbors' link
qualities. In the special case of the network graph being a unit disk graph, the resulting topology
proves to have bounded degree, to be a planar graph, and - on average-case graphs - to be a
good spanner.
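Under the simplifying assumption of a symmetric, globally known link-quality matrix (in the real protocol each node only learns its neighbors' quality orderings), the XTC selection rule can be sketched as:

import java.util.*;

// Sketch of the XTC rule at node u: walk neighbors from best to worst link
// quality; drop neighbor v if some already-kept neighbor w offers v a better
// link than u does (u can then reach v "via" w in the resulting topology).
class XtcNode {
    static List<Integer> selectNeighbors(int u, double[][] quality) {
        int n = quality.length;
        List<Integer> order = new ArrayList<>();
        for (int v = 0; v < n; v++) if (v != u && quality[u][v] > 0) order.add(v);
        order.sort((a, b) -> Double.compare(quality[u][b], quality[u][a])); // best first

        List<Integer> kept = new ArrayList<>();
        for (int v : order) {
            boolean covered = false;
            for (int w : kept)
                if (quality[w][v] > quality[u][v]) { covered = true; break; }
            if (!covered) kept.add(v);
        }
        return kept;
    }

    public static void main(String[] args) {
        // 0.0 means "not a neighbor"; the matrix is symmetric in this toy example.
        double[][] q = {
            {0.0, 0.9, 0.4, 0.0},
            {0.9, 0.0, 0.8, 0.5},
            {0.4, 0.8, 0.0, 0.7},
            {0.0, 0.5, 0.7, 0.0}};
        System.out.println(selectNeighbors(0, q)); // node 2 is dropped: node 1 covers it
    }
}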
67. A near-optimal multicast scheme for mobile ad hoc networks using a hybrid
genetic algorithm
2005/Java

Abstract: Multicast routing is an effective way to communicate among multiple hosts in a
network. It outperforms the basic broadcast strategy by sharing resources along general links,
while sending information to a set of predefined multiple destinations concurrently. However, it is
vulnerable to component failure in ad hoc networks due to the lack of redundancy, multiple paths,
and multicast tree structure. Tree graph optimization problems (GOP) are usually difficult and time
consuming NP-hard or NP-complete problems. Genetic algorithms (GA) have been proven to be
an efficient technique for solving the GOP, in which well-designed chromosomes and appropriate
operators are key factors that determine the performance of the GAs. Limited link, path
constraints, and mobility of network hosts make the multicast routing protocol design particularly
challenging in wireless ad hoc networks. Encoding trees is a critical scheme in GAs for solving
these problems because each code should represent a tree. Prufer number is the most
representative method of vertex encoding, which is a string of n-2 integers and can be
transformed to an n-node tree. However, genetic algorithm based on Prufer encoding (GAP) does
not preserve locality, since changing one element of its vector causes a dramatic change in its
corresponding tree topology. In this paper, we propose a novel GA based on sequence and
topology encoding (GAST) for multicast routing in wireless ad hoc networks, and we generalize the
GOP of tree-based multicast protocols along with three associated operators. This reveals an
efficient method for reconstructing the multicast tree topology, and the experimental results
demonstrate the effectiveness of GAST compared to the GAP technique.
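Since the abstract contrasts GAST with Prufer-based encoding (GAP), the classic Prufer-to-tree decoding is sketched below to make the locality problem concrete: changing a single number in the sequence can rewire the resulting tree substantially. Node labels are 0-based, and the example sequence is arbitrary:

import java.util.*;

// Classic Prufer decoding: an (n-2)-element sequence over labels 0..n-1
// determines a unique labeled tree with n nodes.
class PruferDecode {
    static List<int[]> toTree(int[] prufer) {
        int n = prufer.length + 2;
        int[] degree = new int[n];
        Arrays.fill(degree, 1);
        for (int v : prufer) degree[v]++;

        List<int[]> edges = new ArrayList<>();
        PriorityQueue<Integer> leaves = new PriorityQueue<>();
        for (int v = 0; v < n; v++) if (degree[v] == 1) leaves.add(v);

        for (int v : prufer) {
            int leaf = leaves.poll();            // smallest current leaf
            edges.add(new int[]{leaf, v});
            if (--degree[v] == 1) leaves.add(v); // v may become a leaf itself
        }
        int a = leaves.poll(), b = leaves.poll();
        edges.add(new int[]{a, b});              // connect the final remaining pair
        return edges;
    }

    public static void main(String[] args) {
        for (int[] e : toTree(new int[]{3, 3, 4}))   // decodes to a 5-node tree
            System.out.println(e[0] + " - " + e[1]);
    }
}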
68. Mobile Agents In Distributed Multimedia Database Systems
2004/Java

Abstract: The size of networks is increasing rapidly, and this fact is not restricted to the Internet
alone. Many intra- and inter-organization networks are affected by this trend, too. A side effect of
this growth is the increase of network traffic. This development leads to new challenges and we
have to think about new technologies. Mobile agent systems are one answer to these challenges.
Mobile agents are an emerging technology attracting interest from the fields of distributed
systems, information retrieval, electronic commerce and artificial intelligence.

A mobile agent is an executing program that can migrate during execution from machine to
machine in a heterogeneous network. On each machine, the agent interacts with stationary
service agents and other resources to accomplish its task, returning to its home site with a final
result when that task is finished. Mobile agents are particularly attractive in distributed information-
retrieval applications. By moving to the location of an information resource, the agent can search
the resource locally, eliminating the transfer of intermediate results across the network and
reducing end-to-end latency. Mobile agents are goal-oriented, can communicate with other
agents, and can continue to operate even after the machine that launched them has been
removed from the network.

The mobile feature enables the agent to travel to the host where the data are physically stored.
This is obviously of great interest in distributed multimedia database systems where we have in
most cases large binary objects. This project integrates mobile agent technology into a distributed
database system. The advantage of this approach is the combination of mobile agent features
(e.g. autonomy, mobility, enhancement of functionality) and database services such as recovery,
transaction handling, concurrency and security. This project aims at facilitating storage and
retrieval of multimedia data from the distributed multimedia database using mobile agents based
on host database which will provide the result to the user upon request.
69. Image Stream Transfer Using Real-Time Transmission Protocol
2006/Java

Abstract: Images account for a significant and growing fraction of Web downloads. The
traditional approach to transporting images uses TCP, which provides a generic reliable in-order
byte-stream abstraction, but which is overly restrictive for image data. We analyze the progression
of image quality at the receiver with time, and show that the in-order delivery abstraction provided
by a TCP-based approach prevents the receiver application from processing and rendering
portions of an image when they actually arrive. The end result is that an image is rendered in
bursts interspersed with long idle times rather than smoothly. This paper describes the design,
implementation, and evaluation of the image transport protocol (ITP) for image transmission over
loss-prone congested or wireless networks. ITP improves user-perceived latency using
application-level framing (ALF) and out-of-order application data unit (ADU) delivery, achieving
significantly better interactive performance as measured by the evolution of peak signal-to-noise
ratio (PSNR) with time at the receiver. ITP runs over UDP, incorporates receiver-driven selective
reliability, uses the congestion manager (CM) to adapt to network congestion, and is customizable
for specific image formats (e.g., JPEG and JPEG2000). ITP enables a variety of new receiver
post-processing algorithms such as error concealment that further improve the interactivity and
responsiveness of reconstructed images. Performance experiments using our implementation
across a variety of loss conditions demonstrate the benefits of ITP in improving the interactivity of
image downloads at the receiver.
70. Neural Networks for Unicode Optical Character Recognition
C# .Net

Abstract: The central objective of this project is demonstrating the capabilities of Artificial
Neural Network implementations in recognizing extended sets of optical language symbols. The
applications of this technique range from document digitizing and preservation to handwritten text
recognition in handheld devices. The classic difficulty of being able to correctly recognize even
typed optical language symbols is the complex irregularity among pictorial representations of the
same character due to variations in fonts, styles and size. This irregularity undoubtedly widens
when one deals with handwritten characters.

Hence the conventional programming methods of mapping symbol images into matrices,
analyzing pixel and/or vector data and trying to decide which symbol corresponds to which
character would yield little or no realistic results. Clearly the needed methodology will be one that
can detect ‘proximity’ of graphic representations to known symbols and make decisions based on
this proximity. To implement such proximity algorithms in the conventional programming one
needs to write endless code, one for each type of possible irregularity or deviation from the
assumed output either in terms of pixel or vector parameters, clearly not a realistic fare. An
emerging technique in this particular application area is the use of Artificial Neural Network
implementations with networks employing specific guides (learning rules) to update the links
(weights) between their nodes. Such networks can be fed the data from the graphic analysis of the
input picture and trained to output characters in one or another form. Specifically some network
models use a set of desired outputs to compare with the output and compute an error to make use
of in adjusting their weights. Such learning rules are termed as Supervised Learning.

One such network with supervised learning rule is the Multi-Layer Perceptron (MLP) model. It
uses the Generalized Delta Learning Rule for adjusting its weights and can be trained for a set of
input/desired output values in a number of iterations. The very nature of this particular model is
that it will force the output to one of nearby values if a variation of input is fed to the network that it
is not trained for, thus solving the proximity issue. Both concepts will be discussed in the
introduction part of this report. The project has employed the MLP technique mentioned and
excellent results were obtained for a number of widely used font types. The technical approach
followed in processing input images, detecting graphic symbols, analyzing and mapping the
symbols and training the network for a set of desired Unicode characters corresponding to the
input images are discussed in the subsequent sections. Even though the implementation might
have some limitations in terms of functionality and robustness, the researcher is confident that it
fully serves the purpose of addressing the desired objectives.
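A very small sketch of the generalized delta rule the report relies on is given below, written in Java for consistency with the rest of this list even though the project itself targets C#. It uses one hidden layer of sigmoid units and online updates; the layer sizes, learning rate, and toy AND target are placeholders, and the image preprocessing described above is not shown:

import java.util.Random;

// Minimal one-hidden-layer MLP trained online with the generalized delta rule.
class TinyMlp {
    final int in, hid, out;
    final double[][] w1, w2;          // weights input->hidden, hidden->output
    final Random rng = new Random(1);

    TinyMlp(int in, int hid, int out) {
        this.in = in; this.hid = hid; this.out = out;
        w1 = new double[hid][in];
        w2 = new double[out][hid];
        for (double[] row : w1) for (int i = 0; i < in; i++) row[i] = rng.nextGaussian() * 0.1;
        for (double[] row : w2) for (int i = 0; i < hid; i++) row[i] = rng.nextGaussian() * 0.1;
    }

    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // One forward pass plus one delta-rule update; returns the output activations.
    double[] train(double[] x, double[] target, double lr) {
        double[] h = new double[hid], y = new double[out];
        for (int j = 0; j < hid; j++) {
            double s = 0; for (int i = 0; i < in; i++) s += w1[j][i] * x[i];
            h[j] = sigmoid(s);
        }
        for (int k = 0; k < out; k++) {
            double s = 0; for (int j = 0; j < hid; j++) s += w2[k][j] * h[j];
            y[k] = sigmoid(s);
        }
        // Output deltas: (target - y) * y * (1 - y); hidden deltas backpropagate them.
        double[] dOut = new double[out], dHid = new double[hid];
        for (int k = 0; k < out; k++) dOut[k] = (target[k] - y[k]) * y[k] * (1 - y[k]);
        for (int j = 0; j < hid; j++) {
            double s = 0; for (int k = 0; k < out; k++) s += dOut[k] * w2[k][j];
            dHid[j] = s * h[j] * (1 - h[j]);
        }
        for (int k = 0; k < out; k++) for (int j = 0; j < hid; j++) w2[k][j] += lr * dOut[k] * h[j];
        for (int j = 0; j < hid; j++) for (int i = 0; i < in; i++) w1[j][i] += lr * dHid[j] * x[i];
        return y;
    }

    public static void main(String[] args) {
        TinyMlp net = new TinyMlp(2, 4, 1);
        double[][] xs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[][] ts = {{0}, {0}, {0}, {1}};                // toy AND target
        for (int epoch = 0; epoch < 20000; epoch++)
            for (int i = 0; i < 4; i++) net.train(xs[i], ts[i], 0.5);
        for (int i = 0; i < 4; i++)                          // lr = 0 gives a pure forward pass
            System.out.printf("%d AND %d ~ %.2f%n", (int) xs[i][0], (int) xs[i][1],
                              net.train(xs[i], ts[i], 0)[0]);
    }
}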
