In recent years, the exponential growth of Internet users with increased bandwidth
requirements has led to the emergence of the next generation of IP routers.
Distributed architecture is one of the promising trends providing petabit routers
with a large switching capacity and high-speed interfaces. Distributed routers are
designed with an optical switch fabric interconnecting line and control cards.
Computing and memory resources are available on both control and line cards to
perform routing and forwarding tasks. This new hardware architecture is not
efficiently utilized by the traditional software models, where a single control card is
responsible for all routing and management operations. The routing table manager
plays a critical role by managing routing information and, in particular,
a forwarding information table. This article presents a distributed architecture set
up around a distributed and scalable routing table manager. This architecture also
provides improvements in robustness and resiliency. The proposed
architecture
.NET
Abstract: This work was motivated by the need to achieve low latency in an
input-queued, centrally scheduled cell switch for high-performance computing applications;
specifically, the aim is to reduce the latency incurred between issuance of a
request and arrival of the corresponding grant. We introduce a speculative
transmission scheme to significantly reduce the average latency by allowing cells
to proceed without waiting for a grant. It operates in conjunction with any
centralized matching algorithm to achieve a high maximum utilization. An
analytical model is presented to investigate the efficiency of the speculative
transmission scheme employed in a non-blocking N×NR input-queued crossbar
switch with R receivers per output. The results demonstrate that this latency can be almost
entirely eliminated for loads up to 50%. Our simulations confirm the analytical
results.
JAVA
14. VISION BASED PROCESSING FOR REAL TIME 3-D DATA
ACQUISITION BASED ON CODED STRUCTURED LIGHT
.NET
Abstract: The structured light vision system is successfully used for the
measurement of 3-D surfaces. A limitation of previous schemes is
that tens of pictures must be captured to recover a 3-D scene. This paper
presents an idea for real-time acquisition of 3-D surface data by a specially coded
vision system. To achieve 3-D measurement for a dynamic scene, the data
acquisition must be performed with only a single image. A principle of uniquely
color-encoded pattern projection is proposed to design a color matrix for improving
the reconstruction efficiency. The matrix is produced by a special code sequence
and a number of state transitions. A color projector is controlled by a computer to
generate the desired color patterns in the scene. The unique indexing of the light
codes is crucial here for color projection since it is essential that each light grid be
uniquely identified by incorporating local neighborhoods so that 3-D reconstruction
can be performed with only local analysis of a single image. A scheme is presented
to describe such a vision processing method for fast 3-D data acquisition. Practical
experimental performance is provided to analyze the efficiency of the proposed
methods.
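The uniquely color-encoded pattern idea can be illustrated with a small sketch (my own simplification under assumed parameters, not the authors' actual code design): a greedy, De Bruijn-style construction makes every window of k consecutive colors unique, so a light grid line can be indexed from a single local neighborhood.

```python
def code_matrix(colors, k=3):
    """Build a row of colour codes in which every window of k consecutive
    colours is unique, so a grid line can be indexed from a local view."""
    row, seen = list(colors[:k]), {tuple(colors[:k])}
    while True:
        for c in colors:
            cand = tuple(row[-k + 1:]) + (c,)
            if cand not in seen:
                seen.add(cand)
                row.append(c)
                break
        else:
            return row  # no unseen extension remains; construction is done

def index_of(row, window):
    """Recover the unique position of a k-colour local neighbourhood."""
    k = len(window)
    for i in range(len(row) - k + 1):
        if tuple(row[i:i + k]) == tuple(window):
            return i
    return None
```

Because each window appears exactly once, decoding needs only local analysis of a single image, as the abstract requires.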
Abstract: This project aims at an efficient content-based retrieval process for relative
temporal patterns using a signature-based indexing method. Rule discovery
algorithms in data mining generate a large number of patterns/rules, sometimes
even exceeding the size of the underlying database, with only a small fraction
being of interest to the user. It is generally understood that interpreting the
discovered patterns/rules to gain insight into the domain is an important phase in
the knowledge discovery process. However, when there are a large number of
generated rules, identifying and analyzing those that are interesting becomes
difficult. We address the problem of efficiently retrieving subsets of a large
collection of previously discovered temporal patterns. When processing queries on
a small database of temporal patterns, sequential scanning of the patterns
followed by straightforward computation of the query conditions is sufficient.
JAVA/J2EE
However, as the database grows, this procedure can be too slow, and indexes
should be built to speed up the queries. The problem is to determine what types of
indexes are suitable for improving the speed of queries involving the content of
temporal patterns. We propose a system with a signature-based indexing method to
speed up content-based queries on temporal patterns; it is also used to optimize the
storage and retrieval of a large collection of relative temporal patterns. The use of
signature files improves the performance of temporal pattern retrieval. This
retrieval system is currently being combined with visualization techniques for
monitoring the behavior of a single pattern or a group of patterns over time.
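As a rough illustration of the signature-file idea (an assumption-laden sketch, not the system described above), each pattern's items can be superimposed into a bit signature; a query whose signature is not a bit-subset of a stored signature is ruled out cheaply, and the surviving candidates are verified exactly:

```python
def signature(items, bits=32, k=2):
    """Superimpose k hashed bits per item into one integer bit signature."""
    sig = 0
    for it in items:
        for i in range(k):
            sig |= 1 << (hash((it, i)) % bits)
    return sig

class SignatureIndex:
    def __init__(self):
        self.entries = []  # (signature, pattern) pairs

    def add(self, pattern):
        self.entries.append((signature(pattern), frozenset(pattern)))

    def query(self, query_items):
        qsig = signature(query_items)
        q = set(query_items)
        # Signature test: every query bit must be set in the stored signature.
        # Superimposed coding admits false positives, so verify candidates.
        return [p for sig, p in self.entries
                if sig & qsig == qsig and q <= p]
```

The bitwise test discards most non-matching patterns without touching them, which is the speed-up the abstract claims for large pattern collections.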
Abstract: An efficient and distributed scheme for file mapping or file lookup is
critical in decentralizing metadata management within a group of metadata
servers; here the technique used is called Hierarchical Bloom Filter Arrays
(HBA), which maps filenames to the metadata servers holding their metadata.
Bloom filter arrays with different levels of accuracy are used on each metadata
server. The first, with lower accuracy, captures the destination
metadata server information of frequently accessed files. The other array
maintains the destination metadata information of all files. Simulation results show
our HBA design to be highly effective and efficient in improving the performance
and scalability of file systems in clusters with 1,000 to 10,000 nodes (or super
clusters) and with the amount of data in the petabyte scale or higher. HBA
reduces metadata operations by using a single metadata architecture instead of
16 metadata servers.
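A minimal sketch of the core lookup idea, assuming one Bloom filter per metadata server (the actual HBA design layers two filter arrays of different accuracy, which this toy omits):

```python
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, key):
        # k deterministic hash positions derived from SHA-256.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def __contains__(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))

class HBALookup:
    """One Bloom filter per metadata server; a hit names candidate servers."""
    def __init__(self, n_servers):
        self.filters = [BloomFilter() for _ in range(n_servers)]

    def place(self, filename, server):
        self.filters[server].add(filename)

    def lookup(self, filename):
        # False positives may yield extra candidates; a miss everywhere
        # would fall back to a broadcast in the real design.
        return [s for s, f in enumerate(self.filters) if filename in f]
```

Each server keeps only compact bit arrays for the others, so filename-to-server mapping is resolved without a central directory.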
By employing IP spoofing, attackers can evade detection and put a substantial burden
on the destination network for policing attack packets. In this paper, we propose an
inter-domain packet filter (IDPF) architecture that can mitigate the level of IP
spoofing on the Internet. A key feature of our scheme is that it does not require
global routing information. IDPFs are constructed from the information implicit in
Border Gateway Protocol (BGP) route updates and are deployed in network border
routers. We establish the conditions under which the IDPF framework correctly
works in that it does not discard packets with valid source addresses. Based on
extensive simulation studies, we show that, even with partial deployment on the
Internet, IDPFs can proactively limit the spoofing capability of attackers. In
addition, they can help localize the origin of an attack packet to a small number of
candidate networks.
Abstract: Malicious users can exploit the correlation among data to infer
sensitive information from a series of seemingly innocuous data accesses. Thus, we
develop an inference violation detection system to protect sensitive data content.
Based on data dependency, database schema, and semantic knowledge,
we construct a semantic inference model (SIM) that represents the
possible inference channels from any attribute to the pre-assigned sensitive
attributes. The SIM is then instantiated to a semantic inference graph (SIG) for
query-time inference violation detection.
For a single user case, when a user poses a query, the detection system will
examine his/her past query log and calculate the probability of inferring sensitive
information. The query request will be denied if the inference probability exceeds
the prespecified threshold.
J2EE
For multi-user cases, the users may share their query answers to increase
the inference probability. Therefore, we develop a model to evaluate collaborative
inference based on the query sequences of collaborators and their task-sensitive
collaboration levels.
Experimental studies reveal that information authoritativeness, communication
fidelity and honesty in collaboration are three key factors that affect the level of
achievable collaboration. An example is given to illustrate the use of the proposed
technique to prevent multiple collaborative users from deriving sensitive
information via inference.
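The query-time check can be caricatured in a few lines (hypothetical channel probabilities and a made-up threshold; the real SIG evaluation over attribute graphs is far richer):

```python
def inference_probability(channel):
    """Multiply conditional probabilities along one inference channel
    leading from queried attributes to a sensitive attribute."""
    p = 1.0
    for prob in channel:
        p *= prob
    return p

def check_query(query_log_channels, threshold=0.5):
    """Deny the query if any accumulated channel exceeds the threshold."""
    worst = max((inference_probability(c) for c in query_log_channels),
                default=0.0)
    return ("deny", worst) if worst > threshold else ("allow", worst)
```

The point is only the control flow: the log accumulates channels across queries, and the decision compares the worst channel against the prespecified threshold.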
31. SECURITY IN LARGE MEDIATOR PROTOCOLS JAVA
The combination of 3AQKDP (implicit) and 3AQKDPMA (explicit) quantum
cryptography is used to provide authenticated secure communication between
sender and receiver.
In quantum cryptography, quantum key distribution protocols (QKDPs) employ
quantum mechanisms to distribute session keys and public discussions to check for
eavesdroppers and verify the correctness of a session key. However, public
discussions require additional communication rounds between a sender and
receiver. An advantage of quantum cryptography is that it easily resists replay and passive
attacks.
A 3AQKDP with implicit user authentication ensures that confidentiality is
possible only for legitimate users and that mutual authentication is achieved only after
secure communication using the session key starts.
The implicit quantum key distribution protocol (3AQKDP) has two phases, a
setup phase and a distribution phase, to provide three-party authentication with
secure session key distribution. In this system there is no mutual understanding
between sender and receiver; both sender and receiver must communicate through a
trusted center.
The explicit quantum key distribution protocol (3AQKDPMA) likewise has two phases, a
setup phase and a distribution phase, to provide three-party authentication with
secure session key distribution. Here there is mutual understanding between sender and
receiver; both communicate directly, with
authentication by the trusted center.
A disadvantage of the separate 3AQKDP and 3AQKDPMA processes is that they provide
authentication only for the message, identifying security threats in the message but
not in the session key.
Networks employ link protection to achieve fast recovery from link failures. While
the first link failure can be protected using link protection, there are several
alternatives for protecting against the second failure. This paper formally classifies
the approaches to dual-link failure resiliency. One of the strategies to recover from
dual-link failures is to employ link protection for the two failed links independently,
which requires that two links may not use each other in their backup paths if they
may fail simultaneously. Such a requirement is referred to as backup link mutual
exclusion (BLME) constraint and the problem of identifying a backup path for every
link that satisfies the above requirement is referred to as the BLME problem. This
paper develops the necessary theory to establish the sufficient conditions for
existence of a solution to the BLME problem. Solution methodologies for the BLME
problem are developed using two approaches: 1) formulating the backup path
selection as an integer linear program; and 2) developing a polynomial-time heuristic
based on minimum cost path routing.
The ILP formulation and heuristic are applied to six networks and their performance
is compared with approaches that assume precise knowledge of dual-link failures. It
is observed that a solution exists for all six networks considered. The
heuristic approach is shown to obtain feasible solutions that are resilient to most
dual-link failures, although the backup path lengths may be significantly higher
than optimal. In addition, the paper illustrates the significance of knowledge of the
failure location by showing that a network with higher connectivity may require
less capacity than one with lower connectivity to recover from dual-link failures.
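The heuristic's core subproblem, finding a minimum-cost backup path that honors the BLME constraint, can be sketched with a Dijkstra search that simply bans the protected link and its mutual-exclusion set (a simplification under assumed inputs, not the paper's exact method):

```python
import heapq

def backup_path(graph, link, excluded):
    """Shortest backup path for `link` (u, v) that avoids the link itself
    and every link in `excluded` (its mutual-exclusion set).
    graph: {node: {neighbor: cost}}."""
    u, v = link
    banned = {frozenset(link)} | {frozenset(e) for e in excluded}
    dist, prev = {u: 0}, {}
    pq = [(0, u)]
    while pq:
        d, n = heapq.heappop(pq)
        if n == v:  # reached the far end: rebuild the path
            path = [v]
            while path[-1] != u:
                path.append(prev[path[-1]])
            return path[::-1]
        if d > dist.get(n, float("inf")):
            continue
        for m, c in graph.get(n, {}).items():
            if frozenset((n, m)) in banned:
                continue
            nd = d + c
            if nd < dist.get(m, float("inf")):
                dist[m], prev[m] = nd, n
                heapq.heappush(pq, (nd, m))
    return None  # no backup path satisfying the BLME constraint
```

A `None` result corresponds to the infeasible cases the paper's existence conditions characterize.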
Compared with previous approaches, the contributions of our approach include the
following.
1) The event detection accuracy is significantly improved due to the
incorporation of web-casting text analysis.
.NET
2) The proposed approach is able to detect exact event boundaries
and extract event semantics that are very difficult or impossible
to handle with previous approaches.
3) The proposed method is able to create a personalized summary
from both general and specific points of view related to a particular
game, event, player, or team according to the user's preference.
We present the framework of our approach and details of text analysis, video
analysis, text/video alignment, and personalized retrieval. The experimental results
on event boundary detection in sports video are encouraging and comparable to
the manually selected events. The evaluation of personalized retrieval shows it is effective
in helping meet users' expectations.
2006/Java
Abstract: On-demand routing protocols use route caches to make routing decisions. Due to
mobility, cached routes easily become stale. To address the cache staleness issue, prior work in
DSR used heuristics with ad hoc parameters to predict the lifetime of a link or a route. However,
heuristics cannot accurately estimate timeouts because topology changes are unpredictable. In
this paper, we propose proactively disseminating the broken link information to the nodes that
have that link in their caches. We define a new cache structure called a cache table and present a
distributed cache update algorithm. Each node maintains in its cache table the information
necessary for cache updates. When a link failure is detected, the algorithm notifies all reachable
nodes that have cached the link in a distributed manner. The algorithm does not use any ad hoc
parameters, thus making route caches fully adaptive to topology changes. We show that the
algorithm outperforms DSR with path caches and with Link-Max Life, an adaptive timeout
mechanism for link caches. We conclude that proactive cache updating is key to the adaptation of
on-demand routing protocols to mobility.
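The cache-table idea can be sketched as follows (a toy version under assumed structures; real DSR caches hold per-link metadata and propagate the notification hop by hop rather than by direct iteration):

```python
class CacheTable:
    def __init__(self):
        self.routes = {}  # destination -> cached route (list of nodes)

    def add_route(self, dest, route):
        self.routes[dest] = list(route)

    def links(self):
        """All undirected links appearing in any cached route."""
        return {frozenset(p) for r in self.routes.values()
                for p in zip(r, r[1:])}

    def remove_link(self, link):
        """Evict every cached route that traverses the broken link."""
        broken = frozenset(link)
        self.routes = {d: r for d, r in self.routes.items()
                       if broken not in {frozenset(p) for p in zip(r, r[1:])}}

def notify_link_failure(nodes, link):
    """On detecting a failed link, notify every reachable node whose
    cache table contains that link; each evicts its stale routes."""
    notified = []
    for name, table in nodes.items():
        if frozenset(link) in table.links():
            table.remove_link(link)
            notified.append(name)
    return notified
```

Because eviction is driven by the actual broken link rather than a timeout, no ad hoc lifetime parameter is needed, which is the adaptivity the abstract emphasizes.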
42. An Adaptive Programming Model for Fault-Tolerant Distributed Computing
Abstract: The face recognition is a fairly controversial subject right now. A system such as this
can recognize and track dangerous criminals and terrorists in a crowd, but some contend that it is
an extreme invasion of privacy. The proponents of large-scale face recognition feel that it is a
necessary evil to make our country safer. It could benefit the visually impaired and allow them to
interact more easily with the environment. Also, a computer vision-based authentication system
could be put in place to allow computer access or access to a specific room using face
recognition. Another possible application would be to integrate this technology into an artificial
intelligence system for more realistic interaction with humans.
Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models.
We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on
three different face data sets. Experimental results suggest that the proposed Laplacianface
approach provides a better representation and achieves lower error rates in face recognition.
Principal Component Analysis (PCA) is a statistical method under the broad title of factor analysis.
The purpose of PCA is to reduce the large dimensionality of the data space (observed variables)
to the smaller intrinsic dimensionality of feature space (independent variables), which are needed
to describe the data economically. This is the case when there is a strong correlation between
observed variables. The jobs which PCA can do are prediction, redundancy removal, feature
extraction, data compression, etc. Because PCA is a known powerful technique which can do
something in the linear domain, applications having linear models are suitable, such as signal
processing, image processing, system and control theory, communications, etc.
The main idea of using PCA for face recognition is to express the large 1-D vector of pixels
constructed from 2-D face image into the compact principal components of the feature space. This
is called eigenspace projection. Eigenspace is calculated by identifying the eigenvectors of the
covariance matrix derived from a set of face images (vectors).
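Eigenspace projection can be demonstrated with a tiny pure-Python power iteration (illustrative only; practical eigenface systems use optimized linear algebra over vectors of thousands of pixels):

```python
def top_principal_component(vectors, iters=200):
    """Power iteration on the covariance of mean-centred image vectors:
    returns (mean, unit eigenvector) spanning the leading eigenspace axis."""
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[j] for v in vectors) / n for j in range(d)]
    centred = [[v[j] - mean[j] for j in range(d)] for v in vectors]
    w = [1.0] * d
    for _ in range(iters):
        # Apply the covariance implicitly: C w = (1/n) * sum_i x_i (x_i . w)
        cw = [0.0] * d
        for x in centred:
            dot = sum(a * b for a, b in zip(x, w))
            for j in range(d):
                cw[j] += dot * x[j] / n
        norm = sum(c * c for c in cw) ** 0.5 or 1.0
        w = [c / norm for c in cw]
    return mean, w

def project(vector, mean, component):
    """Eigenspace projection: coordinate of an image along the component."""
    return sum((a - m) * c for a, m, c in zip(vector, mean, component))
```

The scalar returned by `project` is the compact feature the text describes: the large 1-D pixel vector reduced to its coordinate(s) in the principal subspace.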
44. Predictive Job Scheduling in a Connection Limited System using Parallel
Genetic Algorithm
Abstract: Job scheduling is the key feature of any computing environment and the efficiency of
computing depends largely on the scheduling technique used. Intelligence is the key factor
lacking in the job scheduling techniques of today.
2005/Java
Genetic algorithms are powerful search
techniques based on the mechanisms of natural selection and natural genetics.
Multiple jobs are handled by the scheduler, and the resources the jobs need are in remote
locations. Here we assume that the resources a job needs are at a single location, not split over
nodes, and that each node holding a resource runs a fixed number of jobs. The existing algorithms are
non-predictive and employ greedy algorithms or variants of them. The efficiency of the job
scheduling process would increase if previous experience and genetic algorithms were used. In
this paper, we propose a model of the scheduling algorithm where the scheduler can learn from
previous experiences, and effective job scheduling is achieved as time progresses.
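A minimal genetic-algorithm scheduler over job-to-node assignments might look like this (hypothetical parameters throughout; the proposed model additionally learns from previous scheduling runs, which this sketch omits):

```python
import random

def makespan(assignment, job_costs, n_nodes):
    """Completion time of the most loaded node under an assignment."""
    load = [0] * n_nodes
    for job, node in enumerate(assignment):
        load[node] += job_costs[job]
    return max(load)

def evolve(job_costs, n_nodes, pop=30, gens=60, seed=1):
    """Minimal GA: tournament selection, one-point crossover, point
    mutation, elitism. Returns the best job->node assignment found."""
    rng = random.Random(seed)
    n = len(job_costs)
    popn = [[rng.randrange(n_nodes) for _ in range(n)] for _ in range(pop)]
    fit = lambda a: -makespan(a, job_costs, n_nodes)  # minimize makespan
    for _ in range(gens):
        nxt = [max(popn, key=fit)]  # elitism: always keep the best
        while len(nxt) < pop:
            a, b = (max(rng.sample(popn, 3), key=fit) for _ in range(2))
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:  # occasional point mutation
                child[rng.randrange(n)] = rng.randrange(n_nodes)
            nxt.append(child)
        popn = nxt
    return max(popn, key=fit)
```

Fitness here is simply negative makespan; a predictive scheduler would fold historical runtimes into `job_costs` before evolving.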
45. Digital Image Processing Techniques for the Detection and Removal of
Cracks in Digitized Paintings
2006/.Net
Abstract: An integrated methodology for the detection and removal of cracks on digitized
paintings is presented in this project. The cracks are detected by thresholding the output of the
morphological top-hat transform. Afterward, the thin dark brush strokes that have been
misidentified as cracks are removed using either a median radial basis function neural network on
hue and saturation data or a semi-automatic procedure based on region growing. Finally, crack
filling using order statistics filters or controlled anisotropic diffusion is performed. The methodology
has been shown to perform very well on digitized paintings suffering from cracks.
46. A Distributed Database Architecture for Global Roaming in Next-Generation
Mobile Networks
Abstract: The next-generation mobile network will support terminal mobility, personal mobility,
and service provider portability, making global roaming seamless. A location-independent
personal telecommunication number (PTN) scheme is conducive to implementing such a global
mobile system. However, the non-geographic PTNs coupled with the anticipated large number of
mobile users in future mobile networks may introduce very large centralized databases.
2004/Java
This necessitates research into the design and performance of high-throughput database technologies
used in mobile systems to ensure that future systems will be able to carry efficiently the
anticipated loads. This paper proposes a scalable, robust, efficient location database architecture
based on the location-independent PTNs. The proposed multi tree database architecture consists
of a number of database subsystems, each of which is a three-level tree structure and is
connected to the others only through its root. By exploiting the localized nature of calling and
mobility patterns, the proposed architecture effectively reduces the database loads as well as the
signaling traffic incurred by the location registration and call delivery procedures. In addition, two
memory-resident database indices, memory-resident direct file and T-tree, are proposed for the
location databases to further improve their throughput. An analysis model and numerical results are
presented to evaluate the efficiency of the proposed database architecture. Results have revealed
that the proposed database architecture for location management can effectively support the
anticipated high user density in the future mobile networks.
47. Noise Reduction by Fuzzy Image Filtering
Abstract: A new fuzzy filter is presented for the noise reduction of images corrupted with
additive noise. The filter consists of two stages. The first stage computes a fuzzy derivative for
eight different directions. The second stage uses these fuzzy derivatives to perform fuzzy
smoothing by weighting the contributions of neighboring pixel values.
2006/Java
Both stages are based on
fuzzy rules which make use of membership functions. The filter can be applied iteratively to
effectively reduce heavy noise. In particular, the shape of the membership functions is adapted
according to the remaining noise level after each iteration, making use of the distribution of the
homogeneity in the image. A statistical model for the noise distribution can be incorporated to
relate the homogeneity to the adaptation scheme of the membership functions. Experimental
results are obtained to show the feasibility of the proposed approach. These results are also
compared to other filters by numerical measures and visual inspection.
48. Online Handwritten Script Recognition
Abstract: Association rule mining is an active data mining research area. However, most ARM
algorithms cater to a centralized environment. In contrast to previous ARM algorithms, ODAM is a
distributed algorithm for geographically distributed data sets that reduces communication costs.
Modern organizations are geographically distributed. Typically, each site locally stores its ever-
increasing amount of day-to-day data. Using centralized data mining to discover useful patterns in
such organizations' data isn't always feasible because merging data sets from different sites into a
centralized site incurs huge network communication costs. Data from these organizations are not
only distributed over various locations but also vertically fragmented, making it difficult if not
impossible to combine them in a central location. Distributed data mining has thus emerged as an
active sub-area of data mining research.
2004/Java
A significant area of data mining research is association rule mining. Unfortunately, most ARM
algorithms focus on a sequential or centralized environment where no external communication is
required. Distributed ARM algorithms, on the other hand, aim to generate rules from different data
sets spread over various geographical sites; hence, they require external communications
throughout the entire process. DARM algorithms must reduce communication costs so that
generating global association rules costs less than combining the participating sites' data sets into
a centralized site. However, most DARM algorithms don't have an efficient message optimization
technique, so they exchange numerous messages during the mining process. We have developed
a distributed algorithm, called Optimized Distributed Association Mining, for geographically
distributed data sets. ODAM generates support counts of candidate item sets quicker than other
DARM algorithms and reduces the size of average transactions, data sets, and message
exchanges.
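The message-reduction idea, i.e. shipping support counts rather than transactions, can be sketched as follows (not ODAM's actual optimization, which additionally prunes candidates and compresses the exchanged messages):

```python
from collections import Counter
from itertools import combinations

def local_counts(transactions, size=2):
    """Each site counts candidate itemsets locally over its own data."""
    c = Counter()
    for t in transactions:
        for itemset in combinations(sorted(set(t)), size):
            c[itemset] += 1
    return c

def global_frequent(sites, min_support):
    """Exchange only the count messages, never the raw transactions:
    global support is the sum of the local supports."""
    total = Counter()
    for counts in sites:
        total.update(counts)
    return {i: s for i, s in total.items() if s >= min_support}
```

Since only `Counter` messages cross the network, communication cost scales with the number of candidate itemsets instead of the size of each site's data set.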
50. Protocol Scrubbing: Network Security Through Transparent Flow
Modification
Abstract: This paper describes the design and implementation of protocol scrubbers. Protocol
scrubbers are transparent, interposed mechanisms for explicitly removing network scans and
attacks at various protocol layers. The transport scrubber supports downstream passive
network-based intrusion detection systems by converting ambiguous network flows into well-behaved flows
that are unequivocally interpreted by all downstream endpoints.
2004/Java
The fingerprint scrubber restricts
an attacker’s ability to determine the operating system of a protected host. As an example, this
paper presents the implementation of a TCP scrubber that eliminates insertion and evasion
attacks—attacks that use ambiguities to subvert detection—on passive network-based intrusion
detection systems, while preserving high performance. The TCP scrubber is based on a novel,
simplified state machine that performs in a fast and scalable manner. The fingerprint scrubber is
built upon the TCP scrubber and removes additional ambiguities from flows that can reveal
implementation-specific details about a host’s operating system.
51. Structure and Texture Filling-In of Missing Image Blocks in Wireless
Transmission and Compression Applications
Abstract: Contemporary workflow management systems are driven by explicit process models,
i.e., a completely specified workflow design is required in order to enact a given workflow process.
Creating a workflow design is a complicated, time-consuming process and, typically, there are
discrepancies between the actual workflow processes and the processes as perceived by the
management.
2004/.Net
Therefore, we have developed techniques for discovering workflow models. The
starting point for such techniques is a so-called “workflow log” containing information about the
workflow process as it is actually being executed. We present a new algorithm to extract a
process model from such a log and represent it in terms of a Petri net. However, we will also
demonstrate that it is not possible to discover arbitrary workflow processes. In this paper, we
explore a class of workflow processes that can be discovered. We show that the α-algorithm can
successfully mine any workflow represented by a so-called SWF-net.
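The first step of the α-algorithm, extracting ordering relations from the workflow log, can be sketched as:

```python
def ordering_relations(log):
    """Derive the basic alpha-algorithm relations from a workflow log
    (a list of traces): direct succession (>), causality (->),
    and parallelism (||)."""
    succ = {(a, b) for trace in log for a, b in zip(trace, trace[1:])}
    causal = {(a, b) for a, b in succ if (b, a) not in succ}
    parallel = {(a, b) for a, b in succ if (b, a) in succ}
    return succ, causal, parallel
```

From these relations the algorithm then builds the places and transitions of the resulting Petri net; the later steps are omitted here for brevity.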
53. An Agent Based Intrusion Detection, Response and Blocking using
signature method in Active Networks
Abstract: An ad hoc network is a self-organized entity with a number of mobile nodes and without
any centralized access point; there is also a topology control problem, which leads to high
power consumption and no security while routing packets between mobile hosts.
2006/Java
Authentication is one of the important security requirements of a communication network. The
common authentication schemes are not applicable in Ad hoc networks. In this paper, we propose
a secure communication protocol for communication between two nodes in ad hoc networks. This
is achieved by using clustering techniques. We present a novel secure communication framework
for ad hoc networks (SCP), which describes authentication and confidentiality when packets are
distributed between hosts within the cluster and between clusters. The cluster head nodes
execute administrative functions and hold the network key used for certification. The cluster head nodes
(CHs) perform the major operations to achieve our SCP framework with the help of Kerberos
authentication and a symmetric key cryptography technique, which will be secure, reliable,
transparent, and scalable and will have less overhead.
55. ITP: An Image Transport Protocol for the Internet
Abstract: Images account for a significant and growing fraction of Web downloads. The
traditional approach to transporting images uses TCP, which provides a generic reliable in-order
byte stream abstraction, but which is overly restrictive for image data. We analyze the progression
of image quality at the receiver with time, and show that the in-order delivery abstraction provided
by a TCP-based approach prevents the receiver application from processing and rendering
portions of an image when they actually arrive.
2002/Java
The end result is that an image is rendered in
bursts interspersed with long idle times rather than smoothly. This paper describes the design,
implementation, and evaluation of the image transport protocol (ITP) for image transmission over
loss-prone congested or wireless networks. ITP improves user-perceived latency using
application-level framing (ALF) and out-of-order application data unit (ADU) delivery, achieving
significantly better interactive performance as measured by the evolution of peak signal-to-noise
ratio (PSNR) with time at the receiver. ITP runs over UDP, incorporates receiver-driven selective
reliability, uses the congestion manager (CM) to adapt to network congestion, and is customizable
for specific image formats (e.g., JPEG and JPEG2000). ITP enables a variety of new receiver
post-processing algorithms such as error concealment that further improve the interactivity and
responsiveness of reconstructed images. Performance experiments using our implementation
across a variety of loss conditions demonstrate the benefits of ITP in improving the interactivity of
image downloads at the receiver.
56. Hybrid Intrusion Detection with Weighted Signature Generation over
Anomalous Internet Episodes(HIDS)
Abstract: This paper reports the design principles and evaluation results of a new experimental
hybrid intrusion detection system (HIDS). This hybrid system combines the advantages of low
false-positive rate of signature-based intrusion detection system (IDS) and the ability of anomaly
detection system (ADS) to detect novel unknown attacks.
2007/J2EE
By mining anomalous traffic episodes
from Internet connections, we build an ADS that detects anomalies beyond the capabilities of
signature-based SNORT or Bro systems. A weighted signature generation scheme is developed
to integrate ADS with SNORT by extracting signatures from anomalies detected. HIDS extracts
signatures from the output of ADS and adds them into the SNORT signature database for fast and
accurate intrusion detection. By testing our HIDS scheme over real-life Internet trace data mixed
with 10 days of Massachusetts Institute of Technology/ Lincoln Laboratory (MIT/LL) attack data
set, our experimental results show a 60 percent detection rate of the HIDS, compared with 30
percent and 22 percent in using the SNORT and Bro systems, respectively. This sharp increase in
detection rate is obtained with less than 3 percent false alarms. The signatures generated by ADS
upgrade the SNORT performance by 33 percent. The HIDS approach proves the vitality of
detecting intrusions and anomalies, simultaneously, by automated data mining and signature
generation over Internet connection episodes.
57. Incremental deployment service of Hop by hop multicast routing protocol
Abstract: IP multicast is facing a slow take-off although it has been a hotly debated topic for
more than a decade. Many reasons are responsible for this status. Hence, the Internet is likely to
be organized with both unicast and multicast enabled networks. Thus, it is of utmost importance to
design protocols that allow the progressive deployment of the multicast service by supporting
unicast clouds. This paper presents HBH (hop-by-hop multicast routing protocol).
2006/Java
HBH adopts the source-specific channel abstraction to simplify address allocation and implements data distribution
using recursive unicast trees, which allow the transparent support of unicast- only routers. An
important original feature of HBH is its tree construction algorithm that takes into account the
unicast routing asymmetries. Since most multicast routing protocols rely on the unicast
infrastructure, the unicast asymmetries impact the structure of the multicast trees. We show
through simulation that HBH outperforms other multicast routing protocols in terms of the delay
experienced by the receivers and the bandwidth consumption of the multicast trees. Additionally,
we show that HBH can be incrementally deployed and that with a small fraction of HBH-enabled
routers in the network HBH outperforms application-layer multicast.
58. Network border patrol: preventing congestion collapse and promoting
fairness in the Internet
Abstract: The Internet's excellent scalability and robustness result in part from
2004/Java
the end-to-end nature of Internet congestion control. End-to-end congestion
control algorithms alone, however, are unable to prevent the congestion collapse
and unfairness created by applications that are unresponsive to network
congestion. To address these maladies, we propose and investigate a novel
congestion-avoidance mechanism called network border patrol (NBP). NBP
entails the exchange of feedback between routers at the borders of a network in
order to detect and restrict unresponsive traffic flows before they enter the
network, thereby preventing congestion within the network. Moreover, NBP is
complemented with the proposed enhanced core-stateless fair queueing
(ECSFQ) mechanism, which provides fair bandwidth allocations to competing
flows. Both NBP and ECSFQ are compliant with the Internet philosophy of
pushing complexity toward the edges of the network whenever possible.
Simulation results show that NBP effectively eliminates congestion collapse and
that, when combined with ECSFQ, approximately max-min fair bandwidth
allocations can be achieved for competing flows.
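ECSFQ's internals are not spelled out in this abstract, but the allocation it approximates is the classic max-min fair share, which the following water-filling sketch computes (the function name and structure are illustrative, not the ECSFQ mechanism itself):

```python
def max_min_fair(capacity, demands):
    """Water-filling max-min fair allocation of `capacity` among flow `demands`."""
    alloc = {f: 0.0 for f in demands}
    remaining = dict(demands)
    cap = capacity
    while remaining and cap > 1e-12:
        share = cap / len(remaining)
        # flows demanding no more than the equal share are fully satisfied
        satisfied = {f: d for f, d in remaining.items() if d <= share}
        if not satisfied:
            for f in remaining:              # everyone left gets the equal share
                alloc[f] = share
            break
        for f, d in satisfied.items():
            alloc[f] = d
            cap -= d
            del remaining[f]
    return alloc

# Example: 10 units of capacity shared by three flows demanding 8, 4 and 2.
alloc = max_min_fair(10.0, {"a": 8.0, "b": 4.0, "c": 2.0})
```

The small demand is granted in full and the leftover capacity is split evenly among the unsatisfied flows, which is exactly the "approximately max-min fair" outcome the simulations measure.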
59. Application of BPCS steganography to wavelet compressed video
The idea of using neural networks for the purpose of recognizing handwritten digits is not a new
one. The inspiration for the architecture described here comes from articles written by two
separate authors. The first is Dr. Yann LeCun, who was an independent discoverer of the basic
backpropagation algorithm. Dr. LeCun hosts an excellent site on his research into neural
networks. In particular, you should view his "Learning and Visual Perception" section, which uses
animated GIFs to show results of his research. The MNIST database (which provides the
database of handwritten digits) was developed by him. I used two of his publications as primary
source materials for much of my work, and I highly recommend reading his other publications too
(they're posted at his site). Unlike many other publications on neural networks, Dr. LeCun's
publications are not inordinately theoretical and math-intensive; rather, they are extremely
readable, and provide practical insights and explanations.
61. Selective Encryption of Still Image
2000/VB,C,Java
Abstract: In some applications, it is relevant to hide the content of a message when it enters an
insecure channel. The accepted view among professional cryptographers is that the encryption
algorithm should be published, whereas the key must be kept secret. In the field of image
cryptography, the focus in recent years has been on steganography, and in particular on
watermarking. Watermarking, as opposed to steganography, has the additional requirement of
robustness against possible image transformations. Watermarks are usually made invisible and
should not be detectable. In applications requiring transmission, the image is first compressed to
save bandwidth, and then encrypted. This motivates a technique called selective encryption of
compressed images. Usually, during encryption, all of the information is encrypted. This is not
mandatory, however: only a part of the image content need be encrypted in order to make it
possible to visualize the encrypted images, although not with full precision. This concept leads to
techniques that can simultaneously provide security functions and an overall visual check, which
might be suitable in applications such as searching through a shared image database or a
distributed database for image storage. The principle of selective encryption is first applied to
compressed images. This technique is proven not to interfere with the decoding process, in the
sense that it achieves a constant bit rate and that bit streams remain compliant with the JPEG
specifications.
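As a toy illustration of the principle only (not the JPEG-compliant scheme described above), the sketch below encrypts just a leading fraction of a byte stream with a keystream XOR, leaving the remainder readable; since XOR is its own inverse, the same call decrypts.

```python
import hashlib

def keystream(key, n):
    """Toy counter-mode keystream (illustrative only, not a vetted cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def selective_xor(data, key, fraction=0.25):
    """Encrypt only the first `fraction` of the bytes; the rest stays readable."""
    cut = int(len(data) * fraction)
    ks = keystream(key, cut)
    head = bytes(b ^ k for b, k in zip(data[:cut], ks))
    return head + data[cut:]

data = b"compressed-image-bytes" * 4
enc = selective_xor(data, b"secret-key")
```

A real scheme must choose the protected fraction so that decoding still succeeds, e.g. selected entropy-coded segments of a JPEG stream rather than a raw byte prefix.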
62. An Acknowledgment-Based Approach For The Detection Of Routing
Misbehavior In MANETs
2007/Java
Abstract: We study routing misbehavior in MANETs (Mobile Ad Hoc Networks) in this paper. In
general, routing protocols for MANETs are designed based on the assumption that all participating
nodes are fully cooperative. However, due to the open structure and scarcely available battery-
based energy, node misbehaviors may exist. One such routing misbehavior is that some selfish
nodes will participate in the route discovery and maintenance processes but refuse to forward
data packets. In this paper, we propose the 2ACK scheme that serves as an add-on technique for
routing schemes to detect routing misbehavior and to mitigate their adverse effect. The main idea
of the 2ACK scheme is to send two-hop acknowledgment packets in the opposite direction of the
routing path. In order to reduce additional routing overhead, only a fraction of the received data
packets are acknowledged in the 2ACK scheme. Analytical and simulation results are presented
to evaluate the performance of the proposed scheme.
63. Neural Network-Based Face Detection
1998/VC++
Abstract: We present a neural network-based upright frontal face detection system. A retinally
connected neural network examines small windows of an image, and decides whether each
window contains a face. The system arbitrates between multiple networks to improve performance
over a single network. We present a straightforward procedure for aligning positive face examples
for training. To collect negative examples, we use a bootstrap algorithm, which adds false
detections into the training set as training progresses. This eliminates the difficult task of manually
selecting non-face training examples, which must be chosen to span the entire space of non-face
images. Simple heuristics, such as using the fact that faces rarely overlap in images, can further
improve the accuracy. Comparisons with several other state-of-the-art face detection systems are
presented, showing that our system has comparable performance in terms of detection and false-
positive rates.
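The bootstrap idea for collecting negatives generalizes beyond faces and can be sketched generically; `train` and `detect` below are placeholders for the network training and window classification steps, not the paper's actual procedures:

```python
def bootstrap_negatives(train, detect, positives, negatives, scenery, rounds=3):
    """Bootstrap loop from the abstract: train, scan face-free scenery images,
    and feed every (false) detection back into the negative training set."""
    for _ in range(rounds):
        model = train(positives, negatives)
        for window in scenery:
            if detect(model, window):          # any hit on scenery is a false positive
                negatives.append(window)
    return model, negatives

# Toy stand-ins: the "model" memorizes known negatives, and any window not yet
# memorized fires as a detection.
train = lambda pos, neg: set(neg)
detect = lambda model, w: w not in model
model, negatives = bootstrap_negatives(train, detect,
                                       positives=[], negatives=[], scenery=[1, 2, 3])
```

After one round every scenery window has been harvested as a hard negative, and later rounds add nothing, which mirrors how the training set stabilizes as the detector improves.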
64. Homogenous Network Control and Implementation
2005/Java
Abstract: This project, titled “Homogenous Network Control and Implementation”, presents a
way of developing integrity-preserved computer networks. The proposed generic network is based
on a detailed review and comparative analysis of ongoing research work in the field of
homogenous distributed systems and fault-tolerant systems. The presented network facilitates
easy sharing of information among the systems in the network by establishing a peer-to-peer
network connection among all the systems.
Phase 1 deals with designing algorithms for summarizing and indexing text files. In the case of
multimedia files, the metadata files are created manually by the programmers. This phase also
involves algorithms for converting .doc and .pdf files to .txt format. In this system the searching is
not done at run time, as indexing is done beforehand.
In Phase 2 folders would be replaced by a new construct called a library. A library is a virtual
folder that intelligently gathers information about files on the system and presents them to the
users. The concept of folders ceases to exist. Instead, the users are privileged enough to view
similar files together irrespective of their location in the physical memory. This enables retrieval of
files based on various parameters. This concept is named CAROUSEL VIEW after the proposed
feature of the same name planned for Microsoft’s Windows Longhorn, which is a complete
revolution in itself.
Phase 3 establishes a common peer to peer (P2P) protocol that enables remote querying over
other terminals in the network. This module allows this software to be used across the internet and
also over various LANs. In a nutshell, this project aims at creating a system that is greatly
enhanced over the existing traditional ones and provides a user-friendly environment.
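Phase 1's ahead-of-time indexing can be sketched with a simple inverted index; the file names and the tokenizer below are illustrative, not the project's actual format:

```python
import re
from collections import defaultdict

def build_index(docs):
    """Ahead-of-time inverted index (Phase 1): word -> set of file names,
    so a query needs no run-time scan of the files themselves."""
    index = defaultdict(set)
    for name, text in docs.items():
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(name)
    return index

def search(index, word):
    """Case-insensitive lookup against the prebuilt index."""
    return sorted(index.get(word.lower(), set()))

docs = {"a.txt": "peer to peer network", "b.txt": "virtual folder library"}
idx = build_index(docs)
```

The library construct of Phase 2 is then a query over this index (plus file metadata) rather than a walk of physical folders, which is why folder location becomes irrelevant to the user.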
66. XTC: A Practical Topology Control Algorithm for Ad-Hoc Networks
2004/Java
Abstract: The XTC ad-hoc network topology control algorithm introduced here shows three main
advantages over previously proposed algorithms. First, it is extremely simple and strictly local.
Second, it does not assume the network graph to be a unit disk graph; XTC proves correct also on
general weighted network graphs. Third, the algorithm does not require availability of node
position information. Instead, XTC operates with a general notion of order over the neighbors' link
qualities. In the special case of the network graph being a unit disk graph, the resulting topology
proves to have bounded degree, to be a planar graph, and - on average-case graphs - to be a
good spanner.
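On a weighted graph the XTC rule reduces to a simple local test, sketched below (a simplification of the algorithm in the paper: lower weight stands for better link quality, and each node uses only its ranking of its neighbors' link qualities, not their positions):

```python
def xtc_topology(adj):
    """XTC-style pruning on a weighted graph.

    adj: {u: {v: weight}}, symmetric; lower weight = better link.
    Edge (u, v) is dropped when some common neighbor w offers BOTH
    endpoints a better link than (u, v) itself, since traffic can then
    be relayed through w.
    """
    kept = set()
    for u in adj:
        for v, wuv in adj[u].items():
            if u < v:  # consider each undirected edge once
                common = set(adj[u]) & set(adj[v])
                if not any(adj[u][w] < wuv and adj[v][w] < wuv for w in common):
                    kept.add((u, v))
    return kept

# Triangle where the a-c link is worst: it is pruned, b relays instead.
adj = {"a": {"b": 1, "c": 3}, "b": {"a": 1, "c": 2}, "c": {"a": 3, "b": 2}}
kept = xtc_topology(adj)
```

Note that the decision for each edge uses only information available at its two endpoints, which is the "strictly local" property the abstract claims.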
67. A near-optimal multicast scheme for mobile ad hoc networks using a hybrid
genetic algorithm technique
68. Mobile Agents In Distributed Multimedia Database Systems
2004/Java
Abstract: The size of networks is increasing rapidly, and this trend is not restricted to the Internet
alone. Many intra- and inter-organization networks are affected by it, too. A side effect of this
growth is the increase in network traffic. This development leads to new challenges, and we have
to think about new technologies. Mobile agent systems are one answer to these challenges.
Mobile agents are an emerging technology attracting interest from the fields of distributed
systems, information retrieval, electronic commerce and artificial intelligence.
A mobile agent is an executing program that can migrate during execution from machine to
machine in a heterogeneous network. On each machine, the agent interacts with stationary
service agents and other resources to accomplish its task, returning to its home site with a final
result when that task is finished. Mobile agents are particularly attractive in distributed information-
retrieval applications. By moving to the location of an information resource, the agent can search
the resource locally, eliminating the transfer of intermediate results across the network and
reducing end-to-end latency. Mobile agents are goal-oriented, can communicate with other
agents, and can continue to operate even after the machine that launched them has been
removed from the network.
The mobile feature enables the agent to travel to the host where the data are physically stored.
This is obviously of great interest in distributed multimedia database systems, where we have in
most cases large binary objects. This project integrates mobile agent technology into a distributed
database system. The advantage of this approach is the combination of mobile agent features
(e.g. autonomy, mobility, enhancement of functionality) and database services such as recovery,
transaction handling, concurrency and security. This project aims at facilitating storage and
retrieval of multimedia data from the distributed multimedia database using mobile agents based
on a host database, which will provide the result to the user upon request.
69. Image Stream Transfer Using Real-Time Transmission Protocol
2006/Java
Abstract: Images account for a significant and growing fraction of Web downloads. The
traditional approach to transporting images uses TCP, which provides a generic reliable in-order
byte-stream abstraction, but which is overly restrictive for image data. We analyze the progression
of image quality at the receiver with time, and show that the in-order delivery abstraction provided
by a TCP-based approach prevents the receiver application from processing and rendering
portions of an image when they actually arrive. The end result is that an image is rendered in
bursts interspersed with long idle times rather than smoothly. This paper describes the design,
implementation, and evaluation of the image transport protocol (ITP) for image transmission over
loss-prone congested or wireless networks. ITP improves user-perceived latency using
application-level framing (ALF) and out-of-order application data unit (ADU) delivery, achieving
significantly better interactive performance as measured by the evolution of peak signal-to-noise
ratio (PSNR) with time at the receiver. ITP runs over UDP, incorporates receiver-driven selective
reliability, uses the congestion manager (CM) to adapt to network congestion, and is customizable
for specific image formats (e.g., JPEG and JPEG2000). ITP enables a variety of new receiver
post-processing algorithms such as error concealment that further improve the interactivity and
responsiveness of reconstructed images. Performance experiments using our implementation
across a variety of loss conditions demonstrate the benefits of ITP in improving the interactivity of
image downloads at the receiver.
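The head-of-line blocking argument can be made concrete with a toy timeline: under in-order delivery an ADU is usable only once all earlier ADUs have arrived, while under ITP's ALF/out-of-order delivery it is usable the moment it arrives (sequence numbers and times below are illustrative):

```python
def render_times(arrivals):
    """arrivals: list of (arrival_time, seq), in arrival order.
    Returns (in_order, out_of_order): the time each ADU becomes usable
    under a TCP-like in-order abstraction vs. out-of-order ADU delivery."""
    out_of_order = {}
    in_order = {}
    arrived = set()
    next_needed = 0
    for t, seq in arrivals:
        out_of_order[seq] = t              # ITP-style: render on arrival
        arrived.add(seq)
        while next_needed in arrived:      # TCP-like: release contiguous prefix
            in_order[next_needed] = t
            next_needed += 1
    return in_order, out_of_order

# ADU 2 arrives before ADU 1 (e.g. ADU 1 was lost and retransmitted).
in_order, out_of_order = render_times([(1, 0), (2, 2), (3, 1)])
```

Here ADU 2 is rendered at time 2 out of order but only at time 3 in order, which is exactly the bursty, idle-time-interspersed rendering the paper attributes to TCP.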
70. Neural Networks for Unicode Optical Character Recognition
C# .Net
Abstract: The central objective of this project is demonstrating the capabilities of Artificial
Neural Network implementations in recognizing extended sets of optical language symbols. The
applications of this technique range from document digitizing and preservation to handwritten text
recognition in handheld devices. The classic difficulty of being able to correctly recognize even
typed optical language symbols is the complex irregularity among pictorial representations of the
same character due to variations in fonts, styles and size. This irregularity undoubtedly widens
when one deals with handwritten characters.
Hence the conventional programming methods of mapping symbol images into matrices,
analyzing pixel and/or vector data and trying to decide which symbol corresponds to which
character would yield little or no realistic results. Clearly the needed methodology will be one that
can detect ‘proximity’ of graphic representations to known symbols and make decisions based on
this proximity. To implement such proximity algorithms with conventional programming, one
would need to write endless code, one routine for each possible irregularity or deviation from the
assumed output, whether in terms of pixel or vector parameters; clearly not a realistic approach. An
emerging technique in this particular application area is the use of Artificial Neural Network
implementations with networks employing specific guides (learning rules) to update the links
(weights) between their nodes. Such networks can be fed the data from the graphic analysis of the
input picture and trained to output characters in one form or another. Specifically, some network
models use a set of desired outputs to compare with the actual outputs and compute an error that
is used to adjust their weights. Such learning rules are termed supervised learning.
One such network with supervised learning rule is the Multi-Layer Perceptron (MLP) model. It
uses the Generalized Delta Learning Rule for adjusting its weights and can be trained for a set of
input/desired output values in a number of iterations. The very nature of this particular model is
that it will force the output toward one of the nearby trained values if a variation of input is fed to
the network that it was not trained for, thus solving the proximity issue. Both concepts will be
discussed in the introduction part of this report. The project has employed the MLP technique
mentioned, and excellent results were obtained for a number of widely used font types. The
technical approach
followed in processing input images, detecting graphic symbols, analyzing and mapping the
symbols and training the network for a set of desired Unicode characters corresponding to the
input images are discussed in the subsequent sections. Even though the implementation might
have some limitations in terms of functionality and robustness, the researcher is confident that it
fully serves the purpose of addressing the desired objectives.
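A minimal version of the supervised MLP training described above, using plain backpropagation (the generalized delta rule). The layer sizes, learning rate and the toy AND dataset are illustrative, not the project's configuration:

```python
import math, random

def train_mlp(data, hidden=3, lr=0.5, epochs=2000, seed=0):
    """Tiny one-hidden-layer perceptron trained with the generalized delta rule.
    Returns (initial_loss, final_loss, forward) for inspection."""
    rnd = random.Random(seed)
    n_in = len(data[0][0])
    w1 = [[rnd.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(hidden)]
    w2 = [rnd.uniform(-1, 1) for _ in range(hidden + 1)]
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))

    def forward(x):
        h = [sig(sum(w * v for w, v in zip(ws, x + [1.0]))) for ws in w1]
        y = sig(sum(w * v for w, v in zip(w2, h + [1.0])))
        return h, y

    def loss():
        return sum((forward(x)[1] - t) ** 2 for x, t in data)

    initial = loss()
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            dy = (y - t) * y * (1 - y)                # output-layer delta
            for j in range(hidden):
                dh = dy * w2[j] * h[j] * (1 - h[j])   # hidden-layer delta
                for i in range(n_in):
                    w1[j][i] -= lr * dh * x[i]
                w1[j][n_in] -= lr * dh                # hidden bias
            for j in range(hidden):
                w2[j] -= lr * dy * h[j]
            w2[hidden] -= lr * dy                     # output bias
    return initial, loss(), forward

# Toy target: logical AND, a stand-in for the pixel-matrix/Unicode pairs.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 0.0), ([1.0, 0.0], 0.0), ([1.0, 1.0], 1.0)]
initial_loss, final_loss, predict = train_mlp(data)
```

The squashing behavior of the sigmoid outputs is what pulls an unseen input variation toward the nearest trained response, the "proximity" property discussed in the abstract.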