
VENSOFT Technologies, www.ieeedeveloperslabs.in Email: info@ieeedeveloperslabs.in Contact: 9448847874

MATLAB PROJECT TITLES 2013-2014


317. Nonlocally Centralized Sparse Representation For Image Restoration
Abstract: Sparse representation models code an image patch as a linear combination of a few atoms chosen from an over-complete dictionary, and they have shown promising results in various image restoration applications. However, because the observed image is degraded (e.g., noisy, blurred and/or downsampled), the sparse representations obtained by conventional models may not be accurate enough for a faithful reconstruction of the original image. To improve the performance of sparse representation based image restoration, this paper introduces the concept of sparse coding noise, and the goal of image restoration becomes suppressing this sparse coding noise. To this end, we exploit image nonlocal self-similarity to obtain good estimates of the sparse coding coefficients of the original image, and then centralize the sparse coding coefficients of the observed image toward those estimates. The so-called nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model, and our extensive experiments on various image restoration problems, including denoising, deblurring and super-resolution, validate the generality and state-of-the-art performance of the proposed NCSR algorithm.
Keywords: Sparse representation, image restoration, nonlocal similarity.

318. Sparse Representation Based Image Interpolation With Nonlocal Autoregressive Modeling
Abstract: Sparse representation has proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such a case, however, conventional sparse representation models (SRM) become less effective because the data fidelity term fails to constrain the local image structures. Fortunately, in natural images the many nonlocal patches similar to a given patch can provide a nonlocal constraint on the local structure. In this paper we incorporate image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and used as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Extensive experimental results demonstrate that the proposed NARM based image interpolation method can effectively reconstruct edge structures and suppress jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.
Index Terms: Image interpolation, super-resolution, sparse representation, nonlocal autoregressive model.
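The centralization step described in item 317 above amounts to shrinking the sparse codes of the degraded image toward nonlocal estimates. As a rough Python/NumPy illustration (not the project's MATLAB implementation), the sketch below solves the per-coefficient subproblem min_a (a - a_obs)^2 + lam*|a - beta|, whose closed form is a soft-threshold of (a_obs - beta) centered at beta; the function names, variable names and toy values are assumptions.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def centralized_shrinkage(alpha_obs, beta, lam):
    """
    Per-coefficient solution of  min_a (a - alpha_obs)^2 + lam * |a - beta|,
    i.e. shrink the observed sparse codes toward the nonlocal estimates beta.
    """
    return beta + soft(alpha_obs - beta, lam / 2.0)

# Toy usage: noisy codes pulled toward estimates from similar nonlocal patches.
alpha_obs = np.array([0.9, -0.2, 0.05, 1.4])
beta      = np.array([1.0,  0.0, 0.0,  1.2])
print(centralized_shrinkage(alpha_obs, beta, lam=0.2))
```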

319. Removing Atmospheric Turbulence Via Space-Invariant Deconvolution
Abstract: To correct geometric distortion and reduce space- and time-varying blur, this paper proposes a new approach capable of restoring a single high-quality image from a given image sequence distorted by atmospheric turbulence. The approach reduces the space- and time-varying deblurring problem to a shift-invariant one. It first registers each frame to suppress geometric deformation through B-spline based nonrigid registration. Next, a temporal regression process is carried out to produce an image from the registered frames, which can be viewed as being convolved with a space-invariant near-diffraction-limited blur. Finally, a blind deconvolution algorithm is applied to deblur the fused image, generating the final output. Experiments using real data illustrate that this approach can effectively alleviate blur and distortions, recover details of the scene, and significantly improve visual quality.
Index Terms: Image restoration, atmospheric turbulence, nonrigid image registration, point spread function, sharpness metric.

320. SAIF-ly Boost Denoising Performance
Abstract: Spatial domain image filters (e.g., bilateral filter, NLM, LARK) have achieved great success in denoising. However, their overall performance has not generally surpassed the leading transform domain filters (such as BM3D). One important reason is that spatial domain filters lack an efficient way to adaptively fine-tune their denoising strength, something that is relatively easy to do in transform domain methods with shrinkage operators. In the pixel domain, the smoothing strength is usually controlled globally by, for example, tuning a regularization parameter. In this paper, we propose SAIF (Spatially Adaptive Iterative Filtering), a new strategy to control the denoising strength locally for any spatial domain method. This approach filters local image content iteratively using the given base filter, while the type of iteration and the iteration number are automatically optimized with respect to estimated risk (i.e., mean-squared error). In exploiting the estimated local SNR, we also present a new risk estimator that is different from the often-employed SURE method and exceeds its performance in many cases. Experiments illustrate that our strategy can significantly relax the base algorithm's sensitivity to its tuning (smoothing) parameters, and effectively boost the performance of several existing denoising filters to generate state-of-the-art results under both simulated and practical conditions.
Index Terms: Image denoising, spatial domain filter, risk estimator, SURE, pixel aggregation.

321. Image Signature: Highlighting Sparse Salient Regions
Abstract: We introduce a simple image descriptor referred to as the image signature. We show, within the theoretical framework of sparse signal mixing, that this quantity spatially approximates the foreground of an image. We experimentally investigate whether this approximate foreground overlaps with visually conspicuous image locations by developing a saliency algorithm based on the image signature. This saliency algorithm predicts human fixation points best among competitors on the Bruce and Tsotsos [1] benchmark data set and does so in much shorter running time. In a related experiment, we demonstrate with a change blindness data set that the distance between images induced by the image signature is closer to human perceptual distance than can be achieved using other saliency algorithms, pixel-wise methods, or GIST [2] descriptor methods.
Index Terms: Saliency, visual attention, change blindness, sign function, sparse signal analysis.

322. Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images
Abstract: Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete, and/or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, uniformly selected at random. Spatial interrelationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature.
Index Terms: Bayesian nonparametrics, compressive sensing, dictionary learning, factor analysis, image denoising, image interpolation, sparse coding.

323. Patch-Based Near-Optimal Image Denoising
Abstract: In this paper, we propose a denoising method motivated by our previous analysis of the performance bounds for image denoising. Insights from that study are used here to derive a high-performance practical denoising algorithm. We propose a patch-based Wiener filter that exploits patch redundancy for image denoising. Our framework uses both geometrically and photometrically similar patches to estimate the different filter parameters. We describe how these parameters can be accurately estimated directly from the input noisy image. Our denoising approach, designed for near-optimal performance (in the mean-squared error sense), has a sound statistical foundation that is analyzed in detail. The performance of our approach is experimentally verified on a variety of images and noise levels. The results presented here demonstrate that our proposed method is on par with or exceeds the current state of the art, both visually and quantitatively.
Index Terms: Denoising bounds, image clustering, image denoising, linear minimum mean-squared-error (LMMSE) estimator, Wiener filter.
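The image signature in item 321 above is compact enough to sketch directly: the signature is the sign of the image's DCT coefficients, and a saliency map is obtained by reconstructing from the signature alone and smoothing the squared reconstruction. The Python sketch below is a single-channel illustration (not the authors' code), assuming SciPy's DCT routines; the smoothing parameter and toy image are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def image_signature_saliency(img, sigma=3.0):
    """
    img: 2-D grayscale array.
    Signature = sign of the DCT coefficients; saliency = smoothed squared
    reconstruction obtained from the signature alone.
    """
    signature = np.sign(dctn(img, norm='ortho'))
    recon = idctn(signature, norm='ortho')
    sal = gaussian_filter(recon * recon, sigma)
    return sal / (sal.max() + 1e-12)   # normalize to [0, 1]

# Toy usage: a bright square on a dark background should pop out.
img = np.zeros((64, 64)); img[20:30, 35:45] = 1.0
print(image_signature_saliency(img).shape)
```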

324. Accelerated Hypothesis Generation for Multistructure Data via Preference Analysis
Abstract: Random hypothesis generation is integral to many robust geometric model fitting techniques. Unfortunately, it is also computationally expensive, especially for higher order geometric models and heavily contaminated data. We propose a fundamentally new approach to accelerate hypothesis sampling by guiding it with information derived from residual sorting. We show that residual sorting innately encodes the probability of two points having arisen from the same model, and is obtained without recourse to domain knowledge (e.g., keypoint matching scores) typically used in previous sampling enhancement methods. More crucially, our approach encourages sampling within coherent structures and thus can very rapidly generate all-inlier minimal subsets that maximize the robust criterion. Sampling within coherent structures also affords a natural ability to handle multistructure data, a condition that is usually detrimental to other methods. The result is a sampling scheme that offers substantial speed-ups on common computer vision tasks such as homography and fundamental matrix estimation. We show on many computer vision data sets, especially those with multiple structures, that ours is the only method capable of retrieving satisfactory results within realistic time budgets.
Index Terms: Geometric model fitting, robust estimation, hypothesis generation, residual sorting, multiple structures.

325. BM3D Frames and Variational Image Deblurring
Abstract: A family of block matching 3-D (BM3D) algorithms for various imaging problems has recently been proposed within the framework of nonlocal patchwise image modeling [1], [2]. In this paper, we construct analysis and synthesis frames, formalizing BM3D image modeling, and use these frames to develop novel iterative deblurring algorithms. We consider two different formulations of the deblurring problem: one given by the minimization of a single objective function, and another based on the generalized Nash equilibrium (GNE) balance of two objective functions. The latter results in an algorithm where the deblurring and denoising operations are decoupled. The convergence of the developed algorithms is proved. Simulation experiments show that the decoupled algorithm derived from the GNE formulation demonstrates the best numerical and visual results and shows superiority with respect to the state of the art in the field, confirming the valuable potential of BM3D frames as an advanced image modeling tool.
Index Terms: Deblurring, frames, image modeling, image reconstruction, sparse representations.
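To make the idea in item 324 above more concrete, the Python sketch below illustrates preference-guided sampling in the spirit of the abstract (it is not the authors' implementation): each point ranks a pool of tentative hypotheses by residual, the overlap of two points' top-h preference sets serves as a similarity weight, and minimal subsets are drawn with probabilities biased by that weight. All function names, variable names and the floor constant are assumptions.

```python
import numpy as np

def preference_similarity(residuals, h):
    """
    residuals: (N, M) absolute residuals of N points to M tentative hypotheses.
    Returns an (N, N) matrix whose (i, j) entry is the fraction of overlap
    between the top-h preferred hypotheses of points i and j.
    """
    top = np.argsort(residuals, axis=1)[:, :h]   # top-h hypotheses per point
    n = residuals.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            W[i, j] = W[j, i] = np.intersect1d(top[i], top[j]).size / h
    return W

def guided_minimal_subset(W, subset_size, rng=np.random.default_rng()):
    """Draw a minimal subset: a random seed point, then points similar to it."""
    n = W.shape[0]
    seed = rng.integers(n)
    prob = W[seed] + 1e-6        # small floor keeps every point selectable
    prob[seed] = 0.0             # never redraw the seed itself
    prob = prob / prob.sum()
    rest = rng.choice(n, size=subset_size - 1, replace=False, p=prob)
    return np.concatenate(([seed], rest))
```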

326. Re-Initialization Free Level Set Evolution Via Reaction Diffusion
Abstract: This paper presents a novel reaction-diffusion (RD) method for implicit active contours, which is completely free of the costly re-initialization procedure in level set evolution (LSE). A diffusion term is introduced into LSE, resulting in an RD-LSE equation for which a piecewise constant solution can be derived. In order to obtain a stable numerical solution of the RD-based LSE, we propose a two-step splitting method (TSSM) to iteratively solve the RD-LSE equation: first iterating the LSE equation, and then solving the diffusion equation. The second step regularizes the level set function obtained in the first step to ensure stability, and thus the complex and costly re-initialization procedure is completely eliminated from LSE. By successfully applying diffusion to LSE, the RD-LSE model can be solved stably with a simple finite difference scheme, which is very easy to implement. The proposed RD method can be generalized to solve the LSE for both variational and PDE-based level set methods. The RD-LSE method shows very good performance on boundary anti-leakage, and it can be readily extended to higher dimensional level set methods. Extensive and promising experimental results on synthetic and real images validate the effectiveness of the proposed RD-LSE approach.
Index Terms: Level set, reaction-diffusion, active contours, image segmentation, PDE, variational method.

327. Monogenic Binary Coding: An Efficient Local Feature Extraction Approach To Face Recognition
Abstract: Local feature based face recognition (FR) methods, such as Gabor features encoded by local binary patterns, can achieve state-of-the-art FR results on large-scale face databases such as FERET and FRGC. However, the time and space complexity of the Gabor transformation is too high for many practical FR applications. In this paper, we propose a new and efficient local feature extraction scheme, namely monogenic binary coding (MBC), for face representation and recognition. Monogenic signal representation decomposes an original signal into three complementary components: amplitude, orientation and phase. We encode the monogenic variation in each local region and the monogenic feature at each pixel, and then calculate the statistical features (e.g., histograms) of the extracted local features. The local statistical features extracted from the complementary monogenic components (i.e., amplitude, orientation and phase) are then fused for effective FR. It is shown that the proposed MBC scheme has significantly lower time and space complexity than Gabor-transformation based local feature methods. Extensive FR experiments on four large-scale databases demonstrate the effectiveness of MBC, whose performance is competitive with and even better than state-of-the-art local feature based FR methods.
Keywords: Monogenic signal analysis, monogenic binary coding, face recognition, LBP, Gabor filtering.

328. Monotonic Regression: A New Way For Correlating Subjective And Objective Ratings In Image Quality Research
Abstract: To assess the performance of image quality metrics (IQMs), regressions such as logistic regression and polynomial regression are used to correlate objective ratings with subjective scores. However, these regressions exhibit some defects in optimality. In this correspondence, monotonic regression (MR) is found to be an effective correlation method for the performance assessment of IQMs. Both theoretical analysis and experimental results show that MR performs better than the other regressions. We believe that MR could be an effective tool for performance assessment in IQM research.
Index Terms: Image quality assessment, image quality metric (IQM), metric performance, monotonic regression (MR).

329. Demonstration Of Real-Time Spectrum Sensing For Cognitive Radio
Abstract: Spectrum sensing detects the availability of the radio frequency spectrum in a real-time fashion, which is essential and vital to cognitive radio. The requirement for real-time processing poses challenges for implementing spectrum sensing algorithms, so the trade-off between the complexity and the effectiveness of these algorithms should be taken into consideration. In this paper, a fast Fourier transform (FFT) based spectrum sensing algorithm called FAR is introduced, whose key advantage is that its decision variable is insensitive to the noise level. Parameter selection for the algorithm is considered as well, toward minimizing computational complexity. A small form factor (SFF) software defined radio (SDR) development platform (DP) is employed to implement a spectrum sensing receiver with the FAR algorithm. The performance of the FAR algorithm is evaluated on the SFF SDR DP, and real-time spectrum sensing is demonstrated. The FAR algorithm is friendly to hardware implementation and is effective at detecting signals at low SNR.

330. ML Estimation Of Time And Frequency Offset In OFDM Systems
Abstract: We present the joint maximum likelihood (ML) symbol-time and carrier-frequency offset estimator for orthogonal frequency-division multiplexing (OFDM) systems. Redundant information contained within the cyclic prefix enables this estimation without additional pilots. Simulations show that the frequency estimator may be used in a tracking mode and the time estimator in an acquisition mode.

331. Efficient Encoding Of Low-Density Parity-Check Codes
Abstract: Low-density parity-check (LDPC) codes can be considered serious competitors to turbo codes in terms of performance and complexity, and they are based on a similar philosophy: constrained random code ensembles and iterative decoding algorithms. In this paper, we consider the encoding problem for LDPC codes. More generally, we consider the encoding problem for codes specified by sparse parity-check matrices. We show how to exploit the sparseness of the parity-check matrix to obtain efficient encoders. For the (3,6)-regular LDPC code, for example, the complexity of encoding is essentially quadratic in the block length. However, we show that the associated coefficient can be made quite small, so that encoding codes even of length 100 000 is still quite practical. More importantly, we show that optimized codes actually admit linear-time encoding.
Index Terms: Binary erasure channel, decoding, encoding, parity check, random graphs, sparse matrices, turbo codes.
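Item 330 above lends itself to a compact illustration: the cyclic prefix of each OFDM symbol is a copy of its last samples, so correlating the received signal with itself at a lag of N reveals both the symbol start (correlation peak location) and the fractional frequency offset (peak phase). The Python sketch below is a simplified correlation-based estimator in that spirit, not the exact ML metric of the paper; N, L and the toy signal model are assumptions.

```python
import numpy as np

def cp_sync(r, N, L):
    """
    r: received complex baseband samples containing at least one OFDM symbol.
    N: FFT size (useful symbol length), L: cyclic prefix length.
    Returns (theta_hat, eps_hat): estimated symbol start and fractional CFO.
    """
    M = len(r) - N - L
    gamma = np.array([np.sum(r[m:m+L] * np.conj(r[m+N:m+N+L])) for m in range(M)])
    theta_hat = int(np.argmax(np.abs(gamma)))          # timing: correlation peak
    eps_hat = -np.angle(gamma[theta_hat]) / (2*np.pi)  # CFO in subcarrier spacings
    return theta_hat, eps_hat

# Toy usage: one OFDM symbol with CP, a known timing offset and CFO.
rng = np.random.default_rng(0)
N, L, eps = 64, 16, 0.1
sym = rng.standard_normal(N) + 1j*rng.standard_normal(N)
tx = np.concatenate([sym[-L:], sym])                   # prepend cyclic prefix
r = np.concatenate([np.zeros(30, complex), tx, np.zeros(30, complex)])
r *= np.exp(2j*np.pi*eps*np.arange(len(r))/N)          # apply frequency offset
print(cp_sync(r, N, L))                                # expect roughly (30, 0.1)
```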

332. Multi-User Diversity Vs. Accurate Channel State Information In MIMO Downlink
Abstract: In a multiple transmit antenna, single antenna per receiver downlink channel with limited channel state feedback, we consider the following question: given a constraint on the total system-wide feedback load, is it preferable to get low-rate/coarse channel feedback from a large number of receivers, or high-rate/high-quality feedback from a smaller number of receivers? Acquiring feedback from many receivers allows multi-user diversity to be exploited, while high-rate feedback allows for very precise selection of beamforming directions. We show that there is a strong preference for obtaining high-quality feedback, and that obtaining near-perfect channel information from as many receivers as possible provides a significantly larger sum rate than collecting a few feedback bits from a large number of users.

333. Sum Power Iterative Water-Filling For Multi-Antenna Gaussian Broadcast Channels
Abstract: In this correspondence, we consider the problem of maximizing the sum rate of a multiple-antenna Gaussian broadcast channel (BC). It was recently found that dirty-paper coding is capacity achieving for this channel. In order to achieve capacity, the optimal transmission policy (i.e., the optimal transmit covariance structure) given the channel conditions and power constraint must be found. However, obtaining the optimal transmission policy when employing dirty-paper coding is a computationally complex nonconvex problem. We use duality to transform this problem into a well-structured convex multiple-access channel (MAC) problem. We exploit the structure of this problem and derive simple and fast iterative algorithms that provide the optimum transmission policies for the MAC, which can easily be mapped to the optimal BC policies.
Index Terms: Broadcast channel, dirty-paper coding, duality, multiple-access channel (MAC), multiple-input multiple-output (MIMO) systems.

334. On Optimal Power Control For Delay-Constrained Communication Over Fading Channels
Abstract: In this paper, we study the problem of optimal power control for delay-constrained communication over fading channels. Our objective is to find a power control law that optimizes the link layer performance; specifically, one that minimizes the delay bound violation probability (or equivalently, the packet drop probability), subject to constraints on average power, arrival rate, and delay bound. The transmission buffer size is assumed to be finite; hence, when the buffer is full, packets are dropped. The fading channel under study has a continuous state, e.g., Rayleigh fading. Since directly solving the power control problem (which optimizes the link layer performance) is particularly challenging, we decompose it into three subproblems and solve them iteratively; we call the resulting scheme Joint Queue Length Aware (JQLA) power control, which produces a locally optimal solution to the three subproblems. We prove that the solution that simultaneously solves the three subproblems is also an optimal solution to the optimal power control problem. Simulation results show that the JQLA scheme achieves superior performance over time domain water-filling and truncated channel inversion power control. For example, JQLA achieves a 10 dB gain over time domain water-filling power control at a packet drop probability of 10^-3.
Index Terms: Delay-constrained communication, power control, queuing analysis, delay bound violation probability, packet drop probability.

336. An Improved Algorithm For Blind Reverberation Time Estimation
Abstract: An improved algorithm for the estimation of the reverberation time (RT) from reverberant speech signals is presented. This blind estimation of the RT is based on a simple statistical model for the sound decay, such that the RT can be estimated by means of a maximum-likelihood (ML) estimator. The proposed algorithm has a significantly lower computational complexity than previous ML-based algorithms for RT estimation. This is achieved by a downsampling operation and a simple pre-selection of possible sound decays. The new algorithm is better suited to tracking time-varying RTs than related approaches. In addition, it can also estimate the RT in the presence of (moderate) background noise. The proposed algorithm can be employed to measure the RT of rooms from sound recordings without using a dedicated measurement setup. Another possible application is its use within speech dereverberation systems for hands-free devices or digital hearing aids.
Index Terms: Reverberation time, blind estimation, low complexity, speech dereverberation.

337. Fast And Accurate Sequential Floating Forward Feature Selection With The Bayes Classifier Applied To Speech Emotion Recognition
Abstract: This paper addresses subset feature selection performed by sequential floating forward selection (SFFS). The criterion employed in SFFS is the correct classification rate of the Bayes classifier, assuming that the features obey the multivariate Gaussian distribution. A theoretical analysis that models the number of correctly classified utterances as a hypergeometric random variable enables the derivation of an accurate estimate of the variance of the correct classification rate during cross-validation. By employing this variance estimate, we propose a fast SFFS variant. Experimental findings on the Danish Emotional Speech (DES) and Speech Under Simulated and Actual Stress (SUSAS) databases demonstrate that the SFFS computational time is reduced by 50% and that the correct classification rate for classifying speech into emotional states for the selected subset of features varies less than the correct classification rate found by the standard SFFS. Although the proposed SFFS variant is tested in the framework of speech emotion recognition, the theoretical results are valid for any classifier in the context of any wrapper algorithm.
Keywords: Bayes classifier, cross-validation, variance of the correct classification rate of the Bayes classifier, feature selection, wrappers.
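As a rough illustration of the search procedure in item 337 above (not the authors' fast variant or their MATLAB code), the Python skeleton below alternates SFFS's inclusion step with the conditional exclusion ("floating") step, driven by a wrapper criterion; scikit-learn's GaussianNB is used here as a stand-in for the Gaussian Bayes classifier, and the function names, the cv value and the size limit k are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def criterion(X, y, feats):
    """Wrapper criterion: cross-validated accuracy of a Gaussian Bayes classifier."""
    return cross_val_score(GaussianNB(), X[:, feats], y, cv=5).mean()

def sffs(X, y, k):
    """Sequential floating forward selection of up to k features."""
    selected, best_of_size = [], {}
    while len(selected) < k:
        # Inclusion: add the feature that maximizes the criterion.
        pool = [f for f in range(X.shape[1]) if f not in selected]
        f_add = max(pool, key=lambda f: criterion(X, y, selected + [f]))
        selected.append(f_add)
        best_of_size[len(selected)] = criterion(X, y, selected)
        # Conditional exclusion: drop a feature while doing so beats the best
        # subset of that smaller size seen so far (the "floating" step).
        while len(selected) > 2:
            f_rm = max(selected,
                       key=lambda f: criterion(X, y, [g for g in selected if g != f]))
            reduced = [g for g in selected if g != f_rm]
            score = criterion(X, y, reduced)
            if score > best_of_size.get(len(reduced), -np.inf):
                selected, best_of_size[len(reduced)] = reduced, score
            else:
                break
    return selected
```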

338. Hybrid DE Algorithm With Adaptive Crossover Operator For Solving Real-World Numerical Optimization Problems
Abstract: In this paper, we present results for the CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems, obtained using a hybrid differential evolution algorithm. The proposal uses a local search routine to improve convergence and an adaptive crossover operator. According to the obtained results, the algorithm is able to find solutions competitive with reported results.
Index Terms: Differential evolution algorithm, parameter selection, CEC competition.

339. Real-Time Compressive Tracking
Abstract: It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. While much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter the drift problem: as a result of self-taught learning, misaligned samples are likely to be added to and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from the multi-scale image feature space with a data-independent basis. Our appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is adopted to efficiently extract the features for the appearance model. We compress samples of foreground targets and the background using the same sparse measurement matrix. The tracking task is formulated as binary classification via a naive Bayes classifier with online update in the compressed domain. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art algorithms on challenging sequences in terms of efficiency, accuracy and robustness.

340. An Efficient Algorithm For Level Set Method Preserving Distance Function
Abstract: The level set method is a popular technique for tracking moving interfaces in several disciplines, including computer vision and fluid dynamics. However, despite its high flexibility, the original level set method is limited by two important numerical issues. First, the level set method does not implicitly preserve the level set function as a distance function, which is necessary to accurately estimate geometric features such as the curvature or the contour normal. Second, the level set algorithm is slow because the time step is limited by the standard CFL condition, which is also essential to the numerical stability of the iterative scheme. Recent advances with graph cut methods and continuous convex relaxation provide powerful alternatives to the level set method for image processing problems because they are fast, accurate and guaranteed to find the global minimizer independently of the initialization. These recent techniques use binary functions to represent the contour rather than the distance functions usually considered in the level set method. However, the binary function cannot provide the distance information, which can be essential for some applications, such as the surface reconstruction problem from scattered points and the cortex segmentation problem in medical imaging. In this paper, we propose a fast algorithm to preserve distance functions in level set methods. Our algorithm is inspired by recent efficient ℓ1 optimization techniques, which provide an efficient and easy to implement algorithm. It is interesting to note that our algorithm is not limited by the CFL condition and naturally preserves the level set function as a distance function during the evolution, which avoids the classical re-distancing problem in level set methods. We apply the proposed algorithm to image segmentation, where our method proves to be 5 to 6 times faster than standard distance preserving level set techniques. We also present two applications where preserving a distance function is essential. Nonetheless, our method stays generic and can be applied to any level set method that requires the distance information.
Index Terms: Level set, image segmentation, surface reconstruction, signed distance function, numerical scheme, splitting.

341. Efficient Misalignment-Robust Representation For Real-Time Face Recognition
Abstract: Sparse representation techniques for robust face recognition have been widely studied in the past several years. Recently, face recognition with simultaneous misalignment, occlusion and other variations has achieved interesting results via robust alignment by sparse representation (RASR). In RASR, the best alignment of a testing sample is sought subject by subject in the database. However, such an exhaustive search strategy can make the time complexity of RASR prohibitive in large-scale face databases. In this paper, we propose a novel scheme, namely misalignment-robust representation (MRR), by representing the misaligned testing sample in the transformed face space spanned by all subjects. MRR seeks the best alignment via a two-step optimization with a coarse-to-fine search strategy, which needs only two deformation-recovery operations. Extensive experiments on representative face databases show that MRR has almost the same accuracy as RASR in various face recognition and verification tasks but runs tens to hundreds of times faster than RASR. The running time of MRR is less than 1 second on the large-scale Multi-PIE face database, demonstrating its great potential for real-time face recognition.

342. Robust Point Matching Revisited: A Concave Optimization Approach
Abstract: The well-known robust point matching (RPM) method uses deterministic annealing for optimization, and it has two problems. First, it cannot guarantee the global optimality of the solution and tends to align the centers of two point sets. Second, deformation needs to be regularized to avoid the generation of undesirable results. To address these problems, in this paper we show that the energy function of RPM can be reduced to a concave function with very few non-rigid terms after eliminating the transformation variables and applying a linear transformation; we then propose to use a concave optimization technique to minimize the resulting energy function. The proposed method scales well with problem size, achieves the globally optimal solution, and does not need regularization for simple transformations such as the similarity transform. Experiments on synthetic and real data validate the advantages of our method in comparison with state-of-the-art methods.

343. Canny Edge Detection Enhancement By Scale Multiplication
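No abstract is listed for item 343, but the idea behind scale multiplication is simple enough to sketch: edge responses computed at two scales are multiplied, which reinforces true edges (present at both scales) while suppressing noise (present mainly at the fine scale). The Python fragment below is an illustration only, assuming SciPy's Gaussian gradient magnitude as the edge response; it omits Canny's non-maximum suppression and hysteresis steps, and the scales and threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def scale_multiplied_edge_response(img, sigma1=1.0, sigma2=2.0):
    """Multiply edge responses at two scales to reinforce true edges."""
    r1 = gaussian_gradient_magnitude(img.astype(float), sigma1)  # fine scale
    r2 = gaussian_gradient_magnitude(img.astype(float), sigma2)  # coarse scale
    return r1 * r2

# Toy usage: a vertical step edge corrupted by noise.
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[:, 32:] = 1.0
img += 0.1 * rng.standard_normal(img.shape)
edges = scale_multiplied_edge_response(img) > 0.05   # threshold is an assumption
print(edges.sum())
```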

344. Robust Object Tracking Using Joint Color-Texture Histogram
Abstract: A novel object tracking algorithm is presented in this paper that uses a joint color-texture histogram to represent the target and then applies it within the mean shift framework. In addition to the conventional color histogram features, texture features of the object are extracted using the local binary pattern (LBP) technique to represent the object. The major uniform LBP patterns are exploited to form a mask for joint color-texture feature selection. Compared with traditional color histogram based algorithms that use the whole target region for tracking, the proposed algorithm effectively extracts the edge and corner features in the target region, which characterize and represent the target more robustly. The experimental results validate that the proposed method greatly improves tracking accuracy and efficiency with fewer mean shift iterations than standard mean shift tracking. It can robustly track the target under complex scenes, such as when the target and background have similar appearance, where traditional color-based schemes may fail.
Keywords: Object tracking, mean shift, local binary pattern, color histogram.

345. Distance Regularized Level Set Evolution And Its Application To Image Segmentation
Abstract: Level set methods have been widely used in image processing and computer vision. In conventional level set formulations, the level set function typically develops irregularities during its evolution, which may cause numerical errors and eventually destroy the stability of the evolution. Therefore, a numerical remedy, called re-initialization, is typically applied to periodically replace the degraded level set function with a signed distance function. However, the practice of re-initialization not only raises serious problems, such as when and how it should be performed, but also affects numerical accuracy in an undesirable way. This paper proposes a new variational level set formulation in which the regularity of the level set function is intrinsically maintained during the level set evolution. The level set evolution is derived as the gradient flow that minimizes an energy functional with a distance regularization term and an external energy that drives the motion of the zero level set toward desired locations. The distance regularization term is defined with a potential function such that the derived level set evolution has a unique forward-and-backward (FAB) diffusion effect, which is able to maintain a desired shape of the level set function, particularly a signed distance profile near the zero level set. This yields a new type of level set evolution called distance regularized level set evolution (DRLSE). The distance regularization effect eliminates the need for re-initialization and thereby avoids its induced numerical errors. In contrast to complicated implementations of conventional level set formulations, a simpler and more efficient finite difference scheme can be used to implement the DRLSE formulation. DRLSE also allows the use of more general and efficient initialization of the level set function. In its numerical implementation, relatively large time steps can be used in the finite difference scheme to reduce the number of iterations, while ensuring sufficient numerical accuracy. To demonstrate the effectiveness of the DRLSE formulation, we apply it to an edge-based active contour model for image segmentation, and provide a simple narrowband implementation to greatly reduce computational cost.
Index Terms: Forward and backward diffusion, image segmentation, level set method, narrowband, re-initialization.

346. Minimization Of Region-Scalable Fitting Energy For Image Segmentation
Abstract: Intensity inhomogeneities often occur in real-world images and may cause considerable difficulties in image segmentation. In order to overcome these difficulties, we propose a region-based active contour model that draws upon intensity information in local regions at a controllable scale. A data fitting energy is defined in terms of a contour and two fitting functions that locally approximate the image intensities on the two sides of the contour. This energy is then incorporated into a variational level set formulation with a level set regularization term, from which a curve evolution equation is derived for energy minimization. Due to a kernel function in the data fitting term, intensity information in local regions is extracted to guide the motion of the contour, which thereby enables our model to cope with intensity inhomogeneity. In addition, the regularity of the level set function is intrinsically preserved by the level set regularization term to ensure accurate computation and avoid expensive re-initialization of the evolving level set function. Experimental results for synthetic and real images show the desirable performance of our method.
Index Terms: Image segmentation, intensity inhomogeneity, level set method, region-scalable fitting energy, variational method.
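The data fitting term in item 346 above relies on two fitting functions that locally approximate the image on either side of the contour. The Python sketch below shows one way such functions can be computed with a Gaussian kernel and a smoothed Heaviside of the level set function; it covers only this single ingredient of the model (not the full curve evolution), and the smoothing parameters, epsilon value and function names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def region_scalable_fitting_functions(img, phi, sigma=3.0, eps=1.0):
    """
    img: 2-D image, phi: level set function (same shape).
    Returns (f1, f2): kernel-weighted local intensity averages on the two
    sides of the zero level set of phi.
    """
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))   # smoothed Heaviside
    K = lambda f: gaussian_filter(f, sigma)                  # localizing Gaussian kernel
    f1 = K(H * img) / (K(H) + 1e-8)                # local mean where phi > 0
    f2 = K((1.0 - H) * img) / (K(1.0 - H) + 1e-8)  # local mean where phi < 0
    return f1, f2
```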

347. Motion Tracking
Abstract: The motion tracking task is decomposed into two independent subproblems. The first is to detect foreground objects on a frame-wise basis, by labelling each pixel in an image frame as either foreground or background. The second is to couple object observations at different points in a sequence to yield the object's motion trajectory.

348. A Level Set Method For Image Segmentation In The Presence Of Intensity Inhomogeneities With Application To MRI
Abstract: Intensity inhomogeneity often occurs in real-world images and presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, so they often fail to provide accurate segmentation results in the presence of intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on a model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, and faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.
Index Terms: Bias correction, image segmentation, intensity inhomogeneity, level set, MRI.
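The first subproblem of item 347 above (per-frame foreground/background labelling) can be illustrated with a simple running-average background subtractor, sketched below in Python. This is a generic baseline for illustration, not the project's method; the class name, learning rate and threshold are assumptions.

```python
import numpy as np

class RunningAverageBackground:
    """Label pixels as foreground when they deviate from a slowly adapting background."""

    def __init__(self, alpha=0.05, threshold=0.1):
        self.alpha = alpha          # background learning rate (assumed value)
        self.threshold = threshold  # foreground decision threshold (assumed value)
        self.background = None

    def apply(self, frame):
        frame = frame.astype(float)
        if self.background is None:
            self.background = frame.copy()
        mask = np.abs(frame - self.background) > self.threshold   # per-pixel label
        # Update the background only where the pixel currently looks like background.
        self.background[~mask] = ((1 - self.alpha) * self.background[~mask]
                                  + self.alpha * frame[~mask])
        return mask

# Toy usage on synthetic frames containing a moving bright blob.
bg = RunningAverageBackground()
for t in range(5):
    frame = np.zeros((32, 32)); frame[10:14, 5 + 3*t: 9 + 3*t] = 1.0
    fg = bg.apply(frame)
print(fg.sum())   # number of foreground pixels in the last frame
```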
