
International Journal of Advances in Engineering & Technology, July 2012.

IJAET ISSN: 2231-1963

ACOUSTIC ECHO CANCELLATION USING INDEPENDENT COMPONENT ANALYSIS


Rohini Korde, Shashikant Sahare
Cummins College of Engineering for Women, Pune, India

ABSTRACT
This paper proposes a new technique of using a noise-suppressing nonlinearity in the adaptive filter error feedback loop of an acoustic echo canceler (AEC) based on the normalized least mean square (NLMS) algorithm when there is interference at the near end. In particular, the error enhancement technique is well-founded in the information-theoretic sense and has strong ties to independent component analysis (ICA), which is the basis for blind source separation (BSS) and permits unsupervised adaptation in the presence of multiple interfering signals. The single-channel AEC problem can be viewed as a special case of semi-blind source separation (SBSS) where one of the source signals is partially known, i.e., the far-end microphone signal that generates the near-end acoustic echo. A system approach to robust AEC is motivated, in which a proper integration of the LMS algorithm with the error recovery nonlinearity (ERN) into the AEC system allows for continuous and stable adaptation even during double-talk without precise estimation of the signal statistics.

KEYWORDS:

Acoustic echo cancellation, error nonlinearity, independent component analysis, residual echo enhancement, semi-blind source separation.

I. INTRODUCTION

The adaptive filter technique has been applied to many system identification problems in communications and noise control. The most popular algorithms, i.e., LMS and RLS, are based on the idea that the effect of additive noise is to be suppressed in the least-squares sense. But if the noise is non-Gaussian, the performance of these algorithms degrades significantly. On the other hand, in recent years, independent component analysis (ICA) has been attracting much attention in many fields. The residual echo enhancement procedure is used to counter the effect of additive noise during the adaptation of acoustic echo cancellation (AEC) based on the normalized least mean square (NLMS) algorithm, which is preferred over its LMS counterpart for robustness to changes in the reference signal magnitude [1]. The procedure is illustrated in Figure 1 through the application of a memoryless nonlinearity to the residual echo.

Figure 1: Adaptive filtering with linear or nonlinear distortion on the true, noise-free acoustic echo d(n).

w(n+1) = w(n) + μ f(ẽ(n)) x(n) / ||x(n)||^2    (1)

From equation (1), x(n) = [x(n), x(n-1), ..., x(n-L+1)]^T is the reference signal vector of length L at time n, w(n) = [w0(n), w1(n), ..., wL-1(n)]^T is the filter coefficient vector of the same length, T is the transposition operator, μ is the adaptation step-size parameter and f is the noise-suppressing nonlinearity. The observed estimation error is

ẽ(n) = d̃(n) - d̂(n) = e(n) + ν(n)    (2)

The filter output d̂(n) = w^T(n) x(n) is an estimate of the desired signal d(n) = h^T x(n), which is observed only after distortion by the additive noise ν(n), i.e., d̃(n) = d(n) + ν(n). The true error, or residual echo, e(n) = d(n) - d̂(n) is the estimation error that would be obtained in a noise-free situation. The additive noise ν may be near-end speech as in Figure 1 (i.e., double-talk) or any ambient background noise. It was shown in [2] that such a procedure is optimal in terms of the steady-state mean square error (MSE) or the mean square deviation (MSD) when an adaptive step-size procedure is used. The error recovery nonlinearity (ERN) f(·) is applied to the filter estimation error ẽ(n). The use of an error nonlinearity to modify the convergence behavior of the LMS algorithm has previously been addressed by many other researchers. Furthermore, the ERN can also be viewed as a function that controls the step-size under sub-optimal conditions reflected in the error statistics when the signals are no longer Gaussian distributed, as is most often the case in reality, and it may be combined with other existing noise-robust schemes to improve the overall performance of the LMS algorithm. Signal enhancement provides a way to enhance the residual echo, while the regularization parameter is kept large enough to prevent the NLMS algorithm from diverging when the signal-to-noise ratio (SNR) between the reference signal x and the noise ν is very small. The combined approach enables the AEC to be performed continuously in the presence of both ambient noise and double-talk without the double-talk detection (DTD) or voice activity detection (VAD) procedures for freezing the filter adaptation when the system encounters ill-conditioned situations. In fact, the technique is well-founded in an information-theoretic sense and has strong ties to algorithms based on independent component analysis (ICA) [3]. It will become clear that even for single-channel AEC, the combination of the LMS algorithm and the error enhancement procedure is a specific case of so-called semi-blind source separation (SBSS) based on ICA, which allows the recovery of a target signal among interferences when some of the source signals are available [4]. The two traditional performance measures for AEC are the echo return loss enhancement (ERLE), which measures the MSE performance,

ERLE (dB) = 10 log10( E{d^2(n)} / E{ẽ^2(n)} )    (3)

and the normalized misalignment, which measures the MSD performance,

Misalignment (dB) = 10 log10( ||h - w(n)||^2 / ||h||^2 )    (4)

A more objective MSE performance measure is what we refer to as the true ERLE (tERLE), given by

tERLE (dB) = 10 log10( E{d^2(n)} / E{e^2(n)} )    (5)
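For concreteness, the sketch below shows one way the NLMS recursion (1) with an error recovery nonlinearity and the measures (3)-(5) could be coded. It is an illustration only: the function names, default parameter values and placeholder identity nonlinearity are assumptions, not the exact configuration used in this work.

```python
import numpy as np

def nlms_ern(x, d_tilde, L=1024, mu=0.5, delta=1e-6, f=None):
    """NLMS adaptation of an L-tap filter w with an error recovery nonlinearity f
    applied to the observed error, as in equation (1). d_tilde is the noisy
    microphone signal (echo + near-end interference), x is the far-end reference."""
    if f is None:
        f = lambda e: e                      # identity nonlinearity -> plain NLMS
    w = np.zeros(L)
    e_obs = np.zeros(len(x))
    for n in range(L, len(x)):
        x_vec = x[n - L + 1:n + 1][::-1]     # reference vector [x(n), ..., x(n-L+1)]
        d_hat = w @ x_vec                    # filter output, estimate of the echo
        e_obs[n] = d_tilde[n] - d_hat        # observed error (residual echo + noise)
        norm = x_vec @ x_vec + delta         # regularized normalization term
        w = w + mu * f(e_obs[n]) * x_vec / norm
    return w, e_obs

def erle_db(d, e):
    """Echo return loss enhancement, cf. (3)/(5): echo power over error power."""
    return 10.0 * np.log10(np.mean(d ** 2) / np.mean(e ** 2))

def misalignment_db(h, w):
    """Normalized misalignment, cf. (4)."""
    return 10.0 * np.log10(np.sum((h - w) ** 2) / np.sum(h ** 2))
```

Any candidate ERN, e.g. a simple clipping rule f = lambda e: np.clip(e, -c, c) for some hypothetical threshold c, can be passed through the argument f; principled choices follow from the noise statistics discussed in Section II.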

II. NOISE-SUPPRESSING NONLINEARITIES

Given the additive relationship for the observed estimation error ẽ as in (2), and assuming the noise ν is statistically independent of the true estimation error e, the minimum mean square error (MMSE) Bayesian estimation procedure can be used to define a memoryless nonlinearity for the error enhancement procedure. In particular, if e and ν are zero-mean Gaussian distributed, the MMSE estimate reduces to the linear Wiener enhancement rule. At the heart of Bayesian estimation is the conditional probability given by the Bayes formula,

p(e|ẽ) = p(ẽ|e) p(e) / p(ẽ)    (6)

The MMSE estimate is obtained by minimizing the expectation of the squared residual E{(ê - e)^2 | ẽ} with respect to the estimate ê conditioned on the observation ẽ, resulting in ê = E{e|ẽ}, i.e.,

f_MMSE(ẽ) = E{e|ẽ}    (7)

Let the score function be defined as

φ_s(s) = -∂/∂s log p_s(s) = -p_s'(s) / p_s(s)    (8)

where p_s(s) is the probability density function (PDF) of a random variable s (here that of the noisy estimation error ẽ = e + ν, with either e or ν assumed Gaussian-distributed with variance σ^2), ∂/∂s is the derivative operator, and (8) measures the relative rate at which p_s(s) changes at a value s. Let us consider three random variables s, t, and u, where s = t + u, to reflect the additive distortion model in (2). This gives the connection to the optimal error nonlinearity: when the observed adaptive filter error is modeled as ẽ = e + ν, where e is the true, zero-mean Gaussian-distributed error, the optimal error nonlinearity for the LMS algorithm that minimizes the steady-state MSE is [11]

f_opt(ẽ) = -∂/∂ẽ log p_ν(ẽ) = φ_ν(ẽ)    (9)

which is simply the score function (8) of the noise ν evaluated at ẽ. The ERN is therefore optimal in the LMS sense for any distribution of the local noise ν. Regarding the non-Gaussianity of the filter estimation error, the Gaussianity of e is usually assumed for the LMS algorithm when the filter length is long enough, by the argument of the central limit theorem [11]. The error enhancement technique may be interpreted as a generalization of the adaptive step-size method for any probability distribution of e or ν. That is, the step-size should be adjusted nonlinearly as a function of the signal level for non-Gaussian signals even when their statistics remain stationary. The ERN technique enables the incorporation of statistical source information into linear MSE-based adaptive filtering. In fact, the ERN suppresses the noise signal better than the Wiener enhancement rule when either e or ν is non-Gaussian distributed. In any case, most of the signals encountered in real life are not Gaussian distributed; e.g., the speech signal distribution is widely regarded as super-Gaussian in either the time or the frequency domain [12]. This leads naturally to the role of ICA, as discussed in the next section.
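As an illustration of (7)-(9), the sketch below implements two candidate nonlinearities: the linear Wiener-type rule obtained when both e and ν are Gaussian, and the score function of zero-mean Laplacian (super-Gaussian) noise. The variance arguments are assumed to be estimated elsewhere, and the function names are illustrative, not the paper's exact implementation.

```python
import numpy as np

def ern_wiener(e_obs, var_e, var_nu):
    """MMSE estimate of the true error e from the observed error e_obs = e + nu
    when both e and nu are zero-mean Gaussian: a simple Wiener gain, cf. (7)."""
    return (var_e / (var_e + var_nu)) * e_obs

def score_gaussian(e_obs, var_nu):
    """Score function of zero-mean Gaussian noise, cf. (8)/(9): a linear scaling,
    which recovers the ordinary (N)LMS behaviour up to the step size."""
    return e_obs / var_nu

def score_laplacian(e_obs, var_nu):
    """Score function -d/de log p_nu(e) for zero-mean Laplacian noise with
    variance var_nu, cf. (8)/(9); for super-Gaussian noise this behaves like a
    compressive, sign-type nonlinearity on large errors."""
    b = np.sqrt(var_nu / 2.0)                # Laplacian scale parameter
    return np.sign(e_obs) / b
```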

III. ROBUST ADAPTATION THROUGH ICA

A single-channel AEC setup as shown in Figure 2 (including residual echo suppression (RES) for the sake of a complete AEC system) can be viewed as a special case of the source separation problem for the recovery of the near-end signals when some of the source signals are partially known, i.e., the far-end (reference) signal.

Figure 2: Single-channel AEC with the near-end noise ν (local speech) added to the desired response d (acoustic echo).

By following the source separation convention, the mixing system in Figure 2 can be modeled linearly as

[ d̃(n) ]   [ 1   h^T ] [ ν(n) ]
[ x(n) ] = [ 0    I  ] [ x(n) ]    (10)

and the corresponding de-mixing (AEC) system as

[ ẽ(n) ]   [ a(n)  -w^T(n) ] [ d̃(n) ]
[ x(n) ] = [  0       I    ] [ x(n) ]    (11)

Then the natural gradient (NG) algorithm that maximizes the independence between ẽ and x is given by [5]

w(n+1) = w(n) + μ1 φ(ẽ(n)) x(n)    (12)

a(n+1) = a(n) + μ2 [1 - φ(ẽ(n)) ẽ(n)] a(n)    (13)

for some adaptation step-sizes μ1 and μ2, where φ is the score function defined in (8). The usual MSE-based system identification is obtained when a(n) = 1, so that ẽ(n) = d̃(n) - w^T(n) x(n), and the NG algorithm simplifies to

w(n+1) = w(n) + μ φ(ẽ(n)) x(n)    (14)

which can be interpreted as the ICA-based LMS algorithm. Several other interpretations are as follows. It is well known that the LMS algorithm orthogonalizes (decorrelates, assuming zero mean) ẽ and x on average through the second-order statistics (SOS), i.e., E{ẽx} = 0. The application of φ to ẽ during the LMS optimization procedure attempts to make ẽ and x independent, which means decorrelation through the second- and all higher-order statistics (HOS). Since statistical independence implies second-order decorrelation but not vice versa, the ICA-based LMS algorithm that applies the score function to the estimation error is a generalization of the LMS algorithm for non-Gaussian signals, whereas Gaussian signals are fully characterized by the SOS alone. The MMSE noise-suppressing nonlinearities defined in Section II are governed by the score function of ẽ. Thus the error enhancement procedure helps the adaptive filter converge to the optimal solution when e is non-Gaussian distributed. In addition, the error enhancement procedure can be interpreted as a generalization of the adaptive step-size procedure for any probability distribution of e or ν, as most of the signals encountered in reality are non-Gaussian distributed. It allows prior statistical information to be brought into LMS adaptive filtering, since the step-size is adjusted nonlinearly for non-Gaussian signals even though their statistics remain stationary. Also, scaling is an integral part of the MMSE nonlinearities and is implemented through the SNR. Hence, the error enhancement procedure is intrinsically capable of performing the DTD function when ν is a local speech signal, and its use with the LMS algorithm can be considered a straightforward alternative to the NG algorithm.
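A minimal sketch of one adaptation step of (11)-(13) follows, assuming the score function φ and the step sizes are supplied by the caller; with a held at 1 the same routine reduces to the ICA-based LMS update (14). All names are illustrative.

```python
import numpy as np

def ng_step(w, a, x_vec, d_tilde_n, mu1, mu2, phi):
    """One natural-gradient step: w adapts the echo path estimate, a scales the
    microphone channel, phi is a score function applied to the observed error."""
    e_obs = a * d_tilde_n - w @ x_vec              # de-mixing output, as in (11)
    w = w + mu1 * phi(e_obs) * x_vec               # coefficient update, (12)
    a = a + mu2 * (1.0 - phi(e_obs) * e_obs) * a   # scale-factor update, (13)
    return w, a, e_obs
```

Fixing a = 1 and skipping the second update leaves only the first line of adaptation, i.e., the ICA-based LMS recursion (14).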

IV. SIMULATION RESULTS

The echo path impulse response used in the simulation has a length of 128 ms, consisting of 1024 coefficients at an 8 kHz sampling rate.
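The measured impulse response itself is not reproduced here; as a stand-in, the following sketch builds a synthetic 1024-tap (128 ms at 8 kHz) exponentially decaying echo path and the corresponding noisy microphone signal, which is sufficient to exercise the algorithms above. The decay constant, noise level and white reference signal are arbitrary assumptions.

```python
import numpy as np

fs = 8000                                      # sampling rate (Hz)
L = 1024                                       # 128 ms echo path at 8 kHz
rng = np.random.default_rng(0)

h = rng.standard_normal(L) * np.exp(-np.arange(L) / (0.03 * fs))  # ~30 ms decay
h /= np.linalg.norm(h)                         # unit-norm echo path

x = rng.standard_normal(5 * fs)                # far-end reference (white stand-in)
d = np.convolve(x, h)[:len(x)]                 # true, noise-free acoustic echo
nu = 0.1 * rng.standard_normal(len(x))         # near-end noise / double-talk stand-in
d_tilde = d + nu                               # observed microphone signal
```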


Figure 3: Echo signal.

ICA preprocessing is shown in Figures 4 and 5 and consists of the following steps [13]:
1. Centering: the most basic and necessary preprocessing is to center x, i.e., subtract its mean vector m = E{x} so as to make x a zero-mean variable, where x is the random vector of mixed (signal + noise) observations.
2. Whitening: before the application of the ICA algorithm (and after centering), the observed vector x is transformed linearly into a new vector x̃ that is white, i.e., its components are uncorrelated and their variances equal unity. One popular method for whitening is to use the eigenvalue decomposition (EVD) of the covariance matrix E{xx^T} = EDE^T, where E is the orthogonal matrix of eigenvectors of E{xx^T} and D is the diagonal matrix of its eigenvalues, D = diag(d1, ..., dn). Note that E{xx^T} can be estimated in a standard way from the available samples of x. Whitening can then be done by

x̃ = E D^(-1/2) E^T x    (15)
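A compact sketch of these two preprocessing steps, using the EVD-based whitening of (15), might look as follows; the function name and the data layout (one mixed signal per row) are assumptions.

```python
import numpy as np

def center_and_whiten(X):
    """ICA preprocessing for observations X of shape (n_signals, n_samples):
    remove the mean (centering), then whiten with the EVD of the covariance
    matrix, as in equation (15)."""
    Xc = X - X.mean(axis=1, keepdims=True)               # 1. centering
    cov = Xc @ Xc.T / Xc.shape[1]                        # sample covariance E{xx^T}
    d, E = np.linalg.eigh(cov)                           # EVD: cov = E diag(d) E^T
    W_whiten = E @ np.diag(1.0 / np.sqrt(d + 1e-12)) @ E.T   # E D^(-1/2) E^T
    Z = W_whiten @ Xc                                    # whitened data, cov(Z) ~ I
    return Z, W_whiten
```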

Figure 4: Mixing of two signals.

Figure 5: Whitening of signals.

The FastICA algorithm was used in the post-processing stage of the ICA method. Figure 6 shows the ERLE performance of the algorithm [6]; the parameter a acts as a scale factor, and the figure illustrates the benefit of using the variable a.
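For reference, a one-unit FastICA iteration with the tanh contrast, operating on whitened data such as that produced above, can be sketched as follows; the contrast function, tolerance and iteration limit are illustrative choices rather than the exact settings used for Figure 6.

```python
import numpy as np

def fastica_one_unit(Z, max_iter=200, tol=1e-6, seed=0):
    """One-unit FastICA on whitened data Z of shape (n_dims, n_samples),
    using the tanh nonlinearity; returns one de-mixing vector w."""
    rng = np.random.default_rng(seed)
    n_dims = Z.shape[0]
    w = rng.standard_normal(n_dims)
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        proj = w @ Z                                   # projections w^T z
        g, g_prime = np.tanh(proj), 1.0 - np.tanh(proj) ** 2
        w_new = (Z * g).mean(axis=1) - g_prime.mean() * w   # fixed-point update
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1.0) < tol    # direction unchanged (up to sign)
        w = w_new
        if converged:
            break
    return w
```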


Figure 6: Performance of the proposed algorithm with the variable a.

V. CONCLUSION

The error enhancement procedure has strong ties to semi-blind source separation (SBSS) based on independent component analysis (ICA), which allows the recovery of a target signal among interfering signals when only some of the source signals are available. The error enhancement paradigm arises from a very simple notion: reducing the effect of distortion, linear or nonlinear, remaining in the residual echo after the AEC should provide improved linear adaptive filtering performance in a noisy condition. The error recovery nonlinearity (ERN), applied to the filter estimation error before the adaptation of the filter coefficients, can be derived from well-established signal enhancement techniques based on statistical analysis. The combined technique evidently has deep connections to the traditional noise-robust AEC schemes, namely the adaptive step-size and regularization procedures, and it can be readily utilized not only in the presence of an additive local noise but also when there is a nonlinear distortion on the acoustic echo due to, for example, a speech codec. The ERN technique can be viewed as a generalization of the adaptive step-size procedure for the non-Gaussian signals encountered in most real-world situations. It thus becomes possible to advantageously circumvent the conventional practice of interrupting the filter adaptation in the presence of significant near-end interferences (e.g., double-talk).

VI. FUTURE WORK

This work points toward the use of frequency-domain AEC during double-talk. HOS-based adaptive algorithms are normally suited for batch-wise, offline adaptation, such that a misspecification in the signal statistics, or the PDF in general, does not diminish the effectiveness of ICA [3]. The performance of an ICA-based online adaptive algorithm depends on how well the adaptation procedure is modified to retain the advantage of batch learning, e.g., the use of so-called batch-online adaptation for SBSS in [9]. The error enhancement technique can be applied to traditional multi-channel AEC [10] and combined with a RES [11] with excellent results. There is no need to freeze the filter adaptation entirely during a double-talk situation when the error enhancement procedure using a compressive ERN and a regularization procedure are combined appropriately. Such a combination allows the filter adaptation to be carried out continuously and recursively on a batch of very noisy data during frequency-domain AEC.

ACKNOWLEDGEMENTS
I would like to thank my project guide, Mr. S. L. Sahare, for his valuable guidance. Above all, I would like to thank my principal, Dr. Madhuri Khambete, without whose blessing I would not have been able to accomplish my goal.

REFERENCES
[1] T. S. Wada and B.-H. Juang, "Enhancement of residual echo for improved acoustic echo cancellation," in Proc. EURASIP EUSIPCO, Sep. 2007, pp. 1620-1624.
[2] T. S. Wada and B.-H. Juang, "Acoustic echo cancellation based on independent component analysis and integrated residual echo enhancement," in Proc. IEEE WASPAA, Oct. 2009, pp. 205-208.

[3] A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Analysis. John Wiley & Sons, 2001.
[4] T. S. Wada, S. Miyabe, and B.-H. Juang, "Use of decorrelation procedure for source and echo suppression," in Proc. IWAENC, Sep. 2008, paper no. 9086.
[5] J.-M. Yang and H. Sakai, "A robust ICA-based adaptive filter algorithm for system identification," IEEE Trans. Circuits Syst. II: Express Briefs, vol. 55, no. 12, pp. 1259-1263, Dec. 2008.
[6] J. Benesty, D. R. Morgan, and M. M. Sondhi, "A better understanding and an improved solution to the specific problems of stereophonic acoustic echo cancellation," IEEE Trans. Speech Audio Processing, vol. 6, no. 2, pp. 156-165, Mar. 1998.
[7] S. Haykin, Adaptive Filter Theory, 4th ed. Prentice Hall, 2002.
[8] F. Nesta, T. S. Wada, and B.-H. Juang, "Batch-online semi-blind source separation applied to multi-channel acoustic echo cancellation," IEEE Trans. Audio Speech Language Process., vol. 19, no. 3, pp. 583-599, Mar. 2011.
[9] T. S. Wada and B.-H. Juang, "Multi-channel acoustic echo cancellation based on residual echo enhancement with effective channel decorrelation via resampling," in Proc. IWAENC, Sep. 2010.
[10] J. Wung, T. S. Wada, B.-H. Juang, B. Lee, T. Kalker, and R. Schafer, "System approach to residual echo suppression in robust hands-free teleconferencing," in Proc. IEEE ICASSP, May 2011, pp. 445-448.
[11] T. Y. Al-Naffouri and A. H. Sayed, "Adaptive filters with error nonlinearities: Mean-square analysis and optimum design," EURASIP Applied Signal Process., vol. 2001, no. 4, pp. 192-205, Oct. 2001.
[12] S. Gazor and W. Zhang, "Speech probability distribution," IEEE Signal Process. Letters, vol. 10, no. 7, pp. 204-207, Jul. 2003.
[13] A. Hyvärinen and E. Oja, "Independent component analysis: Algorithms and applications," Neural Networks, vol. 13, no. 4-5, pp. 411-430, 2000.

AUTHORS
Rohini Korde is currently pursuing the Master's degree program in signal processing at MKSSS Cummins College of Engineering for Women, Pune University, India.

Shashikant Sahare is an assistant professor in the Electronics and Telecommunication Department of MKSSS Cummins College of Engineering for Women, Pune University, India.

