
FINAL MINI PROJECT REPORT ON

NOISY SINUSOID

BY: NAGENDRA KUMAR (124102042), MANISH SHAW (124102041)

( Group No.- 14)

UNDER THE GUIDANCE OF DR. A.K. KARTIK

1. A BRIEF ANALYSIS OF THE PROBLEM STATEMENT

The given problem statement is to detect and identify the frequency and phase of a discrete sinusoid which has been corrupted by white noise of different variances. Mathematically, the signal can be represented by

x[n] = sin(ω0 n + φ) + N(0, σ²)

where N(0, σ²) represents the Additive White Gaussian Noise (AWGN) component with fixed zero mean and variable variance σ², and φ is a phase angle lying in [0, π]. The procedure followed in this work has been developed in the following steps.

Frequency estimation:
A. Signal input.
B. Frequency domain transformation (DFT/FFT).
C. Frequency estimation from the maximum peak.
D. Band pass filtering around the estimated frequency.
E. Inverse transformation into the time domain (IDFT/IFFT).
F. Smoothing the signal by the LOESS method.

Phase estimation:
G. Estimate every zero-crossing of the signal.
H. Estimate the phase at every zero-crossing.
I. Take the average of the estimated phases.

2. FREQUENCY ESTIMATION
2.1 INTRODUCTION
The usual method of estimating a signal corrupted by additive noise is to pass it through a filter that tends to suppress the noise while leaving the signal relatively unchanged, i.e. direct filtering. Filters used for direct filtering can be either fixed or adaptive.

1. Fixed filters - The design of fixed filters requires a priori knowledge of both the signal and the noise, i.e. if we know the signal and noise beforehand, we can design a filter that passes the frequencies contained in the signal and rejects the frequency band occupied by the noise.

2. Adaptive filters - Adaptive filters, on the other hand, have the ability to adjust their impulse response to filter out the correlated signal in the input. They require little or no a priori knowledge of the signal and noise characteristics. (If the signal is narrowband and the noise broadband, which is usually the case, or vice versa, no a priori information is needed; otherwise they require a signal (desired response) that is correlated in some sense to the signal to be estimated.) Moreover, adaptive filters have the capability of adaptively tracking the signal under non-stationary conditions.

During the frequency estimation process we first find the frequency before passing the signal through a filter, so we used a fixed filter. During the phase estimation process we used an adaptive procedure to estimate the phase when the frequency was not known to the transmitter and receiver.

2.2 Frequency domain transformation (DFT):

We receive the signal in the time domain. In general, the time domain signal does not directly give information about frequency, so we have to transform it into another domain that does, and the transformation must be invertible so that the original signal can be recovered. One such pair of transforms between the time domain and the frequency domain is the Discrete Fourier Transform / Inverse Discrete Fourier Transform (DFT/IDFT), which is what we have used in this project. The mathematics of the above-mentioned transform is as follows.

DFT: F(k) = DFT{f(n)} = Σ_{n=0}^{N-1} f(n) e^{-j2πkn/N}, where k = 0, 1, ..., N-1.

Since MATLAB does not support zero-based indices, the values are shifted by one index so that

F(k) = DFT{f(n)} = Σ_{n=1}^{N} f(n) e^{-j2π(k-1)(n-1)/N}, where k = 1, ..., N

(because we did our analysis in MATLAB). The inverse transform follows accordingly as

IDFT: f(n) = IDFT{F(k)} = (1/N) Σ_{k=1}^{N} F(k) e^{j2π(k-1)(n-1)/N}, where n = 1, ..., N.

In MATLAB, the function fft computes the discrete Fourier transform of the time domain signal and places the zero-frequency point at the left corner. Hence, the fftshift command is used to reset the zero point to the centre of the plot. This way, both positive and negative frequency components can be viewed and operated on. In cases where the length of the data is a power of 2, or a product of small prime factors, fast Fourier transform (FFT) algorithms are employed to compute the discrete Fourier transform. The function ifft computes the inverse discrete Fourier transform of the frequency domain signal.

2.3 Frequency estimation from maximum peak:

After applying the discrete Fourier transform to the received signal, we plot its absolute magnitude. The graph is given below.

Figure 1: DFT of noisy sinusoid

We observed two main points from the above plot:
a. Most of the signal energy lies near two points (i.e. near the positive and negative frequency components of the original sinusoid).
b. The plot is symmetric about the mid-point of the x-axis.

The generation of the frequency axis depends on the sampling of the input signal in the time domain. The values from -0.5 to 0.5 are uniformly split into the number of samples of the source signal, since any signal sampled above its Nyquist rate has its spectral spikes lying between -π and π rad/sample, which translates to discrete frequencies in the range -0.5 to 0.5. So we translate the index range [0, N] into the discrete frequency range by the following formula:

Discrete frequency = -0.5 + (index)/N

where N indicates the length of the data. The original sinusoidal signal contains only one positive and one negative frequency component, but Additive White Gaussian Noise (AWGN) contains all frequencies in the range [-π, +π] rad. So, after the original signal is mixed with the additive white Gaussian noise, the received signal contains frequency components throughout [-0.5, 0.5], but most of the signal energy lies near the original sinusoidal frequency.
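To make the index-to-frequency mapping concrete, here is a minimal MATLAB sketch (not the project's pid/mpeak routine given in Section 7; the frequency fo = 0.1, the variance 0.2 and the length N = 256 are assumed purely for illustration):

% Minimal sketch: estimate the discrete frequency from the largest FFT peak.
N  = 256;  n = 1:N;
fo = 0.1;                                     % assumed true discrete frequency (cycles/sample)
xn = sin(2*pi*fo*n) + sqrt(0.2)*randn(1,N);   % noisy sinusoid, noise variance 0.2
X  = abs(fftshift(fft(xn)));                  % magnitude spectrum, zero frequency at the centre
f  = linspace(-0.5, 0.5, N);                  % discrete frequency axis from -0.5 to 0.5
[~, k] = max(X);                              % index of the maximum peak
f_est = abs(-0.5 + k/N);                      % translate the index to a frequency, as in the text
disp(f_est)                                   % should be close to fo = 0.1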

PEAK IDENTIFICATION

Now, we perform a peak identification on the Fourier transform of the noisy sinusoid. The fine peak identification is performed by the function pid as follows. The increasing and decreasing trends of the curve are detected and labelled. Each change from decreasing to increasing is considered the border between two sub-peaks and is stored. Hence, the region between two such successive values holds a sub-peak of the plotted frequency domain representation.

After getting the index value of the maximum peak, we convert that index value into a frequency by the following formula:

Estimated sinusoidal frequency = -0.5 + (peak index)/N

2.4 Band pass filtering around estimated frequency:

An ideal band pass filter would have a completely flat pass band (i.e. no gain or attenuation throughout) and would completely attenuate all frequencies outside the pass band. In practice, no band pass filter is ideal. The filter does not completely attenuate all frequencies outside the desired range; in particular, there is a region just outside the intended pass band where frequencies are attenuated but not rejected. Additive White Gaussian Noise (AWGN) contains sinusoids of all frequencies from -π to π. Hence, it is logical to filter out the noise at those frequencies which are not close to the sinusoid frequency (i.e. the estimated frequency). This can easily be done with the help of a band pass filter, a system which allows only sinusoids whose frequencies fall within its pass band to pass through. An ideal band pass filter is characterized by a transfer function which looks like the one shown in Figure 2.
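As a rough illustration of this filtering step (a minimal sketch, not the project's bfilt function; N, f_est, the noise variance and the half-width bw are assumed values):

% Minimal sketch: zero out spectrum bins away from the estimated frequency.
N  = 256;  n = 1:N;  f_est = 0.1;                          % assumed values for illustration
xn = sin(2*pi*f_est*n) + sqrt(0.2)*randn(1,N);
Xs = fftshift(fft(xn));                                    % centred spectrum
f  = linspace(-0.5, 0.5, N);                               % discrete frequency axis
bw = 4/N;                                                  % half-width of the pass band (a design choice)
mask = (abs(f - f_est) <= bw) | (abs(f + f_est) <= bw);    % keep both the +f and -f bands
Xf = Xs .* mask;                                           % ideal band pass: everything else set to zero

The project's bfilt function in Section 7 implements the same idea with explicit negative-band (nl, nh) and positive-band (pl, ph) edges.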

Figure 2: An ideal band pass filter

2.5 DFT of noisy sinusoid after filtering

Figure 3: DFT of noisy sinusoid after filtering

From the above figure, we observe that the filtered spectrum contains components only near the signal frequency; the band pass filter sets the other frequency components to zero.

2.6 Inverse transformation into time domain:

This frequency domain signal is then reset with its zero back at the left end using the MATLAB function ifftshift, and the inverse discrete Fourier transform is then computed to obtain a signal with much less noise.

2.7 Smoothing the estimated signal:

We can remove some of the remaining error in the estimated sinusoid by smoothing. Several methods are available to smooth a corrupted signal. Our signal was corrupted by white noise, and LOESS smoothing gives a good approximation for a signal corrupted by white noise, so in this project we have used the LOESS method to smooth the estimated signal. LOESS performs local regression using weighted linear least squares and a 2nd degree polynomial model.
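A minimal sketch of this step, assuming Xf is the filtered, centred spectrum from the previous sketch (the smooth function belongs to the Curve Fitting Toolbox; a simple moving average could be substituted if that toolbox is unavailable):

% Minimal sketch: return to the time domain and smooth the result.
x_rec = real(ifft(ifftshift(Xf)));   % undo the fftshift, then inverse DFT
x_smooth = smooth(x_rec, 'loess');   % LOESS: local weighted least squares, 2nd degree polynomial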

3. PHASE ESTIMATION
There are several methods to estimate the phase, such as the cross-correlation method and the zero-crossing method. We use the zero-crossing method. In our case the signal is corrupted by Additive White Gaussian Noise (AWGN), which contains all frequencies from 0 to π, so the original signal is affected at every frequency and the amplitude of the discrete signal changes at every point. This results in a phase change between the original signal and the estimated signal. If we have prior information about the frequency (here this means the exact frequency, not the estimated frequency), then we can calculate the phase with the help of the zero-crossings of the signal. In case 4 the receiver does not know the signal frequency, so we first calculated the frequency and observed that it was not the exact signal frequency; computing the phase directly from it would therefore introduce more error into the phase estimation. So we modify the time period at every zero-crossing; the formula used to find the time period is discussed later. We explain the zero-crossing algorithm with the help of the following figure.

Figure 4: Relation between two consecutive zero-crossings of the discrete signal and the time period of the continuous signal

From the continuous sinusoid plot above, we observe that its time period is 12.5 units: a change of 12.5 units in the horizontal direction corresponds to a phase change of 2π radians. So a change of 1 unit corresponds to a phase change of 2π/12.5 radians and, in general, a change of X units corresponds to a phase change of (2π/T)·X radians (here T = 12.5 units). In particular, a change of T/2 units corresponds to a phase change of π radians, where T is the time period associated with the zero-crossing. In other words, the distance between any two consecutive zero-crossings corresponds to a phase change of π radians, and this property applies to discrete sinusoids as well.

3.1 Procedure to find out the phase:

In our problem the phase is a random variable with a uniform distribution in the interval [0, π], but as mentioned earlier the phase changes due to the Additive White Gaussian Noise (we can observe the same in Figure 3). So the phase of the estimated signal can go beyond the interval [0, π], and we therefore consider the phase to lie in the interval [0, 2π]. An exhaustive search for the correct phase from 0 to 2π would be highly time consuming. Hence, we first split the phase into two parts: 0 to π and π to 2π.

Case 1: The sinusoid starts in the positive half-cycle (positive phase), so the phase lies between 0 and π.
Case 2: The sinusoid starts in the negative half-cycle (negative phase), so the phase lies between π and 2π.

Case 1: Let the first zero-crossing take place at a distance d1 from the origin (d1 can be fractional; the procedure to find d1 is explained later).

Now the phase of the sinusoid from the first zero crossing can be stated as = * , Here T is time period for continuous sinusoid because in

discrete sinusoid more than two zero crossing can take place. Let the second zero crossing takes place at Phase of the sinusoid from the second zero crossing can be stated as = 2 * ,

Similarly we calculated up to last zero-crossing of the signal and took average of all phase. By this, we get better approximation of phase. Case 2: We have similar process to calculate phase as case 1 except extra radian phase. i.e. = + . ):

Formula to find out the distance between the origin and the i-th zero-crossing (di):

di = (number of samples before the i-th zero-crossing) + x[m] / (x[m] − x[m+1])

where x[m] and x[m+1] are the sample values just before and just after the sign change (linear interpolation of the zero-crossing).

Formulas to find out the time period associated with each zero-crossing (Ti):

Time period for the first zero-crossing:        T1 = 1 / (estimated frequency)
Time period for the second zero-crossing:       T2 = 2·(d2 − d1)
Time period for the j-th (j ≥ 3) zero-crossing: Tj = 2·(dj − d1)/(j − 1)

where d1 and d2 are the distances of the first and second zero-crossings from the origin respectively, and dj is the distance of the j-th zero-crossing from the origin.
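The formulas above can be collected into a compact estimator. The following is a minimal sketch, not the report's full implementation in Section 7 (which additionally applies a small correction factor to the first time period); the names zc_phase_sketch, y and f_est are placeholders:

% Minimal sketch of the zero-crossing phase estimator described above.
% y is the filtered real-valued signal, f_est the estimated discrete frequency.
function phi_avg = zc_phase_sketch(y, f_est)
    xtra = pi * (y(1) < 0);                       % case 2: signal starts in the negative half-cycle
    m    = find(y(1:end-1) .* y(2:end) < 0);      % last sample index before each sign change
    d    = m + y(m) ./ (y(m) - y(m+1));           % interpolated zero-crossing distances d_i
    phi  = zeros(size(d));
    for i = 1:numel(d)
        if i == 1
            T = 1 / f_est;                        % first crossing: period from the estimated frequency
        elseif i == 2
            T = 2 * (d(2) - d(1));                % second crossing: spacing between the two crossings
        else
            T = 2 * (d(i) - d(1)) / (i - 1);      % later crossings: average spacing so far
        end
        phi(i) = i*pi - (2*pi/T) * d(i) + xtra;   % phase estimate from the i-th crossing
    end
    phi_avg = mean(phi);                          % average over all zero-crossings
end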

4. Signal classification based on the information available with the receiver regarding x[n]:

Case 1: x[n] = sin(ω0 n); ω0 is known to both the transmitter and the receiver

Since the transmitter and receiver both know the sinusoid frequency and the phase is zero (i.e. φ = 0), we can directly pass the received signal through a band pass filter whose pass band lies near ω0. A second method is to directly construct a sinusoidal signal with the known frequency and amplitude.

Figure 5: Input sinusoid and estimated sinusoid when the receiver knows the signal frequency and the phase is zero.

Case 2: x[n] = sin(ω0 n); ω0 is unknown to the receiver, φ = 0, and ω0 lies between 0 and π


Since the receiver does not know the transmitted frequency, we first find the frequency by the previously described method and then filter the signal. Since we were not able to find the exact frequency, this leads to more estimation error than in case 1.

Figure 6: Input sinusoid and estimated sinusoid when the receiver does not know the signal frequency and the phase is zero.

Case 3: x[n] = sin(ω0 n + φ); ω0 is known and φ is unknown to the receiver

Figure 7: Input sinusoid and estimated sinusoid when the receiver knows the signal frequency and the phase is non-zero.

Case 4: x[n] = sin(ω0 n + φ); ω0 and φ are both unknown to the receiver


Figure 8: Input sinusoid and estimated sinusoid when the receiver does not know the signal frequency and the phase is non-zero.

From cases 3 and 4 we observe that the phase estimation error in case 4 is larger than in case 3 for the same frequency, phase and number of samples. The only difference is that in case 3 the receiver knew the sinusoid frequency whereas in case 4 it did not. From the estimated signal, we observed that the signal does not remain exactly periodic in discrete time and, as mentioned earlier in the theory, we need information about the frequency of the signal. So in case 4 we recalculated the time period at every zero-crossing (i.e. an adaptive method). We used the estimated frequency only for the first zero-crossing; for the remaining zero-crossings we used a new time period calculated from the zero-crossings themselves, because every two consecutive zero-crossings correspond to π radians of phase, and π radians corresponds to half the time period of the continuous sinusoid. Through this method we get an updated time period at every zero-crossing, and we obtained a better estimate than with the fixed estimated frequency.

5. ANALYSIS OF THE OBTAINED RESULTS

The analysis of the results is done on the following basic aspects:

- Change with increasing variance, while keeping the frequency constant.
- Change with increasing number of samples.
- Estimation of the period of x[n] by examining y[n].
- Average error in estimating the phase when the receiver knows the sinusoidal frequency.
- Average error in estimating the phase when the receiver does not know the sinusoidal frequency.
- Information conveyed (entropy) in the above-mentioned four cases.

5.1 INCREASING VARIANCE

Plot for Variance = 0.05

Figure 9: Transmitted signal, AWGN with N(0, 0.05), received signal and estimated signal

Plot for Variance = 0.2


Figure 10: Transmitted signal, AWGN with N(0, 0.2), received signal and estimated signal

Plot for Variance = 1

Figure 11: Transmitted signal, AWGN with N(0, 1), received signal and estimated signal


Plot for Variance = 1.5

Figure 12: Transmitted signal, AWGN with N(0, 1.5), received signal and estimated signal

From the above plots, we observe that as the variance increases the SNR decreases, so we get more deviation in the reconstructed sinusoidal signal. This can be attributed to the following reason:

The ideal band pass filter removes noise far from the frequency of the sinusoid, but it does not remove the noise present near the frequency of the sinusoid. This means some noise remains in the signal even after it passes through the band pass filter.

5.2 INCREASING NUMBER OF SAMPLES


Estimated frequency (in rad/sec) for variance = 0.05 i.e. SNR = 10 dB

Table 1: Estimated frequency and estimation error for different frequencies and fixed variance = 0.05

Estimated frequency (in rad/sec) for variance = 0.2, i.e. SNR = 3.98 dB

Table 2: Estimated frequency and estimation error for different frequencies and fixed variance = 0.2

Estimated frequency (rad/sec) for variance = 1, i.e. SNR = -3.01 dB


Table 3: Estimated frequency and estimation error for different frequencies and fixed variance = 1.0

Estimated frequency (rad/sec) for variance = 1.5, i.e. SNR = -4.77 dB

Table 4: Estimated frequency and estimation error for different frequencies and fixed variance = 1.5

The results show that when we increase the number of samples, we are able to estimate the frequency more accurately. This is because of the following:

When the number of samples is increased, the estimate of the frequency is much more accurate, as can be seen from the results. The algorithm locks onto the correct frequency because fewer deviating frequencies are available.


In case 512 samples is too many and 64 samples is too inaccurate, 256 can be chosen as an intermediate value with reasonable accuracy and computational time.

Hence, the accuracy of the estimated frequency depends on the number of samples and only very weakly on the SNR (it is practically independent of the SNR).

5.3 Estimation of the period of x[n] by examining y[n]:

By examining y[n] directly, we are not able to determine the period of x[n] precisely in every case, especially when the SNR is negative. But if we take the average distance between every two consecutive zero-crossings of the signal, then we are able to determine the period of x[n] much more precisely.
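A minimal sketch of this averaging idea, assuming y holds the observed (or band-pass filtered) samples:

% Minimal sketch: estimate the period of x[n] from the noisy observation y[n]
% as twice the average spacing between consecutive zero-crossings.
m  = find(y(1:end-1) .* y(2:end) < 0);   % samples just before each sign change
zc = m + y(m) ./ (y(m) - y(m+1));        % interpolated zero-crossing positions
T_est = 2 * mean(diff(zc));              % consecutive crossings are half a period apart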

5.4 Average error in estimating the phase while receiver knows sinusoidal frequency:
In the phase estimation process, while keeping the other variables constant (e.g. frequency, number of samples, etc.), we get a different phase estimate for the same input phase every time the experiment is repeated. This is unlike the frequency estimation process, where repeating the experiment with the other variables kept constant gives the same estimated frequency. So, to get a more accurate phase, we took the average over five such experiments.

Cause of different estimated phases for the same input phase: To understand this, we take 3 samples of Additive White Gaussian Noise (AWGN) N(0, 0.2) (i.e. in our case this results in SNR = 10).

Sample: A


Sample: B

Sample: C

From the above three Additive White Gaussian Noise (AWGN) samples, we observe that they do not have the same amplitude at the same index. For example, at index one, sample A has amplitude 0.22, sample B has amplitude -0.1 and sample C has amplitude 0.65. Because of this, the receiver receives a different noisy sinusoid each time, even if every parameter such as frequency, phase and number of samples is kept constant. Since our algorithm is based on the amplitude and index values of the signal, these amplitude differences cause errors in the estimated phase.

When the signal frequency fo is 0.08 rad/sec


Table 5: Actual phase, average estimated phase and average estimation error with fo = 0.08 rad/sec, with fo known to the receiver.

Note: Here "average" means that we estimated the phase for 5 samples with the same parameters (frequency, input phase and number of samples) and took the average of them.
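The averaging over five noise realisations can be sketched as follows; the test values fo, phi_true and the helper zc_phase_sketch from Section 3 are assumptions, and awgn requires the Communications Toolbox (sqrt(var)*randn could be substituted):

% Minimal sketch: average the phase estimate over several independent AWGN realisations.
N = 256;  n = 1:N;  fo = 0.08;  phi_true = pi/4;   % assumed test values
runs = 5;  phi_hat = zeros(1, runs);
for r = 1:runs
    y = awgn(sin(2*pi*fo*n + phi_true), 10);       % a new noise sample each run (SNR = 10 dB)
    phi_hat(r) = zc_phase_sketch(y, fo);           % phase from the zero-crossing method
end
phi_avg = mean(phi_hat);                           % averaged phase estimate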

5.5 Average error in estimating the phase while receiver does not know sinusoidal frequency:
When signal frequency fo is 0.08 rad/sec


Table 6: Actual phase, average estimated phase and average estimation error with fo = 0.08 rad/sec, with fo unknown to the receiver.

When the signal frequency fo is 0.20 rad/sec

Table 7: Actual phase, average estimated phase and average estimation error with fo = 0.20 rad/sec, with fo unknown to the receiver.


Observation in case 3 and case 4:

For the same signal frequency, we observed that the phase estimation error in case 4 is larger than in case 3 for all variances. The reason is simple: in case 3 the receiver knew the signal frequency, which leads to better phase estimation, whereas in case 4 the receiver did not know the signal frequency, so we first had to estimate the frequency and then proceed to phase estimation. In other words, the accuracy of the phase estimation process depends on the accuracy of the estimated frequency. From Tables 6 and 7 we also observed that a higher signal frequency leads to more error in estimating the phase: a higher frequency means a smaller time period, so the distance between two consecutive zero-crossings of the signal is reduced, which leads to larger phase estimation error.

5.6 Information conveyed (Entropy) in above mention four cases:


The information received is inversely proportional to the estimation error: more error means less information, and vice versa. Entropy (or uncertainty) is inversely proportional to information, so entropy is directly proportional to the estimation error, i.e. more error leads to more entropy and vice versa. The phase estimation process requires information about the frequency, so it accumulates more estimation error if we have not estimated the frequency correctly. From our experiments we observed that the error in case 2 is more than in case 1, and the error in case 4 is more than in case 3. So we can say that:
i. We received more information in case 1 than in case 2; in other words, case 2 has more entropy than case 1.
ii. We received more information in case 3 than in case 4; in other words, case 4 has more entropy than case 3.

But our main question is how to relate all four cases in terms of information or entropy. From our experiments we also observed that the uncertainty in the estimated phase is greater than that in the estimated frequency; that is, the entropy of the phase estimation process is higher than that of the frequency estimation process. So we can conclude the following:
i. Information received in case 1 > information received in case 2 > information received in case 3 > information received in case 4.
ii. Entropy (or uncertainty) in case 1 < entropy in case 2 < entropy in case 3 < entropy in case 4.

SUBTASK

6. M1-T1-A Period of a discrete sinusoid x[n]

A discrete-time sinusoid can be expressed as

x[n] = A cos(ωn + φ),  -∞ < n < ∞

where n is an integer.

Properties:
- A discrete-time sinusoid is periodic only if its frequency f is a rational number.
- Discrete-time sinusoids whose frequencies are separated by an integer multiple of 2π are identical.
- The highest rate of oscillation in a discrete-time sinusoid is attained when ω = π (or ω = -π) or, equivalently, f = 0.5 (or f = -0.5).

Period for different values of ω:
A. ω = π/5;  N = 2π/ω = 10
B. ω = 3π/5; N = 2π/ω = 10/3
C. ω = π;    N = 2π/ω = 2

where N is the time period of the discrete sinusoid and is a rational number.

D. ω = 2; N = 2π/ω = π. Here N is not a rational number, so for ω = 2 the discrete sinusoid is not periodic.
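A small MATLAB check of this rationality condition is sketched below; discrete_period_sketch is a hypothetical helper, and since irrationality cannot be decided exactly in floating point, the denominator bound of 100 is an arbitrary heuristic:

% Minimal sketch, following the text: N = 2*pi/w is the candidate period, and the
% sinusoid is treated as periodic only when N is (approximately) a small rational number.
function N = discrete_period_sketch(w)
    N = 2*pi / w;              % candidate period, as used in the text
    [~, q] = rat(N, 1e-6);     % best rational approximation p/q within 1e-6
    if q > 100                 % only a large denominator fits, e.g. w = 2 gives N = pi
        N = Inf;               % treat the sinusoid as aperiodic
    end
end

For example, discrete_period_sketch(pi/5) returns 10, discrete_period_sketch(3*pi/5) returns 10/3, and discrete_period_sketch(2) returns Inf.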

Estimation of the time period through MATLAB using the stem() function:

A. ω = π/5

Discrete plot for given frequency:


From the above plot, we observe that the signal repeats after N = 10, so we can say that the time period for this frequency is 10, i.e. N = 10.

B. ω = 3π/5

Discrete plot for given frequency:

From the above plot, we observed that the signal completes 18 cycles between 0 and 600. Time period N = (600 * 0.1)/18 = 10/3.

C. ω = π

Discrete plot for given frequency:


From the above plot, we observe that the signal completes 5 cycles between 0 and 100. Time period N = (100 * 0.1)/5 = 2.

D. ω = 2

Discrete plot for given frequency:

In the above plot, we observed that the first zero amplitude of the sinusoid occurs near zero, but we are not able to find an exact second zero on the plot, so we conclude that the signal is aperiodic.
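The stem-based inspection used in this subsection can be reproduced with a short sketch (the plotted range 0:60 is chosen only for illustration):

% Minimal sketch: plot each sinusoid with stem() and look for the sample index
% after which the pattern repeats.
n = 0:60;
subplot(2,1,1); stem(n, cos((pi/5)*n));  title('w = pi/5: repeats every 10 samples');
subplot(2,1,2); stem(n, cos(2*n));       title('w = 2: no exact repetition visible');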


7. Codes for phase and frequency estimation:


%Codes for frequency Estimation
tax_length=256; t = 1:tax_length;

tin = 1:10000;
% A random frequency is generated
fo = (rand(1))/2;   % It limits the frequency between [0 0.5]
disp(sprintf('Supplied Frequency= %g', fo));
% The sinusoid is generated
x = sin(2*pi*fo*t);
xn = awgn(x,4.77);  % AWGN is added to the sinusoid

% The frequency axis is generated
f = linspace(-0.5,0.5,length(t));
% The fourier transform of the sinusoid is computed
xnfft = fft(xn);
recons = ifft(xnfft);
xnfft1 = fftshift(xnfft);
xnfft1 = abs(xnfft1);
% Peak Identification of the FFT is done and the index with the maximum
% value is found
peaks = pid(xnfft1);
m_index = mpeak(xnfft1,peaks);
%stem(m_index);
% The index is then translated to the respective value of the frequency
freq = -0.5 + m_index/length(t);
mf = abs(freq);
disp(sprintf('Estimated Frequency= %g', mf));
% Now, the bandpass filtering is done to obtain the filtered sinusoid
nl = -mf - 5/length(t);
nh = -mf + 4/length(t);
pl =  mf - 4/length(t);
ph =  mf + 5/length(t);
xbp = bfilt(xn,nl,nh,pl,ph);
bpfft = fft(xbp);
bpfft1 = fftshift(bpfft);
bpfft1 = abs(bpfft1);

subplot(4,1,1); stem(x);
xlabel('x[n], Input sinusoid')
ylabel('Amplitude')
subplot(4,1,2); stem(xn-x);
xlabel('w[n], AWGN, mean=0')


ylabel('Amplitude')
subplot(4,1,3); stem(xn);
xlabel('y[n], Noisy sinusoid')
ylabel('Amplitude')
subplot(4,1,4); stem(ifftshift(xbp));
xlabel('x~[n], Reconstructed Signal')
ylabel('Amplitude')

% Plot of FFT before and after filtering
%subplot(2,1,1); stem(xnfft1);
%xlabel('DFT of Noisy sinusoid')
%subplot(2,1,2); stem(bpfft1);
%xlabel('DFT of Noisy sinusoid after filtering')
%-----------PHASE ESTIMATION-------------------
if xbp(1) >= 0
    check = 1;
    xtra_ph = 0;
else
    check = 2;
    xtra_ph = pi;
end
cur_pos = 1;
reset_vl = 0;
extra_ph = 0;
pse_ctr = 1:100;
%tot_ph=1;
ph_ctr = 1;
sum_phase = 0;
tp_curnt = 0;
fst_zc = 0;
sec_zc = 0;
tp_act = 0;
while (cur_pos < tax_length)
    if check == 1
        if xbp(cur_pos) < 0
            % extra_phase gives the fractional position of the zero-crossing
            extra_phase = (xbp(cur_pos-1)) / (xbp(cur_pos-1) - xbp(cur_pos));
            if ph_ctr == 1
                tp_curnt = 1 / (mf * 1.02);
                fst_zc = (cur_pos - 1) + extra_phase;
                phase(ph_ctr) = ((ph_ctr) * pi) - (((cur_pos - 1) + extra_phase) * ((pi * 2)/(tp_curnt))) + xtra_ph;
            elseif ph_ctr == 2
                sec_zc = (cur_pos - 1) + extra_phase;
                tp_curnt = 2 * (sec_zc - fst_zc);
                phase(ph_ctr) = ((ph_ctr) * pi) - (((cur_pos - 1) + extra_phase) * ((pi * 2)/(tp_curnt))) + xtra_ph;
            else
                tp_curnt = (cur_pos - 1) - fst_zc + extra_phase;
                tp_curnt = (tp_curnt/(ph_ctr - 1)) * 2;
                phase(ph_ctr) = ((ph_ctr) * pi) - (((cur_pos - 1) + extra_phase) * ((pi * 2)/(tp_curnt))) + xtra_ph;
            end
            check = 2;   % It checks every zero cross-over
            reset_vl = 0;
            sum_phase = (phase(ph_ctr)) + sum_phase;
            %disp(sprintf('Estimated Phase through check 1= %g, ph_ctr = %g, timeP = %g', phase(ph_ctr), ph_ctr, (tp_curnt)));
            ph_ctr = ph_ctr + 1;
            continue;
        else
            reset_vl = reset_vl + 1;
            cur_pos = cur_pos + 1;
        end
    end
    if check == 2
        if xbp(cur_pos) >= 0
            % extra_phase gives the fractional position of the zero-crossing
            extra_phase = (xbp(cur_pos-1)) / (xbp(cur_pos-1) - xbp(cur_pos));
            if ph_ctr == 1
                tp_curnt = 1 / (mf * 1.02);
                fst_zc = (cur_pos - 1) + extra_phase;
                phase(ph_ctr) = ((ph_ctr) * pi) - (((cur_pos - 1) + extra_phase) * ((pi * 2)/(tp_curnt))) + xtra_ph;
            elseif ph_ctr == 2
                sec_zc = (cur_pos - 1) + extra_phase;
                tp_curnt = 2 * (sec_zc - fst_zc);
                phase(ph_ctr) = ((ph_ctr) * pi) - (((cur_pos - 1) + extra_phase) * ((pi * 2)/(tp_curnt))) + xtra_ph;
            else
                tp_curnt = (cur_pos - 1) - fst_zc + extra_phase;
                tp_curnt = (tp_curnt/(ph_ctr - 1)) * 2;
                phase(ph_ctr) = ((ph_ctr) * pi) - (((cur_pos - 1) + extra_phase) * ((pi * 2)/(tp_curnt))) + xtra_ph;
            end
            check = 1;   % It checks every zero cross-over
            reset_vl = 0;
            sum_phase = (phase(ph_ctr)) + sum_phase;
            %disp(sprintf('Estimated Phase through check 2= %g, ph_ctr = %g', phase(ph_ctr), ph_ctr));
            ph_ctr = ph_ctr + 1;
            continue;
        else
            reset_vl = reset_vl + 1;
            cur_pos = cur_pos + 1;
        end
    end
end
total_phase = sum_phase / (ph_ctr - 1);
disp(sprintf('input Phase = %g, Final Estimated Phase = %g, SNR= 10', phi, total_phase));
%------Phase Estimation process end ---------------

%Checks for increasing and decreasing trends of the plot to decide the peak
%positions
function [ peak2 ] = pid( plot1 )
    [length breadth] = size(plot1);
    k = 1;
    for i = 2:breadth
        if (plot1(i) > plot1(i-1))
            peak1(k) = plot1(i);
            k = k + 1;
        else
            peak1(k) = 0;
            k = k + 1;
        end
    end
    peak2(1) = 1;
    [L B] = size(peak1);
    m = 2;
    for i = 2:B
        if peak1(i) ~= 0 && peak1(i-1) == 0
            peak2(m) = i;
            m = m + 1;
        end
    end
    peak2(m) = breadth;
end

%It scans the different identified peaks of the given array and identifies
%the highest peak
function [ max_val ] = mpeak( input_var, peak_array )
    max_temp = -999999;
    for i = 1:(length(peak_array)-1)
        for j = peak_array(i):peak_array(i+1)
            if max(input_var(peak_array(i):peak_array(i+1))) > max_temp
                max_temp = max(input_var(peak_array(i):peak_array(i+1)));
                max_peak = [peak_array(i) peak_array(i+1)];
            end
        end
    end
    max_temp2 = -999999;
    for i = max_peak(1):max_peak(2)
        if input_var(i) > max_temp2
            max_temp2 = input_var(i);
            max_val = i;
        end
    end
end

%Band Pass Filter Process
function [ output_sig ] = bfilt( input_sig, nl, nh, pl, ph )
    % The FFT of the input signal is taken and is fftshifted
    ffti = fft(input_sig);
    ffti = fftshift(ffti);
    ffti = smooth(ffti,'loess');   %For smoothing the noisy signal.
    % The frequency scale is generated
    f = linspace(-0.5,0.5,length(input_sig));
    % The Band-Pass filter is applied to the FFT of the input signal
    % (the elseif branch keeps the positive-frequency band as well)
    for i = 1:length(ffti)
        if f(i) < nh && f(i) > nl
            fftibp(i) = ffti(i);
        elseif f(i) > pl && f(i) < ph
            fftibp(i) = ffti(i);
        else
            fftibp(i) = 0;
        end
    end
    % The output is defined
    output_sig = ifftshift(fftibp);
    output_sig = ifft(output_sig);
    for i = 1:length(output_sig)
        if real(output_sig(i)) >= 0
            output_sig(i) = abs(output_sig(i));
        else
            output_sig(i) = -abs(output_sig(i));
        end
    end
end

