The basic LMS adaptive algorithm

One of the most successful adaptive algorithms is the LMS algorithm developed by Widrow and his coworkers (Widrow et al., 1975a). Instead of computing W_opt in one go as suggested by Equation 10.18, in the LMS the coefficients are adjusted from sample to sample in such a way as to minimize the MSE. This amounts to descending along the surface of Figure 10.6 towards its bottom. The LMS is based on the steepest descent algorithm where the weight vector is updated from sample to sample as follows:

    W_{k+1} = W_k - µ∇_k                                    (10.19)

where W_k and ∇_k are the weight and the true gradient vectors, respectively, at the kth sampling instant, and µ controls the stability and rate of convergence. The steepest descent algorithm in Equation 10.19 still requires knowledge of R and P, since ∇_k is obtained by evaluating Equation 10.17. The LMS algorithm is a practical method of obtaining estimates of the filter weights W_k in real time, without the matrix inversion in Equation 10.18 or the direct computation of the autocorrelation and cross-correlation. The Widrow-Hopf LMS algorithm for updating the weights from sample to sample is given by

    W_{k+1} = W_k + 2µe_k X_k

where e_k is the error between the desired response and the filter output, and X_k is the vector of the N most recent input samples, at the kth sampling instant.

Clearly, the LMS algorithm above does not require prior knowledge of the signal statistics (that is, the correlations R and P), but instead uses their instantaneous estimates (see Example 10.3). The weights obtained by the LMS algorithm are only estimates, but these estimates improve gradually with time as the weights are adjusted and the filter learns the characteristics of the signals. Eventually, the weights converge. The condition for convergence is:

    0 < µ < 1/λ_max

where λ_max is the maximum eigenvalue of the input data covariance matrix. In practice, W_k never reaches the theoretical optimum (the Wiener solution), but fluctuates about it (see Figure 10.7).
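As a sketch of how this bound might be checked numerically, the input autocorrelation matrix R can be estimated from the data and its largest eigenvalue used to bound µ. The input signal and filter length below are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Illustrative stationary input: white noise passed through a short FIR
# coloring filter (assumed signal, for demonstration only).
rng = np.random.default_rng(0)
x = np.convolve(rng.standard_normal(10000), [1.0, 0.5], mode="same")

N = 4  # filter length (illustrative)

# Estimate the N x N input autocorrelation matrix R from the data.
r = np.array([np.mean(x[: len(x) - i] * x[i:]) for i in range(N)])
R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])

lam_max = np.linalg.eigvalsh(R).max()
mu_upper = 1.0 / lam_max  # step sizes 0 < mu < 1/lambda_max keep the weights stable

print(f"lambda_max = {lam_max:.3f}, so choose 0 < mu < {mu_upper:.3f}")
```

In practice µ is usually chosen well below this upper bound, trading convergence speed for smaller fluctuation of the weights about the Wiener solution.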

Implementation of the basic LMS algorithm

The computational procedure for the LMS algorithm is summarized below.

  1. Initially, set each weight W_k(i), i = 0, 1, ..., N - 1, to an arbitrary fixed value, such as 0. For each subsequent sampling instant, k = 1, 2, ..., carry out steps (2) to (4) below:
  2. Compute the filter output:  ŷ_k = Σ_{i=0}^{N-1} W_k(i) x_{k-i}
  3. Compute the error estimate:  e_k = y_k - ŷ_k, where y_k is the desired response at instant k
  4. Update the filter weights:  W_{k+1}(i) = W_k(i) + 2µe_k x_{k-i},  i = 0, 1, ..., N - 1
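The steps above can be sketched in Python (NumPy). The system-identification setup, the "unknown" system h, the filter length, and the step size are all illustrative assumptions chosen for the demonstration, not values from the text:

```python
import numpy as np

def lms(x, d, N, mu):
    """Basic Widrow-Hopf LMS: adapt N weights so the filter output tracks d."""
    w = np.zeros(N)                       # step 1: arbitrary initial weights
    e_hist = np.empty(len(x))
    for k in range(len(x)):
        # N most recent input samples (zeros before the start of the signal)
        x_k = np.array([x[k - i] if k - i >= 0 else 0.0 for i in range(N)])
        y_k = w @ x_k                     # step 2: filter output
        e_k = d[k] - y_k                  # step 3: error estimate
        w = w + 2 * mu * e_k * x_k        # step 4: weight update
        e_hist[k] = e_k
    return w, e_hist

# Illustrative use: identify an unknown 3-tap FIR system from noisy observations.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.array([0.8, -0.4, 0.2])            # "unknown" system (assumed for the demo)
d = np.convolve(x, h)[: len(x)] + 0.01 * rng.standard_normal(len(x))

w, e = lms(x, d, N=3, mu=0.01)
print("estimated weights:", np.round(w, 2))   # should approach [0.8, -0.4, 0.2]
```

Note that the weights only approach the true system response: as discussed above, they fluctuate about the optimum, with the size of the fluctuation controlled by µ.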



The simplicity of the LMS algorithm and its ease of implementation, evident from the steps above, make it the algorithm of first choice in many real-time systems. The LMS algorithm requires approximately 2N + 1 multiplications and 2N + 1 additions for each new set of input and output samples. Most signal processors are suited to the mainly multiply-accumulate arithmetic operations involved, making a direct implementation of the LMS algorithm attractive.

