Encuentro Iberoamericano sobre Modelos Estocásticos y Aplicaciones Interdisciplinares
CIMAC, edition 07-08, La Laguna, Tenerife, 5-9 May 2008
Identification or model reduction of a linear system can be carried out in the time domain or in the frequency domain. In some cases it is advantageous to use frequency-domain data. However, maximum likelihood estimation in the frequency domain is notorious for its seemingly unavoidable numerical problems. In essence one has to solve a rational, hence nonlinear, approximation problem. This involves an iterative procedure in which linear systems of equations must be solved at each step. In the traditional setting, these linear equations have a matrix with a huge condition number. This means that during the computation most or all of the accuracy is lost, which may prevent convergence or at least form a barrier to attaining the desired accuracy.
These numerical problems are caused by the choice of the basis that is used to represent the rational model. If this basis is far from orthogonal with respect to the appropriate metric, then these numerical problems can indeed be expected, even for moderate-size problems.
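To make the effect of the basis concrete, the following sketch (with an assumed grid and degree, purely for illustration) compares the condition number of the monomial basis with that of the Chebyshev basis, which is close to orthogonal on [-1, 1]:

```python
import numpy as np

# Illustrative only: an assumed equispaced grid and degree.
x = np.linspace(-1.0, 1.0, 200)
d = 20
V_mon = np.vander(x, d + 1, increasing=True)       # monomial basis
V_cheb = np.polynomial.chebyshev.chebvander(x, d)  # near-orthogonal basis
print(np.linalg.cond(V_mon))   # grows exponentially with d
print(np.linalg.cond(V_cheb))  # stays modest
```

Both matrices span the same polynomial space; only the representation differs, yet the gap in conditioning is many orders of magnitude.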
The challenge is therefore to choose a representation of the rational model that can be computed in a numerically stable way. This means that the parameters to be computed in the linearized problem should appear as coefficients in an orthogonal or nearly orthogonal basis. Here orthogonality is with respect to a metric that is tuned to the approximation problem one wants to solve. To preserve the efficiency of maximum likelihood estimation, computing this appropriate basis should itself be possible in an efficient and numerically stable way; otherwise all the efficiency and accuracy could be lost in this step. This is a kind of preprocessing step, sometimes called a ``whitening'' step.
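One way to picture such a whitening step (a sketch under assumed data; the grid, degree, and weights are illustrative, not the lecture's method) is a QR factorization that replaces an ill-conditioned starting basis, evaluated on the frequency grid, by a basis orthonormal in the weighted discrete inner product:

```python
import numpy as np

# Assumed frequency grid on the imaginary axis and trivial weights;
# in practice the weights would come from the noise model.
w = np.linspace(0.1, 10.0, 100)
s = 1j * w
V = np.vander(s, 11, increasing=True)    # monomial basis: ill-conditioned
weights = np.ones_like(w)
W = np.sqrt(weights)[:, None]
Q, R = np.linalg.qr(W * V)               # columns of Q span the same space,
                                         # orthonormal in the weighted metric
print(np.linalg.cond(W * V))             # huge
print(np.linalg.cond(Q))                 # ~1
```

Parameters expressed as coefficients in the columns of Q then lead to well-conditioned linear systems in the linearized step.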
In this lecture it will be shown why some of the traditional methods may fail numerically, and why other methods may keep condition numbers well under control during the computation. We shall present two methods that achieve the latter.
In the first approach we linearize the problem and represent the rational model as a pair of polynomials, in other words a vector with two polynomial entries in the case of a scalar input-output system. The technique can, however, be generalized to multi-input multi-output systems at the expense of technical complications. The idea is to write this polynomial vector as a combination of polynomials that are ``as orthogonal as possible'' with respect to an appropriate metric associated with the approximation criterion. This metric may depend upon the approximation and hence may vary during the approximation process. Once the linearized problem is solved, its solution can be used as the starting point of the iterative procedure for the nonlinear problem. During this last stage the orthogonality of the basis cannot be maintained, but in most practical applications the starting point is so close to the desired solution that the deviation from the orthogonal basis is no longer a major numerical problem.
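A minimal sketch of such a linearized step is a Levi-type least-squares formulation on synthetic, noise-free data; the grid, the degrees, and the normalization D(0) = 1 below are all assumptions for illustration, not the lecture's method:

```python
import numpy as np

# Synthetic "measurements" of a rational transfer function.
w = np.linspace(0.5, 5.0, 80)
s = 1j * w
F = (s + 2) / (s**2 + 0.4 * s + 4)

# Linearized condition N(s_k) - F_k * D(s_k) = 0 with D(0) = 1,
# written as a linear least-squares problem in the coefficients.
d = 2
V = np.vander(s, d + 1, increasing=True)    # basis 1, s, s^2
A = np.hstack([V, -F[:, None] * V[:, 1:]])  # unknowns: coeffs of N and D
coef, *_ = np.linalg.lstsq(A, F, rcond=None)
n_coef = coef[:d + 1]
d_coef = np.concatenate([[1.0], coef[d + 1:]])
G = (V @ n_coef) / (V @ d_coef)             # fitted model on the grid
print(np.max(np.abs(G - F)))                # ~ machine precision here
```

With exact data of matching degree the linearized solution already reproduces the measurements; with noisy data it serves as the starting point for the nonlinear iteration described above.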
In the second approach we represent the rational model as a linear combination of rational basis functions, again chosen to be orthogonal with respect to the appropriate metric. Efficient recursive methods exist for computing this orthogonal basis. However, the recursion is still rather unstable for high model orders. Computing high-order models in a numerically stable way in this case will require a deeper investigation of the underlying linear algebra techniques.
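One classical family of such rational orthogonal functions is the Takenaka-Malmquist basis on the unit circle. The sketch below (with poles chosen arbitrarily for illustration) evaluates it by its standard product recursion and checks the discrete Gram matrix:

```python
import numpy as np

# Assumed poles inside the unit disk; in practice they would be
# tuned to the system being modeled.
a = np.array([0.5, -0.3 + 0.2j, 0.7j, 0.1])
z = np.exp(2j * np.pi * np.arange(2048) / 2048)   # unit-circle grid

B = []
blaschke = np.ones_like(z)
for ak in a:
    B.append(np.sqrt(1 - abs(ak) ** 2) / (1 - np.conj(ak) * z) * blaschke)
    blaschke = blaschke * (z - ak) / (1 - np.conj(ak) * z)
B = np.vstack(B)

# The discrete Gram matrix is (numerically) the identity: the basis
# stays perfectly conditioned as more functions are added.
G = B @ B.conj().T / z.size
print(np.max(np.abs(G - np.eye(len(a)))))   # ~ 0
```

Each new basis function costs only one extra Blaschke factor, which is what makes the recursion efficient; the stability issue mentioned above concerns such recursions at much higher orders.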