A Second Order Arnoldi Method with Stopping Criterion and Reduced Order Selection for Reducing Second Order Systems

This paper introduces a new algorithm for reducing large-dimensional second-order dynamic systems through the Second-Order Arnoldi Reduction (SOAR) procedure, equipped with a stopping criterion that selects an acceptable order for the reduced model based on a new coefficient called the Numerical Rank Performance Coefficient (NRPC), enabling efficient early termination and automatic selection of the reduced order. The key idea of the method is to compute the NRPC at each iteration of the SOAR algorithm, measuring how much information about the dynamic evolution of the original system is added by each new vector of the Krylov subspace generated by SOAR. When the dynamical tolerance condition is satisfied, the iterative procedure stops. Three benchmark models were used as numerical examples to verify the effectiveness and simplicity of the proposed algorithm.

Keywords—model order reduction; second-order systems; second-order Krylov subspaces; second-order Arnoldi procedure (SOAR); structure preserving; stability; projection; state space


INTRODUCTION
The large-scale second-order model is a well-known representation for the dynamic behavior of multivariable complex systems in various fields of science and engineering, such as electrical, mechanical, structural, electromagnetic, and micro-electromechanical systems (MEMS). Simulating some of these systems is computationally demanding because of their huge model order. To treat this problem, a reliable approximate model with reduced order is sought, one that can replace the original in simulation or control while preserving the second-order structure of the original system and its key properties, such as stability [1, 2].
This study examined the following large-scale system given in second-order form:

Σ_N:  M q̈(t) + D q̇(t) + K q(t) = b u(t),   y(t) = lᵀ q(t),   q(0) = q₀,  q̇(0) = q̇₀,   (1)

where M ∊ ℝ^{N×N} is an invertible matrix, q(0) = q₀ and q̇(0) = q̇₀ are the initial conditions, M, D, and K ∊ ℝ^{N×N} are respectively the mass, damping, and stiffness matrices, as known for mechanical models, q(t) ∊ ℝ^N is the state-variable vector, and b, l ∊ ℝ^N represent the input distribution and the output measurement vectors, respectively [1]. A Model Order Reduction (MOR) of the second-order system Σ_N using the Second-Order Arnoldi (SOAR) algorithm was investigated. The SOAR approach has attracted many researchers in recent years and has been used to solve the quadratic eigenvalue problem [3, 4], the MOR of second-order dynamical systems [1, 6], and problems in the analysis of structural acoustics [5, 6]. From a mathematical point of view, SOAR is a projection-based MOR technique that uses a second-order Krylov subspace to generate the projection matrix as follows: in the first step, a recurrence formula is defined by two coefficient matrices A and Z and one or two starting vectors; in the second step, an orthonormal basis of the projection subspace is generated from the second-order Krylov subspace defined by that recurrence. Applied to MOR, the SOAR method constructs a reduced second-order state-space system Σ_n of reduced order whose input-output dynamics closely match those of the full-order system, i.e., the basic characteristics of the original are preserved [1, 2].
This study proposes an automated method to generate the best reduced-order model for a large second-order system using the SOAR procedure, by defining a new criterion to auto-stop the iteration process in the SOAR procedure and auto-select an acceptable reduced order of the projection matrix. The efficiency and robustness of the proposed algorithm were validated by various well-chosen numerical examples of second-order models.

A. Problem Formulation
The analysis of the second-order system Σ_N leads to a very large, complex model. Thus, mathematical tools are needed to reduce the computational complexity and accelerate the modeling task by constructing a reduced-order system Σ_n. The proposed tools preserve the characteristic properties of the original system, such as the second-order form and stability [2, 10]. The reduced second-order system Σ_n is defined by:

Σ_n:  M_n z̈(t) + D_n ż(t) + K_n z(t) = b_n u(t),   y_n(t) = l_nᵀ z(t),   (2)

where z(t) represents the state vector with dimension n ≪ N, and the matrices M_n, D_n, and K_n ∊ ℝ^{n×n} are the mass, damping, and stiffness matrices respectively, as known in structural dynamics. The vectors b_n and l_n ∊ ℝ^n are the input distribution and output measurement vectors, respectively. Applying the Laplace transform, the transfer function of the original second-order model is given as:

h(s) = lᵀ (s² M + s D + K)^{-1} b.   (3)

The power series of the Laplace transform Q(s) = L(q(t)) can be expressed as:

Q(s) = Σ_{l=0}^∞ R_l s^l,   (4)

with R_l being the l-th order system moments. The above second-order system Σ_N can be rewritten in first-order form by taking the state vector x(t) = [q̇(t)ᵀ  q(t)ᵀ]ᵀ:

[M 0; 0 I] ẋ(t) = [−D −K; I 0] x(t) + [b; 0] u(t),   y(t) = [0  lᵀ] x(t).   (5)

Equation (5) can be expressed in matrix form as:

E ẋ(t) = A x(t) + B u(t),   y(t) = C x(t),   (6)

where B = [bᵀ 0]ᵀ and C = [0  lᵀ] are respectively the input and the output matrices of the first-order system representation. The transfer function h(s) can be rewritten as:

h(s) = C (s E − A)^{-1} B,   (7)

and the power series of the transfer function h(s) as:

h(s) = Σ_{l=0}^∞ m_l s^l,   m_l = lᵀ R_l.   (8)

The design of Krylov-subspace MOR techniques is based on moment matching: the principle is to match the moments of the original system Σ_N with the moments of the reduced system Σ_n [11-13].
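The moments in (4) and (8) can be computed without forming the first-order system, by expanding h(s) about s₀ = 0 with a vector recurrence (the same recurrence that defines the second-order Krylov subspace in the next section). The following sketch, with a small made-up system (M, D, K, b, l are illustrative, not the paper's benchmarks), computes the first moments this way and cross-checks them against direct evaluations of h(s):

```python
import numpy as np

# A small illustrative second-order system.
rng = np.random.default_rng(0)
N = 6
M = np.eye(N)
D = 0.1 * np.eye(N)
K = rng.standard_normal((N, N))
K = K @ K.T + N * np.eye(N)          # well-conditioned SPD stiffness
b = rng.standard_normal(N)
l = rng.standard_normal(N)

# Expanding h(s) = l^T (s^2 M + s D + K)^{-1} b = sum_j m_j s^j about
# s0 = 0 gives m_j = l^T r_j with the vector recurrence
#   r_0 = K^{-1} b,  r_1 = A r_0,  r_j = A r_{j-1} + Z r_{j-2},
# where A = -K^{-1} D and Z = -K^{-1} M.
A = -np.linalg.solve(K, D)
Z = -np.linalg.solve(K, M)
r = [np.linalg.solve(K, b)]
r.append(A @ r[0])
for j in range(2, 6):
    r.append(A @ r[j - 1] + Z @ r[j - 2])
moments = [l @ rj for rj in r]

def h(s):
    """Transfer function evaluated directly."""
    return l @ np.linalg.solve(s**2 * M + s * D + K, b)

# m_0 equals h(0); m_1 matches a central finite difference of h at 0.
assert np.isclose(moments[0], h(0.0))
eps = 1e-5
assert np.isclose((h(eps) - h(-eps)) / (2 * eps), moments[1],
                  rtol=1e-5, atol=1e-10)
```

The recurrence only requires linear solves with K, which is why moment matching scales to the large models considered here.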

B. SOAR Algorithm for Second-Order Systems
This section defines the second-order Krylov subspace used by the SOAR algorithm. The second-order Krylov subspace is defined as:

G_n(A, Z; w) = span{r_0, r_1, ..., r_{n−1}},   (9)

where:

r_0 = w,   r_1 = A r_0,   r_j = A r_{j−1} + Z r_{j−2}  for j ≥ 2,   (10)

where A, Z ∊ ℝ^{n×n} are square matrices called the constant matrices and w ∊ ℝ^{n×1} is a column vector called the starting vector. The sequence r_0, r_1, ..., r_{n−1} is known as the second-order Krylov basic blocks [1].
The G_n(A, Z; w) subspace defined in (9) is called an n-th second-order Krylov subspace [2, 10, 11]. There is a connection between the subspace G_n(A, Z; w) and the standard Krylov subspace K_n(A, w) defined in (11), as G_n(A, Z; w) can be considered a generalization of K_n(A, w).
K_n(A, w) = span{w, A w, A² w, ..., A^{n−1} w}.   (11)

In the case Z = 0, the second-order Krylov subspace G_n(A, Z; w) is equal to the standard Krylov subspace K_n(A, w). The procedure that applies the vector sequence {r_j} of the second-order Krylov subspace to obtain an orthonormal basis {q_1, ..., q_n} is called the SOAR (Second-Order ARnoldi) algorithm [2]. The following Lemma 1 can be used in the proof of Theorem 1 [10]:

• Lemma 1: Let T_n ∊ ℝ^{n×n} be an upper Hessenberg matrix. Then the k-th entry of T_n^j e_1 is zero for k = j + 2, ..., n and j = 0, 1, ..., n − 2.

• Theorem 1: Let Q_n = [q_1, q_2, ..., q_n] be the orthonormal matrix whose columns q_i, i = 1, ..., n, are generated by the SOAR procedure after the execution of n iterations. Then the j-th second-order Krylov vector r_j and the output of the SOAR procedure are related by [1]:

r_j = Q_n r̂_j,   j = 0, 1, ..., n − 1,

for some r̂_j ∊ ℝ^n; in particular, r_j ∊ colspan(Q_n) for j = 0, ..., n − 1. Theorem 2 provides a sufficient condition for the first n output moments of the reduced and the original systems to match.
• Theorem 2: Let Q_n be the projection matrix generated by the SOAR algorithm to reduce the order of the second-order model. If r_j ∊ colspan(Q_n) for j = 0, ..., n, then r_j = Q_n r̂_j [1]. This theorem helps to verify that the condition of output moment matching is maintained.
• Theorem 3: The first unmatched moment is obtained after the execution of the n-th iteration of the SOAR procedure, and the error Δm_{n+1} = m_{n+1} − m̂_{n+1} between the moment m_{n+1} of the original system and the moment m̂_{n+1} of the reduced system can be given by an analytical expression [1].
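The moment-matching property of Theorems 1 and 2 can be checked numerically. The sketch below (Python/NumPy; the small system and all names are illustrative) builds an orthonormal basis of the second-order Krylov subspace directly from its basic blocks via QR, which spans the same subspace a SOAR run would produce, projects the system in a structure-preserving way, and compares the first n moments of the original and reduced models:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 8, 4
M = np.eye(N)
D = 0.2 * np.eye(N)
K = rng.standard_normal((N, N))
K = K @ K.T + N * np.eye(N)          # well-conditioned SPD stiffness
b = rng.standard_normal(N)
l = rng.standard_normal(N)

def moments(M, D, K, b, l, count):
    """First `count` moments m_j = l^T r_j of h(s) about s0 = 0,
    with r_0 = K^{-1} b and r_j = A r_{j-1} + Z r_{j-2},
    A = -K^{-1} D, Z = -K^{-1} M."""
    A = -np.linalg.solve(K, D)
    Z = -np.linalg.solve(K, M)
    r = [np.linalg.solve(K, b)]
    if count > 1:
        r.append(A @ r[0])
    for j in range(2, count):
        r.append(A @ r[j - 1] + Z @ r[j - 2])
    return np.array([l @ rj for rj in r]), np.column_stack(r)

m_full, R = moments(M, D, K, b, l, n)

# Orthonormal basis of the n-th second-order Krylov subspace.
Q, _ = np.linalg.qr(R)

# One-sided, structure-preserving projection: the reduced model is
# again a second-order system of dimension n.
Mn, Dn, Kn = Q.T @ M @ Q, Q.T @ D @ Q, Q.T @ K @ Q
bn, ln = Q.T @ b, Q.T @ l

m_red, _ = moments(Mn, Dn, Kn, bn, ln, n)
assert np.allclose(m_full, m_red)    # first n moments match
```

The final assertion is exactly the statement of Theorem 2: since every r_j, j = 0, ..., n − 1, lies in colspan(Q), the projected system reproduces the corresponding output moments.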

C. SOAR Procedure with Deflation and Memory Saving
In the SOAR algorithm, each auxiliary vector p_n is closely tied to the corresponding q_n, and these byproduct vectors p_n are used throughout the iteration. To avoid explicitly storing and updating the p vectors, a new version of SOAR, shown in Figure 1, was presented in [10]; it reduces the memory requirements by almost half [15], exploiting the fact that p_1 = 0.

III. PROPOSED STOPPING CRITERION FOR SOAR

Finding a suitable order n for the reduced model that leads to a good approximation is an important component of order reduction. The question is when to stop an iterative order-reduction procedure [16]. The two traditional techniques to stop an iterative Krylov-subspace MOR approach are:

A. Finding the Zero Vector
Although it can be implemented automatically, the conventional approach of stopping when a zero vector is found (t_{j+1} = 0 in the SOAR procedure) is extremely inefficient. Its major drawback is that it adds a lot of redundancy to the transformation matrix, duplicating the same information about the dynamic behavior of the original model over and over.

B. Time Response Comparison and Manual Termination
In this method, a reduced-order model is generated at each iteration of the SOAR procedure, the time responses of the reduced and the original models are compared, and the process is terminated when the errors are acceptable. Before implementing and applying the new criterion for stopping and selecting the reduced order in the SOAR procedure, some fundamental definitions of matrix factorization should be presented.
• Definition 1: Let A ∊ ℂ^{n×m}, suppose rank(A) = r, and n ≤ m. Then there exist matrices U ∊ ℂ^{n×n}, V ∊ ℂ^{m×m}, and Σ ∊ ℝ^{n×m} such that:

A = U Σ V*,

where U and V are unitary and Σ = [Σ_r 0; 0 0], with Σ_r = diag(σ_1, ..., σ_r) and σ_1 ≥ ... ≥ σ_r > 0.
The columns of U and V are called the left and right singular vectors, respectively. The index r of the smallest nonzero singular value is called the theoretical rank of the matrix A.
• Definition 2: Let σ_i be the computed singular values of the matrix A ∊ ℂ^{n×m}, and let δ > 0 be a positive real number. The numerical δ-rank is defined as the number of singular values greater than δ; the numerical δ-rank is k if:

σ_1 ≥ ... ≥ σ_k > δ ≥ σ_{k+1} ≥ ... ≥ σ_{min(n,m)}.

With each iteration of the SOAR algorithm, the contribution of the new second-order Krylov vector, in terms of the knowledge about the model dynamics stored in it, decreases monotonically. Each newly generated vector r_i is therefore expected to contribute less to the numerical rank of the transformation matrix Q_n.
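Definition 2 translates directly into a few lines of code. A minimal sketch (Python/NumPy; the test matrix is constructed explicitly for illustration):

```python
import numpy as np

def numerical_rank(A, delta):
    """Numerical delta-rank (Definition 2): the number of singular
    values of A strictly greater than delta."""
    sigma = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(sigma > delta))

# A matrix with singular values 5, 1, and 1e-8: its theoretical rank
# is 3, but its numerical delta-rank is 2 for any tolerance delta
# between 1e-8 and 1.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = U @ np.diag([5.0, 1.0, 1e-8]) @ V.T
```

Here `numerical_rank(A, 1e-4)` returns 2, while lowering δ below 1e-8 recovers the theoretical rank 3; the choice of δ decides when a near-dependent Krylov vector stops counting toward the rank of Q_n.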
The suggested stopping criterion relies on estimating the progress of the numerical rank of the generated transformation matrix at each iteration of the SOAR procedure, just before the new normalized vector is added. To measure this progress, an indicator is assigned to each vector generated by the iterative process before it is fed to the normalization routine (line 12 in Algorithm 1). This indicator is called the NRPC and takes values in [0, 1]. A larger NRPC value for the candidate vector indicates an important contribution of that vector to the improvement of the numerical rank and to capturing the dynamic progression of the original system; the vector must hence be included in Q_n. The NRPC for the candidate vector r_i is given as the inverse of the sum of the singular values of the current transformation matrix Q_l, obtained by appending the new non-normalized vector r to the transformation matrix Q_n of the previous iteration:

NRPC = 1 / Σ_i σ_i(Q_l),   Q_l = [Q_n, r].   (17)

Therefore, the stopping criterion can be stated as:

• In each iteration, calculate the NRPC indicator using (17).
• The iteration can be halted as soon as NRPC < ε, where ε > 0 is a specified tolerance.
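The two steps above can be sketched as a SOAR loop with the NRPC test folded in. This is a minimal illustration in Python/NumPy, not the paper's Algorithm 2: variable names and the breakdown tolerance are assumptions, and deflation is not handled.

```python
import numpy as np

def soar_nrpc(A, Z, w, n_max, eps):
    """SOAR iteration with the NRPC-based stopping test.

    Builds an orthonormal basis Q of the second-order Krylov subspace
    G(A, Z; w). At each iteration the candidate vector t (before
    normalization) is appended to the current basis and the NRPC, the
    inverse of the sum of singular values of [Q, t], is computed; the
    loop stops once NRPC < eps.
    """
    N = len(w)
    Q = np.zeros((N, n_max))
    P = np.zeros((N, n_max))          # auxiliary p-vectors (p_1 = 0)
    Q[:, 0] = w / np.linalg.norm(w)
    for j in range(n_max - 1):
        t = A @ Q[:, j] + Z @ P[:, j]
        s = Q[:, j].copy()
        for i in range(j + 1):        # modified Gram-Schmidt
            h = Q[:, i] @ t
            t -= h * Q[:, i]
            s -= h * P[:, i]
        # NRPC of the candidate before normalization
        sv = np.linalg.svd(np.column_stack([Q[:, : j + 1], t]),
                           compute_uv=False)
        if 1.0 / sv.sum() < eps:      # stopping criterion: NRPC < eps
            return Q[:, : j + 1]
        nrm = np.linalg.norm(t)
        if nrm < 1e-12:               # breakdown: subspace exhausted
            return Q[:, : j + 1]
        Q[:, j + 1] = t / nrm
        P[:, j + 1] = s / nrm
    return Q
```

Because appending columns can only increase the sum of singular values, the NRPC decreases monotonically over the iterations, so the first iteration at which it drops below ε yields the selected reduced order.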
Algorithm 2 demonstrates how to integrate this stopping criterion into Algorithm 1 of SOAR.

IV. NUMERICAL APPLICATIONS

Numerical applications were conducted to demonstrate the efficiency and accuracy of the SOAR method with the proposed stopping criterion and the robustness of the terminating mechanism for the Krylov-based reduction method. Three practical engineering examples were studied: (i) a shaft on bearing supports with a damper originating from a Finite Element (FE) model [7], (ii) the butterfly gyroscope problem [8], and (iii) the FE model of a 3D cantilever Timoshenko beam [20]. Table I displays the sizes of these models and their corresponding reduced models, along with some parameter settings used in the simulation, such as the expansion point s_0 and the frequency range. The relative output errors between the original and the reduced systems and the related frequency responses [7, 8] were considered for comparison.

Figure 3 shows the evolution of the stopping criterion, the NRPC coefficient, which is the inverse of the sum of the singular values of the current transformation matrix Q_l, obtained by appending the new non-normalized vector r to the transformation matrix Q_n of the previous iteration. After the 20th iteration, the NRPC decreases slowly, so a good reduced order can be selected at around 20 and above. In other terms, an acceptably good approximation model can be obtained for NRPC values in [0, 0.5]. A fixed tolerance ε in [0, 0.5] was therefore defined as the condition to select a good reduced order and stop the SOAR procedure.

V. CONCLUSION

This paper introduced a new MOR approach with an efficient stopping criterion to find the suitable order of a reduced model, using a modified SOAR algorithm as a Krylov MOR approach with automatic selection of the reduced order.
Based on the obtained results, the proposed algorithm showed high efficiency and accuracy in terms of relative error against the original systems, while the key properties of the second-order form were preserved in the reduced model. Furthermore, its superiority over the conventional SOAR was demonstrated in terms of robustness, as a suitable, near-optimal reduced order was chosen systematically. The proposed approach worked very well for the three examples used in the numerical tests (the FE model of a shaft on bearing supports with a damper, the butterfly gyroscope model, and the FE model of a 3D cantilever Timoshenko beam). The key properties, such as the preservation of the second-order structure and stability, were guaranteed, together with the automatic selection of a significantly reduced order as the size of the reduced model in the numerical simulation results.