


High Order Explicit Two-Step Runge-Kutta Methods for Parallel Computers
H. Podhaisky, R. Weiner, J. Wensch, August 10, 1999
Martin-Luther-Universität Halle-Wittenberg, Inst. f. Num. Math.
Theodor-Lieser-Straße 5, D-06120 Halle, Germany
e-mail: {podhaisky,weiner,wensch}@mathematik.uni-halle.de

Abstract

In this paper we study a class of explicit pseudo two-step Runge-Kutta methods (EPTRK methods) with additional weights $v$. These methods are especially designed for parallel computers. We study $s$-stage methods with local stage order $s$ and local step order $s+2$ and derive a sufficient condition for global convergence order $s+2$ for fixed step sizes. Numerical experiments with 4- and 5-stage methods show the influence of this superconvergence condition. However, in general it is not possible to employ the newly introduced weights to improve the stability of high order methods. We show, for any given $s$-stage method with extended weights which fulfills the simplifying conditions $B(s)$ and $C(s-1)$, the existence of a reduced method with a simple weight vector which has the same linear stability behaviour and the same order.

Key words: Runge-Kutta methods, parallelism, two-step methods, superconvergence, linear stability

AMS(MOS) subject classification (1991): 65M12, 65M20

1 Introduction
For the numerical solution of systems of first-order ordinary differential equations (ODEs)

$$y' = f(t, y), \qquad y(t_0) = y_0, \qquad y, f \in \mathbb{R}^n, \tag{1}$$

Fei [5], Cong [2] and Cong et al. [3] have recently investigated a class of parallel explicit pseudo two-step Runge-Kutta methods (EPTRK methods). EPTRK methods compute an approximation $y_m \approx y(t_m)$ by the $s$-stage scheme

$$Y_m = \mathbb{1} \otimes y_m + h_m (A \otimes I)\, F(t_{m-1}\mathbb{1} + h_{m-1} c,\ Y_{m-1}), \tag{2a}$$
$$y_{m+1} = y_m + h_m (b^T \otimes I)\, F(t_m\mathbb{1} + h_m c,\ Y_m) \tag{2b}$$

with $s$ (external) stage approximations $Y_m \in \mathbb{R}^{ns}$, parameters $A = (a_{ij}) \in \mathbb{R}^{s \times s}$, $c, b \in \mathbb{R}^s$, $\mathbb{1} := (1, \dots, 1)^T$ and step size $h_m$. $F$ denotes the straightforward extension of $f$ to $\mathbb{R}^{ns}$, and $\otimes$ denotes the Kronecker tensor product. Notice that the EPTRK method (2) requires only one sequential function evaluation per step on a parallel computer with $s$ processing elements. Numerical

experiments with a variable step size implementation on a shared memory computer have shown that EPTRK methods perform well for non-stiff ([3]) and for mildly stiff ([9]) problems. A more general class of two-step RK methods was introduced by Jackiewicz et al. (see [8] and [1] and the references therein). Although Jackiewicz's methods are not constructed with respect to a parallel implementation, it is attractive to borrow the idea of introducing a new weight vector $v$ of $s$ parameters and to consider parallel EPTRK methods of the form

$$Y_m = \mathbb{1} \otimes y_m + h_m (A \otimes I)\, F(t_{m-1}\mathbb{1} + h_{m-1} c,\ Y_{m-1}), \tag{3a}$$
$$y_{m+1} = y_m + h_m (b^T \otimes I)\, F(t_m\mathbb{1} + h_m c,\ Y_m) + h_m (v^T \otimes I)\, F(t_{m-1}\mathbb{1} + h_{m-1} c,\ Y_{m-1}). \tag{3b}$$

The additional effort to compute $y_{m+1}$ in (3b) is small and the parallelization of the method is not affected. The aim of this paper is to study the effects of the new parameters $v$ with respect to order and stability.
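As an illustration, one step of scheme (3) can be sketched in a few lines (a minimal NumPy sketch; the helper name `eptrk_step` is ours, and the coefficients $A$, $b$, $c$, $v$ must of course come from an actual EPTRK method):

```python
import numpy as np

def eptrk_step(f, t_prev, t, h_prev, h, y, Y_prev, A, b, c, v):
    """One step of the explicit pseudo two-step RK scheme (3).

    Y_prev holds the s stage approximations of the previous step
    (shape (s, n)).  The stages (3a) use only data from the previous
    step, so the s evaluations of f can run in parallel.
    """
    s = len(c)
    # (3a): new external stages, built from the *previous* stage values
    F_prev = np.array([f(t_prev + c[j] * h_prev, Y_prev[j]) for j in range(s)])
    Y = y + h * (A @ F_prev)                       # shape (s, n)
    # (3b): step update; the extra weights v act on the old F-values
    F = np.array([f(t + c[j] * h, Y[j]) for j in range(s)])
    y_new = y + h * (b @ F) + h * (v @ F_prev)
    return y_new, Y
```

Only one sequential round of $f$-evaluations (those of the new stages) is needed per step, which is the point of the scheme on a machine with $s$ processing elements.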

2 Superconvergence
Definition 2.1. Let $\Delta_{m,0}$ and $\Delta_m$ denote the residuals of the integration scheme (3) obtained by substituting the exact solutions $y(t_m)$ and $Y(t_m) := (y(t_m + c_1 h_m), \dots, y(t_m + c_s h_m))$ of (1), i.e.
$$\Delta_{m,0} = Y(t_m) - \mathbb{1} \otimes y(t_m) - h_m (A \otimes I)\, Y'(t_{m-1}), \tag{4a}$$
$$\Delta_m = y(t_{m+1}) - y(t_m) - h_m (b^T \otimes I)\, Y'(t_m) - h_m (v^T \otimes I)\, Y'(t_{m-1}). \tag{4b}$$
An EPTRK method (3) is of local step order $p$ if $\Delta_m = O(h_m^{p+1})$ and of local stage order $q$ if $\Delta_{m,0} = O(h_m^{q+1})$.

A Taylor series expansion of $y(t)$ in (4) shows that an EPTRK method is of local stage order $q$ and local step order $p$ if the simplifying conditions $C(q)$ and $B(p)$ defined by
$$C(q): \qquad \Big(c,\ \frac{\sigma c^2}{2},\ \frac{\sigma^2 c^3}{3},\ \dots,\ \frac{\sigma^{q-1} c^q}{q}\Big) = A\,\big(\mathbb{1},\ c - \mathbb{1},\ (c - \mathbb{1})^2,\ \dots,\ (c - \mathbb{1})^{q-1}\big),$$
$$B(p): \qquad \Big(1,\ \frac{\sigma}{2},\ \frac{\sigma^2}{3},\ \dots,\ \frac{\sigma^{p-1}}{p}\Big) = b^T\big(\mathbb{1},\ \sigma c,\ \sigma^2 c^2,\ \dots,\ \sigma^{p-1} c^{p-1}\big) + v^T\big(\mathbb{1},\ c - \mathbb{1},\ (c - \mathbb{1})^2,\ \dots,\ (c - \mathbb{1})^{p-1}\big)$$
are satisfied. Here $\sigma = h_m/h_{m-1}$ denotes the step size ratio. By standard techniques it can be shown that a method satisfying $C(s)$ and $B(s+1)$ converges with order $p = s+1$, see e.g. [10] for the case $v = 0$. One can find methods which fulfill $C(s+1)$; however, the $c$-vector will then include large components and the error constants will be large. On the other hand, it is possible to satisfy $B(l)$ with $l > s+1$, but in general this will not increase the order of convergence. We will show that, for general $c$ with an additional condition on $b$ and $v$, the order of convergence is $p = s+2$. For simplicity we restrict ourselves here to the case of a constant step size.

Theorem 2.1 (Superconvergence of EPTRK methods). Let $C(s)$ and $B(s+2)$ be satisfied. Let further the condition
$$(b^T + v^T)\Big(\frac{c^{s+1}}{s+1} - A(c - \mathbb{1})^s\Big) = 0 \tag{5}$$
hold. Then with starting values of order $s+2$ the method will converge with order $p = s+2$.

Proof. Substituting the exact solution into (3) yields
$$Y(t_m) = \mathbb{1} \otimes y(t_m) + h (A \otimes I)\, F(t_{m-1}\mathbb{1} + h c,\ Y(t_{m-1})) + \Delta_{m,0}, \tag{6}$$
$$y(t_{m+1}) = y(t_m) + h (b^T \otimes I)\, F(t_m\mathbb{1} + h c,\ Y(t_m)) + h (v^T \otimes I)\, F(t_{m-1}\mathbb{1} + h c,\ Y(t_{m-1})) + \Delta_m.$$
With $C(s)$ and $B(s+2)$, Taylor expansion gives
$$\Delta_{m,0} = \frac{h^{s+1}}{s!}\Big(\frac{c^{s+1}}{s+1} - A(c - \mathbb{1})^s\Big) \otimes y^{(s+1)}(t_m) + O(h^{s+2}), \qquad \Delta_m = O(h^{s+3}).$$
Subtracting (3) from (6) yields a recursion for the global error
$$err_{m+1} = y(t_{m+1}) - y_{m+1}.$$
With the standard convergence result $Y(t_m) - Y_m = O(h^{s+2})$ for $C(s)$ and $B(s+1)$ we get, by the mean value theorem and expanding the Jacobians at $(t_m, y(t_m))$,
$$err_{m+1} = (I + O(h))\, err_m + O(h)\, err_{m-1} + \frac{h^{s+2}}{s!}\,(b^T + v^T)\Big(\frac{c^{s+1}}{s+1} - A(c - \mathbb{1})^s\Big) \otimes f_y(t_m, y(t_m))\, y^{(s+1)}(t_m) + O(h^{s+3}).$$
With (5) it follows that
$$\|err_{m+1}\| \le (1 + d_1 h)\|err_m\| + d_2 h \|err_{m-1}\| + d_3 h^{s+3}, \qquad d_1, d_2, d_3 > 0.$$
We can bound $\|err_m\|$ by the solution $r_m$ of
$$r_{m+1} = (1 + d_1 h) r_m + d_2 h\, r_{m-1} + d_3 h^{s+3}$$
with $r_0 = \|err_0\| = d_4 h^{s+2}$, $r_1 = \|err_1\| = d_5 h^{s+2}$, $d_4, d_5 > 0$, giving finally $err_m = O(h^{s+2})$.

3 Numerical Illustration of the Order Results

To discuss the impact of condition (5) for EPTRK methods we construct different sets of coefficients with 4 and 5 stages and apply the resulting methods to three (non-stiff) test problems.

Construction of 4-stage Methods

We start with $s = 4$ stages. For any choice of knots $c_i$ a method with $v = 0$ is uniquely determined by $B(4)$ and $C(4)$. The remaining 4 degrees of freedom can be used to satisfy $B(8)$ with the $c$-vector taken as Gaussian collocation points, as done for method gauss4 in Table 1. Here, $E_{s+1} := A(c - \mathbb{1})^s - c^{s+1}/(s+1)$ denotes the local stage error and $e_{s+1} := (b^T + v^T) E_{s+1}$ denotes the residual of condition (5) in Theorem 2.1. Since $e_{s+1}$ does not vanish, the global convergence order is only 5. Global order 6 can be achieved by satisfying $C(s)$, $B(s+2)$ and (5), either for given $c$ by computing $v$, or for $v = 0$ by a special choice of $c$. To attain global order 6 with the same points $c_i$ we choose $v$ as a solution of $e_{s+1} = 0$. But $B(6)$ depends on $v$ and has to be satisfied, too. With the help of Maple V we calculated a solution
$$v = [0.0,\ -0.006332901980013884,\ -0.319483842974888,\ 0.06964740132900621]$$
for this method vgauss4, see Table 1.

Although it is natural to guarantee condition (5) for arbitrarily chosen knots $c_i$ by introducing a suitable $v$-vector, it is not necessary. By choosing special points $c$ one can find EPTRK methods with $v = 0$ which satisfy (5) and $B(s+2)$, as we did for the method n4 with

$$c = [0.1493506562434243,\ 0.6535456428480576,\ 1.123,\ 1.6391116441727].$$
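For given knots the stage matrix $A$ of a $v = 0$ method is already fixed by $C(s)$ alone (constant step size, $\sigma = 1$): $A(c - \mathbb{1})^{k-1} = c^k/k$ for $k = 1, \dots, s$. A small NumPy sketch of this construction and of the stage error $E_{s+1}$ (function names are ours, not from the paper):

```python
import numpy as np

def stage_matrix_from_C(c):
    """Solve C(s) for constant step size: A (c-1)^{k-1} = c^k / k, k = 1..s."""
    s = len(c)
    V1 = np.vander(c - 1.0, s, increasing=True)      # columns (c-1)^{k-1}
    rhs = np.column_stack([c**k / k for k in range(1, s + 1)])
    return rhs @ np.linalg.inv(V1)                   # from A V1 = rhs

def stage_error(c, A):
    """Local stage error E_{s+1} = A (c-1)^s - c^{s+1}/(s+1) of Section 3."""
    s = len(c)
    return A @ (c - 1.0)**s - c**(s + 1) / (s + 1)
```

Given weights $b$ and $v$ from the $B$-conditions, the residual $e_{s+1} = (b^T + v^T) E_{s+1}$ of condition (5) can then be compared against the last column of Table 1.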

Construction of 5-stage Methods

The method cong5 was proposed in [3]. Due to its small error coefficients this method performs well for non-stiff problems. Since $e_{s+1} \neq 0$ the global order is only $p = 6$, see Table 1. The method cong5 has the knots

$$c = [0.08858795951270395,\ 0.4094668644407347,\ 0.7876594617608471,\ 1,\ 1.409466864440735]. \tag{7}$$
With the choice $v = [0,\ 0,\ 0,\ 0,\ -0.01842446247125309]$ and the $c$-vector (7) the method vcong5 fulfills (5) and $B(s+2)$ and has global order $p = 7$. The method n5 has been constructed with $v = 0$ and
$$c = [0.1365941578442505,\ 0.625,\ 1.230436842527931,\ 1.5,\ 1.6911642569218].$$
These values guarantee (5) and $B(s+2)$ and give a small leading stage error coefficient $\|E_{s+1}\|_2$.

Table 1: EPTRK methods

          name     conditions   order p   ||E_{s+1}||_2   e_{s+1}
4-stage   gauss4   B(8), C(4)      5         1.051        0.2952
          vgauss4  B(6), C(4)      6         1.051        0
          n4       B(6), C(4)      6         2.334        0
5-stage   cong5    B(7), C(5)      6         2.670        0.0475
          vcong5   B(7), C(5)      7         2.670        0
          n5       B(7), C(5)      7         2.385        0

Test Problems

We consider the following test problems.

NOFE, a nonlinear problem proposed by Fehlberg [4]:
$$y_1' = 2 t\, y_1 \log(\max(y_2, 0.001)), \tag{8a}$$
$$y_2' = -2 t\, y_2 \log(\max(y_1, 0.001)), \tag{8b}$$
$$t \in [0, 5], \tag{8c}$$
with initial values taken from the exact solution $y_1(t) = \exp(\sin(t^2))$, $y_2(t) = \exp(\cos(t^2))$.

PROTH, a (non-stiff) Prothero-Robinson problem, see [7]:
$$y' = \lambda (y - \sin(t)) + \cos(t), \qquad t \in [0, 10], \qquad \lambda = 0.1, \tag{9}$$
with exact solution $y(t) = \sin(t)$.

ORBIT, a two-body orbit problem, see [6]:
$$y_1' = y_3, \qquad y_2' = y_4, \tag{10a}$$
$$y_3' = -y_1/r^3, \qquad y_4' = -y_2/r^3, \qquad t \in [0, 10], \tag{10b}$$
where $r = \sqrt{y_1^2 + y_2^2}$, with exact solution $y(t) = [\cos(t),\ \sin(t),\ -\sin(t),\ \cos(t)]$.
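The right-hand sides of (8) and (10) are easy to transcribe; a small Python sketch (our function names), which also lets one check the stated exact solutions by finite differencing:

```python
import numpy as np

def nofe(t, y):
    """NOFE, problem (8) of Fehlberg [4]."""
    y1, y2 = y
    return np.array([ 2.0 * t * y1 * np.log(max(y2, 0.001)),
                     -2.0 * t * y2 * np.log(max(y1, 0.001))])

def nofe_exact(t):
    return np.array([np.exp(np.sin(t**2)), np.exp(np.cos(t**2))])

def orbit(t, y):
    """ORBIT, the two-body problem (10)."""
    y1, y2, y3, y4 = y
    r3 = (y1**2 + y2**2)**1.5
    return np.array([y3, y4, -y1 / r3, -y2 / r3])

def orbit_exact(t):
    return np.array([np.cos(t), np.sin(t), -np.sin(t), np.cos(t)])
```

A central difference of the exact solution should agree with the right-hand side wherever the cutoff in (8) is inactive.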

We have implemented the EPTRK methods of Table 1 in FORTRAN using double precision and applied them to the test problems. (The code for the method cong5 can be obtained from our web site http://www.mathematik.uni-halle.de/institute/numerik/software.) In the figures below we have plotted the number of steps versus the error ERR at the endpoint of the integration, defined by

$$ERR = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(\frac{y_i - y_i(t_{end})}{1 + |y_i(t_{end})|}\right)^2}\,.$$

Notice that $ERR \approx C h^p$ implies $\log(ERR) \approx \log(C) - p \log(h^{-1})$, and therefore the order of an EPTRK method can be estimated from the slope of the lines in the figures. The order 6 of the 4-stage methods n4 and vgauss4, compared with the order 5 of the method gauss4, leads to a better accuracy for all step sizes. Due to smaller error constants in the step formula (3b) (not shown in Table 1), the method n4 with $v = 0$ performs slightly better than vgauss4, in particular for example PROTH, Fig. 2. The fact that cong5 is of order 6 only can be seen for sufficiently small step sizes, i.e. the slope of its lines in the figures is smaller than the slopes of the (nearly parallel) lines of vcong5 and n5. The best overall performance is given by the method n5 with $v = 0$ and small error constants.
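The slope-based order estimate described above amounts to comparing two runs with different step numbers; a one-line sketch (hypothetical helper name):

```python
import math

def estimated_order(err1, err2, n1, n2):
    """Estimate p from ERR ~ C h^p with h ~ 1/n:
    p ~ log(err1/err2) / log(n2/n1)."""
    return math.log(err1 / err2) / math.log(n2 / n1)
```

For an order-6 method, doubling the number of steps should reduce ERR by roughly a factor $2^6 = 64$.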
Figure 1: Results for problem NOFE (8). (Two plots: log(Err) versus number of steps, for gauss4, vgauss4, n4 and for cong5, vcong5, n5.)

4 Stability
The stability of EPTRK methods with $v = 0$ has been investigated in [9]. The adaptation of the concepts there to the case $v \neq 0$ is straightforward. Here we start with a short review of the notation:

Figure 2: Results for problem PROTH (9). (Two plots: log(Err) versus number of steps.)

Figure 3: Results for problem ORBIT (10). (Two plots: log(Err) versus number of steps.)

We apply the EPTRK methods to the linear test equation $y' = \lambda y$, $\mathrm{Re}\,\lambda \le 0$, with constant step size $h$. Then the solution fulfills the recursion
$$(Y_m, y_{m+1})^T = M(z)\, (Y_{m-1}, y_m)^T, \qquad z = h\lambda, \tag{11}$$
with the amplification matrix
$$M(z) = \begin{pmatrix} zA & \mathbb{1} \\ z^2 b^T A + z v^T & 1 + z b^T \mathbb{1} \end{pmatrix}. \tag{12}$$
We define the stability region $S$ of an EPTRK method by $S := \{z : \rho(M(z)) \le 1\}$ and have $\rho(M(z)) < 1$ if $z$ is an inner point of $S$, and hence $(Y_m, y_{m+1}) \to 0$ for $m \to \infty$. We obtain $\rho(M(z))$ by computing the zeros of the stability polynomial $\rho(x, z) := \det(xI - M(z))$. But surprisingly, the new parameters $v$ do not give us any new degree of freedom in the corresponding stability polynomial. We have the following theorem:

Theorem 4.1. For every method (3) with $C(s-1)$ and $B(s)$ there exists a corresponding method with $v^T = 0$, satisfying $C(s-1)$, $B(s)$ and having the same stability polynomial.

Proof. The proof is given in the appendix of this report.
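The amplification matrix (12) and the spectral radius $\rho(M(z))$ are straightforward to evaluate numerically; a sketch (our function names; the coefficients used below are placeholders, not an actual EPTRK method):

```python
import numpy as np

def amplification_matrix(z, A, b, v):
    """M(z) of (12) for the test equation y' = lambda*y, z = h*lambda."""
    s = len(b)
    one = np.ones(s)
    M = np.zeros((s + 1, s + 1), dtype=complex)
    M[:s, :s] = z * A                      # stage block zA
    M[:s, s] = one                         # column 1l
    M[s, :s] = z**2 * (b @ A) + z * v      # row z^2 b^T A + z v^T
    M[s, s] = 1.0 + z * (b @ one)          # scalar 1 + z b^T 1l
    return M

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))
```

Scanning $z$ over a grid in the left half-plane and marking the points with $\rho(M(z)) \le 1$ produces a picture of the stability region $S$.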

A consequence of Theorem 4.1 is that it is not possible to improve the stability of high order EPTRK methods with a choice $v \neq 0$. However, a nontrivial choice of the $v$-vector may be useful to preserve stability for variable step sizes.

5 Conclusion
We have proposed a superconvergence condition (5) in Theorem 2.1 which allows the construction of $s$-stage EPTRK methods with stage order $s$ and convergence order $s+2$. To satisfy this condition with a given set of knots $c_i$ we have introduced new parameters $v$ into the integration scheme (3). Numerical tests with fixed step size and 4- and 5-stage methods have illustrated the convergence result. Though it has been possible to find special knots $c_i$ which guarantee condition (5) for constant step size with $v = 0$, for variable step sizes we are forced to choose $v \neq 0$. Equation (5) becomes
$$b^T\Big(\frac{\sigma_{m+1}^s\, c^{s+1}}{s+1} - \sigma_m^s\, A(\sigma_m)(c - \mathbb{1})^s\Big) + v_m^T\Big(\frac{c^{s+1}}{s+1} - \frac{1}{\sigma_{m-1}}\, A(\sigma_{m-1})(c - \mathbb{1})^s\Big) = 0 \tag{13}$$
with step size ratios $\sigma_m := h_m/h_{m-1}$ and $\sigma_m$-dependent RK matrix $A = A(\sigma_m)$; condition (5) can no longer be satisfied with $v = 0$. Since with $v \neq 0$ the error behaviour is smoother, we hope to obtain a more reliable error estimation and step size control in this case. This generalization to variable step sizes will be a topic of further work. In Theorem 4.1 we have shown that it is not possible to enlarge the stability regions of EPTRK methods with $B(s)$ and $C(s-1)$ using the new parameters $v$ of the integration scheme. However, the construction in [9] of an EPTRK method based on a given stability polynomial can be simplified with $v \neq 0$.

References
[1] Z. Bartoszewski and Z. Jackiewicz, Construction of two-step Runge-Kutta methods of high order for ordinary differential equations, Numerical Algorithms 18 (1998), 51-71.
[2] N.H. Cong, A general family of pseudo two-step Runge-Kutta methods, submitted for publication, 1997.
[3] N.H. Cong, H. Podhaisky, and R. Weiner, Numerical experiments with some explicit pseudo two-step RK methods on a shared memory computer, Computers and Mathematics with Applications 36 (1998), no. 2, 107-116.
[4] E. Fehlberg, Klassische Runge-Kutta-Formeln 5. und 7. Ordnung mit Schrittweitenkontrolle, Computing 4 (1969), 61-71.
[5] J.G. Fei, A class of parallel explicit Runge-Kutta formulas, Chinese Journal of Numerical Mathematics and Applications 16 (1994), 23-36.
[6] E. Hairer, S.P. Nørsett, and G. Wanner, Solving Ordinary Differential Equations I, Springer, 1993.
[7] E. Hairer and G. Wanner, Solving Ordinary Differential Equations II, Springer, 1996.

[8] Z. Jackiewicz and S. Tracogna, A General Class of Two-Step Runge-Kutta Methods for Ordinary Differential Equations, SIAM J. Numer. Anal. 32 (1995), 1390-1427.
[9] H. Podhaisky and R. Weiner, A Class of Explicit Two-Step Runge-Kutta Methods with Enlarged Stability Regions for Parallel Computers, Parallel Computation (P. Zinterhofer, M. Vajtersic, and A. Uhl, eds.), Lecture Notes in Computer Science 1557, Springer, 1999, pp. 68-77.
[10] H. Podhaisky, Parallele explizite Zweischritt-Runge-Kutta-Methoden, Diploma thesis, Universität Halle, 1998.


6 Appendix
We will prove Theorem 4.1 in the following subsections of this appendix.

6.1 The conditions $C(s-1)$ and $B(s)$

Given a method (3), we start with expressing the conditions $C(s-1)$ and $B(s)$ in a convenient way for prescribed nodes $c_i$. The $i$-th unit column vector is denoted by $e_i$. Further we use the following matrices:

$$V_0 := (c_i^{j-1})_{i,j=1,\dots,s}, \qquad V_1 := ((c_i - 1)^{j-1})_{i,j=1,\dots,s}, \qquad S := \mathrm{diag}\Big(1, \frac{1}{2}, \frac{1}{3}, \dots, \frac{1}{s}\Big),$$
$$Q := (q_{ij})_{i,j=1,\dots,s}, \quad q_{ij} = \binom{j-1}{i-1}, \qquad N := \begin{pmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{pmatrix} \in \mathbb{R}^{s \times s},$$
$$R := \sum_{i=1}^{s} (N^T S Q)^{i-1} e_1 e_i^T, \qquad W := V_0 R.$$
Note that $V_0 = V_1 Q$.

Lemma 6.1. The matrix $R$ is a regular, upper triangular matrix. The matrix $W$ is regular.

Proof. $SQ$ is a regular upper triangular matrix, so $N^T S Q$ is a Hessenberg matrix with full lower subdiagonal. Obviously, the vector $(N^T S Q)^{i-1} e_1$ has zeros below the $i$-th component and a nonvanishing $i$-th component. Hence $R$ is a regular upper triangular matrix and $W = V_0 R$ is regular.
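These matrices are easy to build numerically, which gives a quick sanity check of $V_0 = V_1 Q$ and of Lemma 6.1 (a NumPy sketch; the function name is ours):

```python
import numpy as np
from math import comb

def appendix_matrices(c):
    """Build V0, V1, S, N, Q, R, W of Section 6.1 for given nodes c."""
    s = len(c)
    V0 = np.vander(c, s, increasing=True)            # (c_i^{j-1})
    V1 = np.vander(c - 1.0, s, increasing=True)      # ((c_i - 1)^{j-1})
    S = np.diag([1.0 / k for k in range(1, s + 1)])
    N = np.diag(np.ones(s - 1), 1)                   # ones on the superdiagonal
    # 0-based indexing: q_{ij} = binom(j-1, i-1)
    Q = np.array([[comb(j, i) for j in range(s)] for i in range(s)], float)
    B = N.T @ S @ Q
    e = np.eye(s)
    R = sum(np.linalg.matrix_power(B, i) @ np.outer(e[:, 0], e[:, i])
            for i in range(s))                       # R = sum (N^T S Q)^{i-1} e_1 e_i^T
    W = V0 @ R
    return V0, V1, S, N, Q, R, W
```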

Lemma 6.2. The matrix $A$ satisfies condition $C(s-1)$ if and only if $W^{-1} A W = K$ with
$$K = \begin{pmatrix} 0 & 0 & \cdots & 0 & p_0 \\ 1 & 0 & \cdots & 0 & p_1 \\ 0 & 1 & \cdots & 0 & p_2 \\ \vdots & & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & p_{s-1} \end{pmatrix}, \tag{14}$$
where the $p_i$ are arbitrary.

Proof. We write the condition $C(s-1)$ in the form
$$0 = (A V_1 - V_0 N^T S)\, e_k \qquad \text{for } k = 1, \dots, s-1.$$
Because $Q$ and $R$ are regular upper triangular matrices, this is equivalent to
$$0 = (A V_1 - V_0 N^T S)\, Q R e_k = A V_0 R e_k - V_0 N^T S Q (N^T S Q)^{k-1} e_1 = A W e_k - V_0 R e_{k+1} = A W e_k - W e_{k+1}$$
for $k = 1, \dots, s-1$. Thus $K$ has the structure above with arbitrary last column if and only if $A$ satisfies $C(s-1)$.

Next we express $b^T$ using the $B(s)$ condition and $v^T$. Using the matrices defined above, $B(s)$ is given by
$$b^T V_0 + v^T V_1 = \mathbb{1}^T S \tag{15}$$
$$\Leftrightarrow \qquad b^T W = -v^T V_0 Q^{-1} R + \mathbb{1}^T S R. \tag{16}$$

6.2 Simplification of the stability matrix

First we transform via
$$M_2(z) := \begin{pmatrix} I & 0 \\ -z b^T & 1 \end{pmatrix} M(z) \begin{pmatrix} I & 0 \\ z b^T & 1 \end{pmatrix} = \begin{pmatrix} z(A + \mathbb{1} b^T) & \mathbb{1} \\ z(b^T + v^T) & 1 \end{pmatrix}$$
to eliminate the $z^2$ expressions in $M(z)$. Next we use the matrix $W$ to obtain
$$M_3(z) := \begin{pmatrix} W^{-1} & 0 \\ 0 & 1 \end{pmatrix} M_2(z) \begin{pmatrix} W & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} z(K + e_1 b^T W) & e_1 \\ z(b^T W + v^T W) & 1 \end{pmatrix}.$$
Note that $W^{-1} \mathbb{1} = e_1$. We get the stability polynomial:
$$\rho(z, x) = \det(M_3(z) - xI) = \det \begin{pmatrix} z(K + e_1 b^T W) - xI & e_1 \\ z(b^T W + v^T W) & 1 - x \end{pmatrix}.$$
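The two similarity transforms above preserve the eigenvalues and hence the stability polynomial; this is easy to confirm numerically. A sketch (our function name; random coefficients merely exercise the identity, they are not a method):

```python
import numpy as np

def check_similarity(z, A, b, v):
    """Confirm M2(z) = T^{-1} M(z) T with T = [[I, 0], [z b^T, 1]],
    i.e. the first transform of Section 6.2 leaves the spectrum unchanged."""
    s = len(b)
    one = np.ones(s)
    # M(z) as in (12)
    M = np.block([[z * A, one[:, None]],
                  [(z**2 * (b @ A) + z * v)[None, :],
                   np.array([[1.0 + z * (b @ one)]])]])
    # closed form of M2(z) derived above
    M2 = np.block([[z * (A + np.outer(one, b)), one[:, None]],
                   [(z * (b + v))[None, :], np.array([[1.0]])]])
    T = np.block([[np.eye(s), np.zeros((s, 1))],
                  [(z * b)[None, :], np.array([[1.0]])]])
    return np.allclose(np.linalg.inv(T) @ M @ T, M2)
```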
We expand the determinant by the last column. In the adjunct to the upper right 1 (from $e_1$) we exchange the last row $s-1$ times with the row above to move it to the first row:
$$\rho(z, x) = (-1)^{s+2} (-1)^{s-1} \det\big[(I - e_1 e_1^T)(zK - xI) + e_1 z (b^T W + v^T W)\big] + (1 - x) \det\big(z(K + e_1 b^T W) - xI\big).$$
Note that the factor $(-1)^{s+2}$ is the sign of the adjunct in the expansion formula ($1 + s + 1$), while the factor $(-1)^{s-1}$ results from the row exchanges in the adjunct. We split the first row of the second determinant to get
$$\rho(z, x) = -\det\big[(I - e_1 e_1^T)(zK - xI) + e_1 (x z\, b^T W + z\, v^T W)\big] + (1 - x) \det(zK - xI).$$

6.3 Computation of the stability polynomial

The matrix $K$ is the companion matrix of the polynomial
$$p(\xi) := \xi^s - p_{s-1} \xi^{s-1} - \dots - p_1 \xi - p_0. \tag{17}$$
Thus
$$\det(zK - xI) = (-z)^s \det\Big(\frac{x}{z} I - K\Big) = (-z)^s\, p\Big(\frac{x}{z}\Big). \tag{18}$$
To compute the remaining part of the stability polynomial,
$$-\det\big[(I - e_1 e_1^T)(zK - xI) + e_1 x z\, b^T W + e_1 z\, v^T W\big],$$
1

we expand the determinant by the first row. Suppose we delete the first row and the $i$-th column; this leaves the block triangular matrix
$$\begin{pmatrix} z & -x & & & & \\ & \ddots & \ddots & & & \vdots \\ & & z & -x & & \\ & & & z & \ddots & z p_{s-2} \\ & & & & z & z p_{s-1} - x \end{pmatrix}. \tag{19}$$
The determinant of the upper left block is $z^{i-1}$. The lower right block is of the same form as $zK - xI$ but of dimension $s - i$; its determinant is (see (18)) $(-z)^{s-i}\, p_i(\frac{x}{z})$, where $p_i(\xi)$ is the polynomial
$$p_i(\xi) = \xi^{s-i} - p_{s-1} \xi^{s-i-1} - \dots - p_{i+1} \xi - p_i.$$
Hence the second determinant is
$$\sum_{i=1}^{s} (-1)^{i+1} z\, (x b^T + v^T) W e_i\; z^{i-1} (-z)^{s-i}\, p_i\Big(\frac{x}{z}\Big)$$

and the stability polynomial is given by
$$\rho(z, x) = (1 - x)(-z)^s\, p\Big(\frac{x}{z}\Big) + (-z)^s \sum_{i=1}^{s} (x b^T + v^T) W e_i\; p_i\Big(\frac{x}{z}\Big).$$
Now we substitute (16) to eliminate $b^T$:
$$\rho(z, x) = (-z)^s \Big[ p\Big(\frac{x}{z}\Big) + \sum_{i=1}^{s} (v^T W e_i)\, p_i\Big(\frac{x}{z}\Big) \Big] - x (-z)^s \Big[ p\Big(\frac{x}{z}\Big) + \sum_{i=1}^{s} \big({-\mathbb{1}^T S R e_i} + v^T V_0 Q^{-1} R e_i\big)\, p_i\Big(\frac{x}{z}\Big) \Big].$$

We are prepared for the final step: A method satisfying $C(s-1)$ and $B(s)$ is uniquely determined (for fixed nodes $c_i$) by the last column $p$ of $K$ and the weights $v^T$. Our claim is that for any parameters $(p, v^T)$ there exist parameters $(\tilde{p}, 0)$ that give the same stability function. We choose
$$\tilde{p}(\xi) = p(\xi) + \sum_{i=1}^{s} (v^T W e_i)\, p_i(\xi) \tag{20}$$
and show
$$\tilde{p}(\xi) + \sum_{i=1}^{s} (-\mathbb{1}^T S R e_i)\, \tilde{p}_i(\xi) = p(\xi) + \sum_{i=1}^{s} \big({-\mathbb{1}^T S R e_i} + v^T V_0 Q^{-1} R e_i\big)\, p_i(\xi). \tag{21}$$
We write the polynomials $p$, $p_i$ using the column vectors
$$\mathbf{p} := (-p_0, \dots, -p_{s-1}, 1)^T, \qquad \boldsymbol{\xi} := (1, \xi, \dots, \xi^s)^T \;\Rightarrow\; p(\xi) = \boldsymbol{\xi}^T \mathbf{p}, \quad p_i(\xi) = \boldsymbol{\xi}^T N^i \mathbf{p}.$$
This allows us to express the polynomials $\tilde{p}$, $\tilde{p}_i$ in a convenient way:
$$\tilde{p}(\xi) = \boldsymbol{\xi}^T \Big[ \mathbf{p} + \sum_{j=1}^{s} (v^T W e_j)\, N^j \mathbf{p} \Big],$$
$$\tilde{p}_i(\xi) = \boldsymbol{\xi}^T N^i \tilde{\mathbf{p}} = \boldsymbol{\xi}^T \Big[ N^i \mathbf{p} + \sum_{j=1}^{s} (v^T V_0 R e_j)\, N^{i+j} \mathbf{p} \Big].$$
Substituting the expressions for $\tilde{p}$, $\tilde{p}_i$, our assertion becomes
$$\boldsymbol{\xi}^T \mathbf{p} + \sum_{i=1}^{s} (v^T V_0 R e_i)\, \boldsymbol{\xi}^T N^i \mathbf{p} + \sum_{i=1}^{s} (-\mathbb{1}^T S R e_i) \Big[ \boldsymbol{\xi}^T N^i \mathbf{p} + \sum_{j=1}^{s} (v^T V_0 R e_j)\, \boldsymbol{\xi}^T N^{i+j} \mathbf{p} \Big]$$
$$= \boldsymbol{\xi}^T \mathbf{p} + \sum_{i=1}^{s} (-\mathbb{1}^T S R e_i)\, \boldsymbol{\xi}^T N^i \mathbf{p} + \sum_{i=1}^{s} (v^T V_0 Q^{-1} R e_i)\, \boldsymbol{\xi}^T N^i \mathbf{p},$$
which simplifies to
$$\sum_{i=1}^{s} (v^T V_0 R e_i)\, \boldsymbol{\xi}^T N^i \mathbf{p} + \sum_{i=1}^{s} (-\mathbb{1}^T S R e_i) \sum_{j=1}^{s} (v^T V_0 R e_j)\, \boldsymbol{\xi}^T N^{i+j} \mathbf{p} = \sum_{i=1}^{s} (v^T V_0 Q^{-1} R e_i)\, \boldsymbol{\xi}^T N^i \mathbf{p}.$$
We compare the coefficients of $\boldsymbol{\xi}^T N^i \mathbf{p}$ for $i = 1, \dots, s-1$ (note that $N^s = 0$) and find:
$$v^T V_0 (I - Q^{-1}) R e_i = \sum_{j=1}^{i-1} v^T V_0 R e_j\, (\mathbb{1}^T S R e_{i-j}).$$

This identity is verified in Lemma 6.3 below. This completes the proof. Note that we did not use the fact that the simplified method is based on the same nodes. The stability function of the simplified method is completely determined by the vector $\tilde{\mathbf{p}}$. This implies that we can choose arbitrary $c_i$ for the method with $v^T = 0$.

Lemma 6.3. For $i = 1, \dots, s-1$ it holds that
$$(I - Q^{-1}) R e_i = \sum_{j=1}^{i-1} (\mathbb{1}^T S R e_{i-j})\, R e_j.$$

Proof. We prove the lemma by induction. For $i = 1$ the assertion is valid because $R e_1 = e_1$ and $Q^{-1} e_1 = e_1$. For $i = k+1$, $1 \le k \le s-2$, we have to prove
$$0 = (I - Q^{-1}) R e_{k+1} - \sum_{j=1}^{k} (\mathbb{1}^T S R e_{k+1-j})\, R e_j.$$
Substituting $R e_{k+1} = N^T S Q\, R e_k$ and shifting the summation index yields
$$0 = (I - Q^{-1}) N^T S Q\, R e_k - \sum_{j=0}^{k-1} (\mathbb{1}^T S R e_{k-j})\, R e_{j+1}.$$
We multiply the induction hypothesis (set $i = k$) by $N^T S Q$ and subtract it:
$$0 = -Q^{-1} N^T S Q\, R e_k + N^T S\, R e_k - (\mathbb{1}^T S R e_k)\, e_1 = -Q^{-1} \big(N^T S Q - Q N^T S + e_1 \mathbb{1}^T S\big)\, R e_k.$$
The last component of $R e_k$ vanishes since $R$ is an upper triangular matrix and $k \le s-2$. We show further that the first $s-1$ columns of $[N^T S Q - Q N^T S + e_1 \mathbb{1}^T S]$ vanish: The first row is $0 - \mathbb{1}^T N^T S + \mathbb{1}^T S = e_s^T S$, i.e., the first $s-1$ entries in the first row vanish. For $2 \le i \le s$, $1 \le j \le s-1$ we get
$$(N^T S Q)_{ij} = (S Q)_{i-1,j} = \frac{1}{i-1} \binom{j-1}{i-2} = \frac{(j-1)!}{(i-1)!\,(j-i+1)!}$$
and
$$(Q N^T S)_{ij} = (Q N^T)_{ij}\, \frac{1}{j} = Q_{i,j+1}\, \frac{1}{j} = \binom{j}{i-1} \frac{1}{j} = (N^T S Q)_{ij}.$$
So the first $s-1$ columns of $[N^T S Q - Q N^T S + e_1 \mathbb{1}^T S]$ vanish and hence the product $[N^T S Q - Q N^T S + e_1 \mathbb{1}^T S]\, R e_k$ vanishes.
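Lemma 6.3 is also easy to confirm numerically: the identity involves only $S$, $Q$, $N$ and $R$, so it is independent of the nodes $c_i$. A NumPy sketch (our function name):

```python
import numpy as np
from math import comb

def lemma63_max_residual(s):
    """Max norm of (I - Q^{-1}) R e_i - sum_{j<i} (1^T S R e_{i-j}) R e_j
    over i = 1..s-1, for the matrices S, N, Q, R of Section 6.1."""
    S = np.diag([1.0 / k for k in range(1, s + 1)])
    N = np.diag(np.ones(s - 1), 1)
    Q = np.array([[comb(j, i) for j in range(s)] for i in range(s)], float)
    B = N.T @ S @ Q
    e = np.eye(s)
    R = sum(np.linalg.matrix_power(B, i) @ np.outer(e[:, 0], e[:, i])
            for i in range(s))
    one = np.ones(s)
    worst = 0.0
    for i in range(1, s):                    # 1-based i = 1..s-1; R[:, i-1] = R e_i
        lhs = (np.eye(s) - np.linalg.inv(Q)) @ R[:, i - 1]
        rhs = sum(((one @ S @ R[:, i - j - 1]) * R[:, j - 1]
                   for j in range(1, i)), np.zeros(s))
        worst = max(worst, np.linalg.norm(lhs - rhs))
    return worst
```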

