IMA Journal of Numerical Analysis (1983) 3, 127-140

An Iteration Scheme for Implicit Runge-Kutta Methods

G. J. COOPER
School of Mathematical and Physical Sciences, University of Sussex, Brighton BN1 9QH

AND J. C. BUTCHER
Department of Computer Science, University of Auckland, Private Bag, Auckland

[Received 27 April 1982 and in revised form 8 October 1982]

In the implementation of implicit Runge-Kutta methods the computational effort is typically dominated by the cost of solving large sets of nonlinear equations. As an alternative to the usual Newton method, an iteration scheme is considered that sacrifices superlinear convergence for reduced linear algebra costs. By modifying this new scheme, using the successive over-relaxation technique, an improvement is achieved in the rate of convergence.
1. Introduction
The various implementation schemes for implicit Runge-Kutta methods which have been proposed may be divided roughly into two types. In one approach, described by Chipman (1973), a modified Newton method is used and schemes are developed to solve the resulting linear equations efficiently. Bickart (1977) gives such a scheme. This is similar to a later proposal due to Collings & Tee (1977) but both schemes may aggravate ill-conditioning. Another scheme of this general type, due to Butcher (1976), uses a similarity transformation of the coefficient matrix of the method and is most effective when the matrix has a single point spectrum. Each step of the Newton iteration involves two vector transformations. Suppose that an s stage method is applied to a system of n differential equations. Then each vector transformation requires O(s²n) operations. When the matrix of coefficients does not have a single point spectrum, Enright (1976) suggested the use of an additional similarity transformation. This is used to transform the Jacobian of the differential system to Hessenberg form. Then each of the corresponding vector transformations requires O(s²n + sn²) operations, so that this scheme is comparatively inefficient when n > s. In addition, this scheme does not seem suitable when the coefficient matrix has a complex spectrum although Varah (1979) has proposed the use of complex arithmetic. A final proposal of this first type, due to Cash (1977), restricts consideration to Runge-Kutta methods of a special form which makes the system of equations effectively of lower dimension. As in the approaches of Bickart and of
Collings & Tee, the necessity to compute polynomials in the Jacobian matrix is a possible cause of difficulty for this method.

The other approach is to use schemes based directly on iterative procedures. Frank & Ueberhuber (1977) describe the use of iterated defect corrections. More general iteration schemes have been discussed by Butcher (1980). The two approaches have much in common. Indeed, in this article, we discuss an iteration scheme which may be regarded as a generalization of the scheme for singly implicit methods. It originated as a generalization of an iteration scheme, given by Butcher (1980), in which the Jacobian is used explicitly. The convergence rate for a linear test problem is examined, and it is shown that this may be improved by applying the successive over-relaxation technique. The relaxation parameter depends only on the coefficient matrix.

2. Motivation

Consider the initial-value problem for a system of n differential equations
x'=f{x),
x(to) = xo.
An s stage Runge
Let xr be an approximation to x(tr), tr — to+rh, r = 1, 2, 3, Kutta method gives
a
xr+1 = x r +/i ? btJXyt)
ii
where ylty2,
? ? ,y, satisfy the sn equations
y, = x,+h t
VCKJ).
' = 1.2,.. ., s.
(2.1)
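As an illustration of what the stage equations (2.1) involve computationally, the sketch below assembles the sn equations for a 2-stage Gauss method and solves them with a Newton-type iteration using a frozen Jacobian (the approach formalized below). The small linear test system and the iteration count are our own illustrative choices, not taken from the paper.

```python
import numpy as np

# Solve the stage equations (2.1) for one step of the 2-stage Gauss
# method (order 4) applied to x' = f(x), using a Newton-type iteration
# with a Jacobian frozen at the step point.
s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3/6],
              [0.25 + s3/6, 0.25]])      # Gauss coefficients, s = 2

def f(x):                                 # illustrative stiff linear system
    return np.array([-2.0*x[0] + x[1], 50.0*x[0] - 51.0*x[1]])

def jac(x):
    return np.array([[-2.0, 1.0], [50.0, -51.0]])

def stages(x0, h, iters=20):
    n, s = x0.size, A.shape[0]
    J = jac(x0)                           # frozen Jacobian
    X = np.tile(x0, s)                    # X = x0 (+) x0
    Y = X.copy()                          # initial iterate y_i = x0
    M = np.eye(s*n) - h*np.kron(A, J)     # modified Newton matrix
    for _ in range(iters):
        F = np.concatenate([f(Y[i*n:(i+1)*n]) for i in range(s)])
        D = X - Y + h*np.kron(A, np.eye(n)) @ F   # residual of (2.1)
        Y = Y + np.linalg.solve(M, D)
    return Y.reshape(s, n)

Y = stages(np.array([1.0, 1.0]), h=0.1)
# each stage should satisfy y_i = x0 + h * sum_j a_ij f(y_j)
res = Y - (np.array([1.0, 1.0]) + 0.1*A @ np.array([f(y) for y in Y]))
print(np.max(np.abs(res)))
```

For a linear problem with an exact frozen Jacobian the iteration is exact Newton and converges immediately; for nonlinear problems the convergence is linear, which is the situation analysed in the rest of the paper.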
We are concerned with methods suitable for stiff systems, so that the matrix of coefficients A = {a_{ij}} is not strictly lower triangular, and particularly concerned with the Gauss methods of maximum order 2s. Apart from one real eigenvalue when s is odd, the eigenvalues occur as distinct complex conjugate pairs. Let Y = y₁ ⊕ y₂ ⊕ ... ⊕ y_s be an sn column vector and let X = x_r ⊕ x_r ⊕ ... ⊕ x_r and F(Y) = f(y₁) ⊕ f(y₂) ⊕ ... ⊕ f(y_s). Then equations (2.1) are represented by

    Y = X + h(A ⊗ I)F(Y),    (2.2)

where A ⊗ I is the direct product of A with the n × n identity matrix and, in general,

    A ⊗ B = [ a₁₁B   a₁₂B   ...   a₁ₛB
              a₂₁B   a₂₂B   ...   a₂ₛB
               ...
              aₛ₁B   aₛ₂B   ...   aₛₛB ].

Equation (2.2) may be solved by the Newton iteration but, because the Jacobian of f is expensive to evaluate, it is computed only occasionally. Let J be the Jacobian evaluated at some recent point x_r. The modified iteration scheme evaluates
Δ¹, Δ², Δ³, ..., and hence Y¹, Y², Y³, ..., to satisfy the equations

    [I − h(A ⊗ J)]Δ^m = D^{m-1},    Y^m = Y^{m-1} + Δ^m,    m = 1, 2, 3, ...,    (2.3)

where D^{m-1} = X − Y^{m-1} + h(A ⊗ I)F(Y^{m-1}), m = 1, 2, 3, .... In each step of this iteration a set of sn linear equations has to be solved. For a singly implicit method, there is a real nonsingular matrix S such that S⁻¹AS = λI + T = Λ, where T is strictly lower triangular. After applying this transformation, the iteration scheme (2.3) becomes

    [I − h(Λ ⊗ J)]E^m = (S⁻¹ ⊗ I)D^{m-1},    Y^m = Y^{m-1} + (S ⊗ I)E^m,    m = 1, 2, 3, ...,    (2.4)
and, since T is strictly lower triangular, each step requires the solution of s sets of n linear equations. The transformations D^{m-1} → (S⁻¹ ⊗ I)D^{m-1} and E^m → (S ⊗ I)E^m each require O(s²n) operations. The transformation E → (T ⊗ J)E requires O(s²n + sn²) operations but, when n > s, this work may be reduced by expressing the iteration scheme (2.4) in the form

    [I − hλ(I ⊗ J)]E^m = (BS⁻¹ ⊗ I)D^{m-1} + (L ⊗ I)E^m,    m = 1, 2, 3, ...,    (2.5)

where B = λΛ⁻¹ and L = Λ⁻¹T = I − B is strictly lower triangular. In general, the transformation E → (L ⊗ I)E requires O(s²n) operations, but this may be lower in special cases.

3. The Iteration Scheme

Let B and S be real nonsingular matrices and let L be a strictly lower triangular matrix and λ a real constant. We consider the iteration scheme

    [I − hλ(I ⊗ J)]E^m = (BS⁻¹ ⊗ I)D^{m-1} + (L ⊗ I)E^m,    Y^m = Y^{m-1} + (S ⊗ I)E^m,    m = 1, 2, 3, ...,    (3.1)
where D^{m-1} = X − Y^{m-1} + h(A ⊗ I)F(Y^{m-1}), m = 1, 2, 3, ..., and assume that the sequence {Y^m} has limit Y. If f satisfies a Lipschitz condition on Rⁿ, it follows that

    Y = X + h(A ⊗ I)F(Y).

The aim is to examine the rate of convergence of this iteration scheme when it is applied to the linear test problem x' = qx. In particular, rapid convergence is required for all z = hq with negative real part. For this test problem,

    Y − Y^m = M(z)(Y − Y^{m-1}),    m = 1, 2, 3, ...,

and the rate of convergence depends on the spectral radius ρ[M(z)] of the iteration matrix M(z). This is the same as the spectral radius of

    M̂(z) = I − [(1 − zλ)I − L]⁻¹B(I − zΛ),    (3.2)

where Λ = S⁻¹AS.
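The iteration matrix (3.2) is easy to explore numerically. The sketch below (our own check, with illustrative values of λ and T) evaluates ρ[M̂(z)] and confirms the observation made in the next paragraph: for the singly implicit choices L = Λ⁻¹T and B = λΛ⁻¹ the matrix M̂(z) vanishes identically.

```python
import numpy as np

# Spectral radius of the iteration matrix (3.2),
#   Mhat(z) = I - [(1 - z*lam)I - L]^{-1} B (I - z*Lam),
# for the linear test problem x' = qx, z = hq.  Here Lam is a singly
# implicit form lam*I + T, for which B = lam*Lam^{-1}, L = Lam^{-1}T
# make Mhat(z) vanish identically (illustrative values only).
lam = 0.3
T = np.array([[0.0, 0.0], [0.7, 0.0]])      # strictly lower triangular
Lam = lam*np.eye(2) + T
B = lam*np.linalg.inv(Lam)
L = np.linalg.inv(Lam) @ T                   # note L = I - B

def rho(z):
    M = np.eye(2) - np.linalg.solve((1 - z*lam)*np.eye(2) - L,
                                    B @ (np.eye(2) - z*Lam))
    return max(abs(np.linalg.eigvals(M)))

print(rho(-1.0), rho(-10.0 + 5.0j))          # both ~ 0
```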
One approach, that could be considered, would be to choose λ, L, B and S so as to minimize sup ρ[M̂(z)], where the supremum is taken over all complex numbers z with negative real part. We shall, however, look at a simpler approach motivated by the special case of singly implicit methods. For a singly implicit method, S and Λ are chosen so that Λ = λI + T, where T is strictly lower triangular. Then the choices L = Λ⁻¹T and B = λΛ⁻¹ give M̂ = 0. Thus, for the test problem, convergence is superlinear for all z ≠ λ⁻¹. On the other hand, convergence is superlinear only if det[I − M̂(z)] = 1 and hence only if φ(z) = 0, where

    φ(z) = 1 − det B det(I − zΛ)/(1 − zλ)^s.    (3.3)

It follows that superlinear convergence, on a nonempty open subset of the complex plane, can be achieved only for singly implicit methods. Instead, one might consider the problem of determining λ and det B to minimize sup_{z ∈ C⁻} |φ(z)|, C⁻ = {z ∈ C : Re z < 0}. When s = 2 and the eigenvalues of A have positive real parts, it can be shown that this problem is solved by choosing λ and det B so that

    φ(0) = φ(∞) and |φ(z)| is constant on the imaginary axis.    (3.4)

Consider the basic iteration scheme (3.1). Since I − hλ(I ⊗ J) is a block diagonal matrix and L ⊗ I is strictly lower triangular, this is reminiscent of a Gauss-Seidel iteration. Now (3.2) gives

    M̂(z) = [(1 − zλ)I − L]⁻¹[I − L − B − z(λI − BΛ)],

and the resemblance is made more marked by requiring that M̂(z) be the product of a lower triangular matrix and a strictly upper triangular matrix. This implies that the first column of M̂(z) is zero and one eigenvalue is zero. When s = 2 the other eigenvalue is φ(z) and λ and det B may be chosen so that (3.4) holds. In this case, and in a rather more general case, it happens that successive over-relaxation may be used to improve the rate of convergence over much of the complex plane. The case s = 2 is also of interest for the following reason. For each s stage method of order 2s there is a matrix S such that S⁻¹AS = Λ₁ ⊕ Λ₂ ⊕ ... ⊕ Λ_r, a real block diagonal matrix. The submatrices are chosen to have the form

    Λ_i = [ a_i         a_i − b_i
            a_i + b_i   a_i       ],

except that, when s is odd, Λ_r = [a_r]. With this choice S = I when s = 2. Many other methods have coefficient matrices which may be transformed to real block diagonal matrices of the same form. Now suppose that L = L₁ ⊕ L₂ ⊕ ... ⊕ L_r
and B = B₁ ⊕ B₂ ⊕ ... ⊕ B_r have the same block diagonal form. When s is odd L_r = [0] and B_r = [β_r]. The spectral radius is given by

    ρ[M̂(z)] = max ρ[M̂_i(z)],    i = 1, 2, ..., r,    (3.5)

where M̂_i(z) = I − [(1 − zλ)I − L_i]⁻¹B_i(I − zΛ_i).
4. The Case s = 2

Consider a method with a transformed coefficient matrix

    Λ = [ a        a − b
          a + b    a     ],    a, b > 0.

Such a transformation is always possible when the coefficient matrix A has distinct eigenvalues with positive real parts. The aim is to obtain results which are applicable also when s > 2. Suppose that the iteration matrix

    M̂(z) = [(1 − zλ)I − L]⁻¹[I − L − B − z(λI − BΛ)]

is the product of a lower triangular matrix and a strictly upper triangular matrix. Then λ and det B may be chosen so that the nonzero eigenvalue φ(z) satisfies (3.4), but this is too restrictive. Instead, suppose that a positive λ ≠ a is given and that α and det B are chosen so that

    φ(z) = (1 − det B) g(z),    g(z) = ((1 + αz)/(1 − λz))².

It follows from (3.3) that

    α = (aλ − b²)/(a − λ),    1 − det B = (a − λ)²/(a² − b²),    (4.1)

which gives one equation for the elements of B. Since the first column of M̂(z) is zero, the first columns of I − L − B and λI − BΛ must be zero. Hence there are five equations for the five unknown elements of L and B and these equations give

    L = [ 0                               0
          (λ² − 2aλ + b²)/(λ(b − a))      0 ],

    B = [ 1                                (λ − a)/(a + b)
          −(λ² − 2aλ + b²)/(λ(b − a))     a(λ² − 2aλ + b²)/(λ(b − a)(a + b)) ].    (4.2)
Note that φ(0) = 1 − det B and

    φ(∞) = (aλ − b²)²/(λ²(a² − b²)).

A bound for the spectral radius is obtained from (4.1) by noting that ρ[M̂(z)] = |φ(z)| = |1 − det B| |g(z)|. The equation |g(z)| = c describes a circle in the z plane and |g(z)| ≤ c
for all z on and exterior to the corresponding circle in the z plane. In the degenerate case |g(z)| = 1 for Re z = 0. Then α = λ = b and

    ρ[M̂(z)] ≤ (b − a)/(a + b),    Re z ≤ 0    (b > a).

This bound may be improved when information about the eigenvalues of the Jacobian is available. Suppose that the maximum real part of the eigenvalues of J is bounded by γ ≤ 0. Then λ should be chosen so that

    hγ = (λ² − b²)/(2λ(aλ − b²)),    λ > b.    (4.3)

When γ < 0 the rate of convergence increases as h increases. For the method of order 4 it suffices to choose S = I so that

    Λ = A = [ 1/4            1/4 − √3/6
              1/4 + √3/6     1/4        ].

The choice λ = √3/6 gives ρ[M̂(z)] ≤ 7 − 4√3 ≈ 0.0718 and this is appreciably better than the results obtained by Butcher (1980). When hγ = −2, (4.3) gives λ = (1 + √7)/12 and then

    ρ[M̂(z)] ≤ (53 − 20√7)/3 ≈ 0.0283,    Re z ≤ −2.
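Both bounds can be checked numerically. The sketch below (our own verification, not part of the original text) samples |φ(z)| on the boundary lines Re z = 0 and Re z = −2, on which the suprema over the respective half planes are attained:

```python
import numpy as np

# Check the s = 2 bounds for the Gauss method of order 4
# (a = 1/4, b = sqrt(3)/6).  With lam = b, |phi(z)| <= 7 - 4*sqrt(3)
# for Re z <= 0; with lam = (1 + sqrt(7))/12, the choice given by (4.3)
# for h*gamma = -2, |phi(z)| <= (53 - 20*sqrt(7))/3 for Re z <= -2.
a, b = 0.25, np.sqrt(3.0)/6

def phi(z, lam):
    alpha = (a*lam - b*b)/(a - lam)            # alpha from (4.1)
    one_minus_detB = (a - lam)**2/(a*a - b*b)  # 1 - det B from (4.1)
    return one_minus_detB*((1 + alpha*z)/(1 - lam*z))**2

ys = np.linspace(-100.0, 100.0, 20001)
m1 = max(abs(phi(1j*y, b)) for y in ys)
lam2 = (1 + np.sqrt(7.0))/12
m2 = max(abs(phi(-2 + 1j*y, lam2)) for y in ys)
print(m1, 7 - 4*np.sqrt(3.0))         # ~0.0718, constant on the axis
print(m2, (53 - 20*np.sqrt(7.0))/3)   # ~0.0283, constant on Re z = -2
```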
For the matrices L and B defined by (4.2) it may be shown that

    M̂(z) = [I − E(z)]⁻¹U(z),

where

    E(z) = [ 0                                          0
             ((a − λ)/(a − b))·(1 + αz)/(1 − λz)        0 ],

    U(z) = [ 0        ((a − λ)/(a + b))·(1 + αz)/(1 − λz)
             0        0                                   ],

so that M̂(z) may be viewed as the iteration matrix for a Gauss-Seidel scheme. The matrix for the corresponding successive over-relaxation scheme would be

    M(ω, z) = [I − ωE(z)]⁻¹[(1 − ω)I + ωU(z)],

and it is natural to try to find matrices L(ω) and B(ω) so that

    M(ω, z) = [(1 − zλ)I − L(ω)]⁻¹[I − L(ω) − B(ω) − z(λI − B(ω)Λ)].

It happens that this is the case when

    L(ω) = ωL,    B(ω) = [ ω                               −ω(a − λ)/(a + b)
                            −ω(a − λ)(λ + ωα)/((a − b)λ)    ω + ω²α(a − λ)²/(λ(a² − b²)) ],
and then it may be shown that

    M(ω, z) = [ 1 − ω                   ωq f(z)
                ω(1 − ω) p f(z)         1 − ω + ω²pq f(z)² ],

where f(z) = (1 + αz)/(1 − λz), p = (a − λ)/(a − b) and q = (a − λ)/(a + b). The eigenvalues of M(ω, z) are the solutions φ̂ of

    (φ̂ + ω − 1)² = ω² ((a − λ)²/(a² − b²)) f(z)² φ̂,

since pq = (a − λ)²/(a² − b²) = 1 − det B.
To select a relaxation parameter ω ≠ 1 consider the transformation

    ẑ = (λ − α)/(λ + α) + (2αλ/(λ + α)) z.

This transformation gives

    (φ̂ + ω − 1)² = ω² k ĝ(ẑ) φ̂,    ĝ(ẑ) = ((1 + ẑ)/(1 − ẑ))²,    (4.4)

where

    k = (aλ − b²)²/(λ²(a² − b²)).
The product of the zeros of the quadratic equation (4.4) for φ̂ is (ω − 1)², and ω should be chosen so that each zero has modulus |ω − 1|. This is not possible for all ẑ but, since |ĝ(ẑ)| = 1 for Re ẑ = 0 and since ĝ(0) = ĝ(∞) = 1, it is appropriate to consider the case ĝ(ẑ) = 1 and therefore to choose ω so that

    ω²k = 4(ω − 1),    ω = 2/(1 + √(1 − k)).

For this value of ω the solutions of the quadratic equation (4.4) are

    φ̂(ẑ) = (ω − 1){2ĝ(ẑ) − 1 ± 2[ĝ(ẑ)² − ĝ(ẑ)]^{1/2}}.

When ẑ is real and negative |φ̂(ẑ)| = 1 − ω. In general, the equation |φ̂(ẑ)| = c(1 − ω) describes a closed curve in the ẑ plane and |φ̂(ẑ)| ≤ c(1 − ω) for all ẑ on and exterior to this curve. These curves, which are symmetric about the real axis, may be plotted by noting that |φ̂(ζ²)| = c(1 − ω) describes a circle in the ζ plane. Typical curves are sketched in Fig. 1. The origin is exterior to the curves and, as c decreases to 1, the curves approach the origin. The choice of λ defines both k and the transformation z → ẑ, and it is not sufficient to choose λ merely so that |k| is small. Since hJ may have eigenvalues close to the origin, it is desirable that the transformation maps the left half z plane into the left half ẑ plane. For the case b > a, which is of principal interest here, this is
FIG. 1. Rate of convergence regions for s = 2.
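The modulus property underlying the curves of Fig. 1 can be checked directly from the explicit solution of (4.4). The sketch below (our own check) uses the order-4 value of k treated in the text; the sample points are illustrative:

```python
import numpy as np

# Evaluate the SOR eigenvalue phi_hat of (4.4) with omega chosen so that
# omega^2*k = 4*(omega - 1), and verify that |phi_hat(zhat)| = 1 - omega
# for zhat real and negative (order-4 case: k = 4*sqrt(3) - 7).
k = 4*np.sqrt(3.0) - 7
omega = 2/(1 + np.sqrt(1 - k))

def phi_hat(zhat):
    g = ((1 + zhat)/(1 - zhat))**2           # g-hat(zhat)
    r = np.sqrt(complex(g*g - g))
    roots = [(omega - 1)*(2*g - 1 + 2*r), (omega - 1)*(2*g - 1 - 2*r)]
    return max(abs(t) for t in roots)        # larger root modulus

for zhat in (-0.1, -1.0, -7.5):
    print(zhat, phi_hat(zhat), 1 - omega)    # all equal to 1 - omega
```

For ĝ ∈ (0, 1) the two roots form a complex conjugate pair whose product is (ω − 1)², so each has modulus |ω − 1|, as the assertion confirms.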
equivalent to the condition λ ≤ b. Subject to this inequality, |k| is a minimum when λ = b and this seems to give the best general result. If λ > b a larger value of c is needed to ensure that the eigenvalues of hJ are exterior to the corresponding curve in the ẑ plane. Consider the method of order 4 and choose λ = √3/6. Then

    k = 4√3 − 7,    ω = 2/(√6 − √2 + 1),

and the spectral radius satisfies

    ρ[M(ω, z)] ≤ 1 − ω ≈ 0.0173,    Re z ≤ 0.
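These constants follow directly from the formulas above, as the short check below (our own verification) confirms:

```python
import numpy as np

# Order-4 Gauss method, lam = b = sqrt(3)/6: check the quoted constants
# k = 4*sqrt(3) - 7 and omega = 2/(sqrt(6) - sqrt(2) + 1).
a, b = 0.25, np.sqrt(3.0)/6
lam = b
k = (a*lam - b*b)**2/(lam*lam*(a*a - b*b))
omega = 2/(1 + np.sqrt(1 - k))
print(k, 4*np.sqrt(3.0) - 7)                     # both ~ -0.0718
print(omega, 2/(np.sqrt(6.0) - np.sqrt(2.0) + 1))
print(1 - omega)                                 # ~ 0.0173
```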
Although this gives a satisfactory scheme for general use, better results can be obtained for special problems. Suppose that the maximum real part of the eigenvalues of hJ is bounded by hγ = −2. Then (4.3) gives λ = (1 + √7)/12 and

    k = (20√7 − 53)/3 ≈ −0.0283,    1 − ω ≈ 0.00698.

In this case z = 2(√7 ẑ − 1). Now consider a change of step length. Then λ may be chosen so that I − hλJ is unaltered, but the resulting computational gain must be balanced against a decrease in the rate of convergence. The scope of this device is limited to cases where γ < 0.

5. Methods with s > 2

The previous results may be applied to other methods. When s = 3 the method of order 2s has the matrix of coefficients

    A = [ 5/36              2/9 − √15/15     5/36 − √15/30
          5/36 + √15/24     2/9              5/36 − √15/24
          5/36 + √15/30     2/9 + √15/15     5/36           ]
and there is a matrix S such that

    S⁻¹AS = Λ = Λ₁ ⊕ Λ₂,    Λ₁ = [ a₁         a₁ − b₁
                                    a₁ + b₁    a₁      ],    Λ₂ = [a₂].

The elements of Λ may be obtained from

    det[θI − Λ] = det[θI − A] = θ³ − θ²/2 + θ/10 − 1/120,

and numerical evaluation gives a₁ ≈ 0.142342788, b₁ ≈ 0.196731007, a₂ ≈ 0.215314423. The matrix S may be obtained by noting that the columns are a linearly independent set of eigenvectors of the matrix [a₁I − A]². A numerical calculation gives

    S ≈ [ 0.0455241821    0.0441943589    0.0721518521
          0.140048242     0.139620426     0.118832579
          1.0             0.244595668     1.0          ].
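The block parameters can be recovered from the characteristic polynomial alone, since b₁² is the determinant of Λ₁ and hence the squared modulus of the complex eigenvalue pair. A short check (our own, illustrative):

```python
import numpy as np

# s = 3: recover a1, b1, a2 from the characteristic polynomial
# theta^3 - theta^2/2 + theta/10 - 1/120 of the Gauss method of order 6.
roots = np.roots([1.0, -0.5, 0.1, -1.0/120])
a2 = roots[np.argmin(np.abs(roots.imag))].real   # the real eigenvalue
pair = roots[np.abs(roots.imag) > 1e-12]         # complex conjugate pair
a1 = pair[0].real
b1 = abs(pair[0])                                # b1^2 = det Lam_1
print(a1, b1, a2)   # ~0.142342788 0.196731007 0.215314423
```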
Now suppose that L = L₁ ⊕ L₂ and B = B₁ ⊕ B₂ so that the results of the previous section may be applied using (3.5). For given λ

    k₁ = (a₁λ − b₁²)²/(λ²(a₁² − b₁²)),    α₁ = (a₁λ − b₁²)/(a₁ − λ),    ẑ₁ = (λ − α₁)/(λ + α₁) + (2α₁λ/(λ + α₁)) z,

and successive over-relaxation may be used. On the other hand, since L₂ = [0] and B₂ = [β₂],

    M̂₂(z) = 1 − β₂(1 − a₂z)/(1 − λz),

and successive over-relaxation is not appropriate. With β₂ = (λ + α₂)/(a₂ + α₂) this gives

    ρ[M̂₂(z)] = |k₂| |(1 + ẑ₂)/(1 − ẑ₂)|,    k₂ = α₂(a₂ − λ)/(λ(a₂ + α₂)).

Both λ and α₂ ≠ −a₂ may be chosen arbitrarily but, to obtain a scheme suitable for general use, we choose λ = b₁ and it is natural to choose α₂ = α₁ so that ẑ₁ = ẑ₂ = ẑ. Then, for Re z ≤ 0,

    ρ[M̂₁(z)] ≤ |k₁| ≈ 0.160,    ρ[M̂₂(z)] ≤ |k₂|,
and the first bound is unsatisfactory. However, successive over-relaxation applied to the first block gives 1 − ω ≈ 0.0371745516. Since M(z) = M₁(ω, z) ⊕ M̂₂(z) it follows that ρ[M(z)] ≤ (1 − ω)c in the region in the z-plane corresponding to the exterior of the region in the ẑ-plane which is the union of the region |φ̂(ẑ)| > (1 − ω)c and the disc

    |k₂| |(1 + ẑ)/(1 − ẑ)| ≥ (1 − ω)c.

If c > 1.48 it may be shown that this disc lies in the region |φ̂(ẑ)| > (1 − ω)c. For special problems, better results are obtained by increasing λ. Indeed, the case λ = a₂ gives M̂₂(z) = 0 and

    ρ[M₁(ω, z)] ≤ (1 − ω)c,    1 − ω ≈ 0.0182834655,

where z = −2.21 + 6.85ẑ. Next, consider the case s = 4. The method of order 2s has a matrix of coefficients A = {a_{ij}} obtained by solving the equations

    Σ_{j=1}^{s} a_{ij} c_j^{r-1} = c_i^r / r,    r = 1, 2, ..., s,    i = 1, 2, ..., s,

where c₁, ..., c_s are the zeros of P_s(2x − 1), the transformed Legendre polynomial of degree s. The elements of the transformed matrix

    Λ = Λ₁ ⊕ Λ₂ = [ a₁         a₁ − b₁    0          0
                    a₁ + b₁    a₁         0          0
                    0          0          a₂         a₂ − b₂
                    0          0          a₂ + b₂    a₂      ],

    a₁ ≈ 0.0915662403,    b₁ ≈ 0.147520224,    a₂ ≈ 0.158433760,    b₂ ≈ 0.165384116,

may be computed from the equation

    det(θI − Λ) = det(θI − A) = θ⁴ − θ³/2 + 3θ²/28 − θ/84 + 1/1680.

A further numerical calculation gives

    S ≈ [ 0.0637716668    0.0276139987    0.784055901    1.0
          0.0544349072    0.161524607     0.290017081    1.16467461
          0.231157907     0.0836065716    0.859410259    1.0
          0.0133958955    0.0406820191    0.266775537    1.36433680 ],

where the columns are eigenvectors of [a₁I − A]² and [a₂I − A]².
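As in the s = 3 case, the block parameters and the relaxation constants used below can be checked from the characteristic polynomial alone (our own verification sketch):

```python
import numpy as np

# s = 4: recover a_i, b_i from the characteristic polynomial
# theta^4 - theta^3/2 + 3*theta^2/28 - theta/84 + 1/1680, then check
# the constants quoted for the general-purpose choice lam = b1.
roots = np.roots([1.0, -0.5, 3.0/28, -1.0/84, 1.0/1680])
pairs = sorted({(round(r.real, 9), round(abs(r), 9)) for r in roots})
(a1, b1), (a2, b2) = pairs        # b_i = |eigenvalue|, since det Lam_i = b_i^2
lam = b1

def k(ai, bi):
    return (ai*lam - bi*bi)**2/(lam*lam*(ai*ai - bi*bi))

def om(kk):
    return 2/(1 + np.sqrt(1 - kk))

print(a1, b1, a2, b2)             # ~0.09157 0.14752 0.15843 0.16538
print(1 - om(k(a1, b1)))          # ~0.0525234
print(1 - om(k(a2, b2)))          # ~0.0699303
```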
The previous results may now be applied using (3.5). For given λ

    k_i = (a_iλ − b_i²)²/(λ²(a_i² − b_i²)),    α_i = (a_iλ − b_i²)/(a_i − λ),    ẑ_i = (λ − α_i)/(λ + α_i) + (2α_iλ/(λ + α_i)) z,    i = 1, 2,

and successive over-relaxation may be used for both blocks. Again it is natural to choose α₁ = α₂ so that ẑ₁ = ẑ₂ = ẑ. This gives a quadratic equation for λ with one positive root λ ≈ 0.167712760. For this value of λ

    k₁ ≈ −0.109036744,    1 − ω₁ ≈ 0.0258671885,

but k₂ ≈ −0.00962 so that it is not necessary to apply the relaxation technique to the second block. In this case ρ[M(z)] ≤ (1 − ω₁)c in the region of the z-plane corresponding to the exterior of that region in the ẑ-plane which is the union of the region |φ̂(ẑ)| > (1 − ω₁)c and the disc

    |k₂| |(1 + ẑ)/(1 − ẑ)| ≥ (1 − ω₁)c,

where z = −2.96 + 8.93ẑ. It happens that, for all c, this union is merely the region |φ̂(ẑ)| > (1 − ω₁)c. This is not satisfactory for problems where hJ has eigenvalues close to the origin. Since b₁ < b₂ a general purpose scheme is obtained by choosing λ = b₁ and in this case it is necessary to apply successive over-relaxation to both blocks. For this value of λ

    k₁ ≈ −0.234032419,    k₂ ≈ −0.323366104,    1 − ω₁ ≈ 0.0525234200,    1 − ω₂ ≈ 0.0699303145,

and ρ[M(z)] ≤ (1 − ω₂)c in the region of the z-plane corresponding to the exterior of the region of the ẑ-plane formed by the union of the regions |φ̂₁(ẑ₁)| > (1 − ω₂)c and |φ̂₂(ẑ₂)| > (1 − ω₂)c, where z = 6.78ẑ₁ = 4.76 + 2.02ẑ₂, and this union is the region |φ̂₁(ẑ₁)| > (1 − ω₂)c.

The results obtained here may be applied to many other methods. However, there are important cases where the results are not applicable. In particular, the methods
of order 2s have error expansions in even powers of h, and extrapolation may be used by applying such a method with a step length h, followed by a double application with a step length h/2. The composite method has a matrix A with nonlinear elementary divisors, and it is not possible to obtain a transformed matrix S⁻¹AS which is in the appropriate block diagonal form. Despite this, it seems likely that a suitable relaxation procedure can be devised.

6. Numerical Results

So as to evaluate the efficiency of the iteration schemes proposed here, a number of numerical experiments were carried out and results for two initial value problems, both from Gear (1968), are reported here. The two problems differ in that the Jacobian of one of them has only real eigenvalues whereas the other has a pair of complex conjugate eigenvalues, but the performance observed is very similar and typical of other tests not specifically reported. By problem 1 we mean the system

    x₁' = −0.013x₁ − 1000x₁x₃,
    x₂' = −2500x₂x₃,
    x₃' = −0.013x₁ − 1000x₁x₃ − 2500x₂x₃,
    x₁(0) = 1,    x₂(0) = 1,    x₃(0) = 0,

for which the eigenvalues of the Jacobian, evaluated at the initial point, are 0, −0.0093 and −3500. By problem 2 we mean the system

    x₁' = −55x₁ + 65x₂ − x₃,
    x₂' = 0.0785(x₁ − x₂),
    x₃' = 0.1x₁,
    x₃(0) = 0,

where the eigenvalues at the initial point are, in this case, 0.0062 ± 0.01i and −55. For both problems a single step was carried out using different iteration schemes and with the modified Newton method for comparison. In each case the Jacobian evaluated at t = 0 was used and, in the results presented here, h was set at 0.1 for problem 1 and at h = 1 for problem 2. The initial iterate Y⁰ for each scheme tested, including the Newton scheme, was set at x ⊕ x ⊕ ... ⊕ x where x denotes the solution value at the initial point. In practice, better initial approximations would be available using extrapolations from previous steps. By method 1 we mean the method with s = 2, p = 4 implemented according to the scheme of Section 4 with λ = √3/6 and ω = 2/(√6 − √2 + 1). Actual values of the three matrices characterizing the scheme are S = I and
    L = [ 0     0
          2ω    0 ],    B = [ ω             ω(7 − 4√3)
                              −ω(1 + ω)     ω − ω²(7 − 4√3) ].
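As an illustrative reimplementation (a sketch of our own, not the authors' code), method 1 applied to problem 1 can be written as follows; since L ⊗ I is strictly lower triangular, each iteration needs only two solves with the single factored matrix I − hλJ:

```python
import numpy as np

# "Method 1" applied to problem 1: s = 2, S = I, lam = sqrt(3)/6,
# omega = 2/(sqrt(6) - sqrt(2) + 1), h = 0.1, Jacobian frozen at t = 0.
s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3/6], [0.25 + s3/6, 0.25]])
lam = s3/6
om = 2/(np.sqrt(6.0) - np.sqrt(2.0) + 1)
c = 7 - 4*s3
Lmat = np.array([[0.0, 0.0], [2*om, 0.0]])
Bmat = np.array([[om, om*c], [-om*(1 + om), om - om*om*c]])

def f(x):
    return np.array([-0.013*x[0] - 1000*x[0]*x[2],
                     -2500*x[1]*x[2],
                     -0.013*x[0] - 1000*x[0]*x[2] - 2500*x[1]*x[2]])

x0 = np.array([1.0, 1.0, 0.0])
h = 0.1
J = np.array([[-0.013, 0.0, -1000.0],       # Jacobian of f at x0
              [0.0, 0.0, -2500.0],
              [-0.013, 0.0, -3500.0]])
W = np.eye(3) - h*lam*J                      # factored once per step

errs = []
Y = [x0.copy(), x0.copy()]                   # Y^0 = x0 (+) x0
for m in range(8):
    D = [x0 - Y[i] + h*(A[i, 0]*f(Y[0]) + A[i, 1]*f(Y[1])) for i in range(2)]
    E0 = np.linalg.solve(W, Bmat[0, 0]*D[0] + Bmat[0, 1]*D[1])
    E1 = np.linalg.solve(W, Bmat[1, 0]*D[0] + Bmat[1, 1]*D[1] + Lmat[1, 0]*E0)
    Y = [Y[0] + E0, Y[1] + E1]
    errs.append(max(np.max(np.abs(E0)), np.max(np.abs(E1))))

print(errs)   # decreasing; comparable to the Method 1 column of Table 1
```

The quantities collected in errs play the role of the e_m tabulated in Table 1; at the fixed point the stage equations (2.1) are satisfied because B is nonsingular.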
By method 2 we mean the method with s = 3, p = 6 implemented according to the scheme of Section 5 with λ = b₁ and 1 − ω = 0.0371745516. Values of S, a₁, b₁ and a₂ have been given already. The remaining matrices are
a2 have been given already. The remaining matrices are
CO
— CO
0 0
0
'0 0
0 0" B= — co(l+co) 0 co+co 0 0
2co 0 0
By method 3 we mean the method with s = 4, p = 8 again implemented according to a scheme discussed in Section 5 with λ = b₁ and relaxation parameters given by 1 − ω₁ = 0.0525234200, 1 − ω₂ = 0.0699303145. Values of S and a₁, b₁, a₂ and b₂ have been given already. The matrices L = L₁ ⊕ L₂ and B = B₁ ⊕ B₂ are given, block by block, by the formulas of Section 4 with the relaxation parameter ω_i applied to block i:

    L_i = [ 0          0        B_i = [ ω_i                           −ω_i q_i
            l_i ω_i    0 ],             −ω_i p_i(λ + ω_iα_i)/λ        ω_i(1 + ω_i p_i q_i α_i/λ) ],

where p_i = (a_i − λ)/(a_i − b_i), q_i = (a_i − λ)/(a_i + b_i), α_i = (a_iλ − b_i²)/(a_i − λ) and l_i = (λ² − 2a_iλ + b_i²)/(λ(b_i − a_i)).
For each method and problem, we have tabulated the quantities e_m, m = 1, 2, 3, ..., where e_m = ||Y^m − Y^{m-1}|| and ||·|| denotes the uniform norm. For comparison we give, for each method and problem, similar information for implementations using the modified Newton method, again with the Jacobian evaluated at the initial point. The various results are given in Table 1 (for problem 1) and in Table 2 (for problem 2).

TABLE 1
Results for problem 1

         Method 1       Newton         Method 2       Newton         Method 3       Newton
    e1   0.000767885    0.000733143    0.000645761    0.000824623    0.001035802    0.000864811
    e2   0.000050045    0.000000154    0.000183190    0.000000194    0.000344017    0.000000214
    e3   0.000001643    0.000000000    0.000004649    0.000000000    0.000022489    0.000000000
    e4   0.000000040                   0.000000525                   0.000002176
    e5   0.000000001                   0.000000012                   0.000000157
    e6   0.000000000                   0.000000001                   0.000000006
TABLE 2
Results for problem 2

         Method 1       Newton         Method 2       Newton         Method 3       Newton
    e1   0.212526132    0.202439473    0.207289459    0.196464340    0.513753077    0.211935632
    e2   0.014311557    0.000334034    0.023499338    0.000354808    0.352325163    0.000421970
    e3   0.000326473    0.000000614    0.001914042    0.000000719    0.053901025    0.000000886
    e4   0.000006091                   0.000159978                   0.003919662
    e5   0.000000155                   0.000005939                   0.000096881
    e6   0.000000004                   0.000000178                   0.000009664
Even though the modified Newton methods converge much more quickly, the new methods have an advantage, for high dimensional problems, in terms of an operation count: a factor of s³ for the LU factorizations and a factor of s for the back substitutions. This gives a strong advantage to the new methods.

This research was made possible by a grant from the Science and Engineering Research Council. The authors gratefully acknowledge this support and the helpful comments of the referees.
REFERENCES

BICKART, T. A. 1977 An efficient solution process for implicit Runge-Kutta methods. SIAM J. num. Analysis 14, 1022-1027.
BUTCHER, J. C. 1976 On the implementation of implicit Runge-Kutta methods. BIT 16, 237-240.
BUTCHER, J. C. 1980 Some implementation schemes for implicit Runge-Kutta methods. Proceedings of the Dundee Conference on Numerical Analysis, 1979. Lecture Notes in Mathematics 773, 12-24.
CASH, J. R. 1977 On a class of implicit Runge-Kutta procedures. J. Inst. Maths Applics 19, 455-470.
CHIPMAN, F. H. 1973 The implementation of Runge-Kutta implicit processes. BIT 13, 391-393.
COLLINGS, A. G. & TEE, G. J. 1977 An analysis of Euler and implicit Runge-Kutta numerical integration schemes for structural dynamic problems. Proceedings of the Sixth Australasian Conference on the Mechanics of Structures and Materials, Vol. 1, pp. 147-154.
ENRIGHT, W. H. 1976 Improving the efficiency of matrix operations in the numerical solution of ODE's. Technical Report 98, Computer Science Dept, University of Toronto.
FRANK, R. & UEBERHUBER, C. W. 1977 Iterated defect correction for the efficient solution of stiff systems of ordinary differential equations. BIT 17, 146-159.
GEAR, C. W. 1968 The automatic integration of stiff ordinary differential equations. Proc. IFIP Congress, pp. 187-193.
VARAH, J. M. 1979 On the efficient implementation of implicit Runge-Kutta methods. Maths Comput. 33, 557-561.