
ISSN 0965-5425, Computational Mathematics and Mathematical Physics, 2012, Vol. 52, No. 10, pp. 1373–1383. © Pleiades Publishing, Ltd., 2012.

Original Russian Text © L.M. Skvortsov, 2012, published in Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 2012, Vol. 52, No. 10, pp. 1801–1811.

Runge-Kutta Collocation Methods for Differential-Algebraic Equations of Indices 2 and 3

L. M. Skvortsov

Bauman State Technical University, Vtoraya Baumanskaya ul. 5, Moscow, 105005 Russia
e-mail: lm_skvo@rambler.ru

Received September 13, 2011

Abstract: Stiffly accurate Runge-Kutta collocation methods with an explicit first stage are examined. The parameters of these methods are chosen so as to minimize the errors in the solutions to differential-algebraic equations of indices 2 and 3. This construction results in methods for solving such equations that are superior to the available Runge-Kutta methods.

DOI: 10.1134/S0965542512100119

Keywords: differential-algebraic equations, differential index, implicit Runge-Kutta methods, order reduction phenomenon.

1. INTRODUCTION
Consider a system of differential-algebraic equations (DAEs) given in the semi-implicit form

    y' = f(y, z),    y(t_0) = y_0,                                   (1.1a)
    0 = g(y, z),     z(t_0) = z_0.                                   (1.1b)

We assume that the initial values are consistent (see [1]). A nonautonomous system can always be brought to form (1.1) by adding the equation t' = 1. Assume that the functions f and g are as smooth as required and that the vector z has the same dimension as the vector function g. According to the definition in [1], the differential index of system (1.1) is the minimal number of analytical differentiations needed to transform these equations into an explicit system of differential equations. Using the notation g_y = ∂g(y, z)/∂y and g_z = ∂g(y, z)/∂z and differentiating (1.1b), we obtain 0 = g_y f(y, z) + g_z z'. If the matrix g_z is invertible in a neighborhood of the solution, then z' = −g_z^{−1} g_y f(y, z) and system (1.1) is of differential index 1. Systems of higher indices (2 and above) have a singular matrix g_z and are the most difficult to solve numerically. In this case, the algebraic subsystem (1.1b) cannot be solved with respect to z; consequently, the algebraic and differential equations are solved simultaneously by an implicit method. Many dynamic objects are described by equations of higher indices; most commonly, these are DAEs of indices 2 and 3. For instance, mechanical systems with constraints are described by equations of index 3 (see [1, 2]).
In a system of index 1, the vector z can be eliminated by solving (analytically or numerically) Eq. (1.1b) for z and substituting the result into (1.1a). Systems of higher indices have algebraic variables that cannot be eliminated in this manner. If the matrix g_z is singular and has constant rank, then, by eliminating certain algebraic variables, Eqs. (1.1) can be reduced to the form

    y' = f(y, z),    y(t_0) = y_0,                                   (1.2a)
    0 = g(y),        z(t_0) = z_0                                    (1.2b)

(the details can be found in [1]). Differentiation of (1.2b) yields

    0 = g_y f(y, z).                                                 (1.3)

If g_y f_z is an invertible matrix, then Eqs. (1.2a) and (1.3) constitute a system of index 1, and an expression for z' can be obtained by differentiating (1.3). Thus, if the matrix g_y f_z is invertible in a neighborhood of the solution, then (1.2) is a system of index 2. In this case, the initial values are consistent if they satisfy Eqs. (1.2b) and (1.3).


Equations of index 3 are usually represented in the form

    y' = f(y, z),       y(t_0) = y_0,                                (1.4a)
    z' = k(y, z, u),    z(t_0) = z_0,                                (1.4b)
    0 = g(y),           u(t_0) = u_0.                                (1.4c)

It is assumed that the matrix g_y f_z k_u is invertible in a neighborhood of the solution. For this system, the initial values are consistent if they satisfy Eq. (1.4c) and the equations obtained by differentiating (1.4c) once and twice. Representations (1.2) and (1.4) make it possible to separate the variables according to their different convergence rates in the numerical solution: there are variables of index 1 (the y-component), variables of index 2 (the z-component), and variables of index 3 (the u-component).
If problem (1.1) is solved by an implicit Runge-Kutta method, then each step is described by the formulas

    y_{n+1} = y_n + h Σ_{i=1}^{s} b_i Y_i',    Y_i = y_n + h Σ_{j=1}^{s} a_{ij} Y_j',    Y_i' = f(Y_i, Z_i),
    z_{n+1} = z_n + h Σ_{i=1}^{s} b_i Z_i',    Z_i = z_n + h Σ_{j=1}^{s} a_{ij} Z_j',    0 = g(Y_i, Z_i),    i = 1, 2, ..., s,

which specify a system of algebraic equations. The coefficients a_{ij}, b_i (i, j = 1, 2, ..., s) are conveniently exhibited in the Butcher table

    c_1 | a_{11} ... a_{1s}
     :  |    :          :           c | A
    c_s | a_{s1} ... a_{ss}    =    --+----
        | b_1    ... b_s              | b^T

Here, c = Ae, where e = [1, ..., 1]^T.
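As an illustration of these formulas, the sketch below advances one step of an implicit Runge-Kutta method applied to a semi-explicit DAE of form (1.1) with invertible g_z (index 1), solving the stage system with a general-purpose nonlinear solver. The function name irk_dae_step and the use of scipy.optimize.fsolve are illustrative choices made here, not the author's implementation; a production code would use a simplified Newton iteration with the Jacobians of f and g.

```python
# Minimal sketch (not the author's code): one step of an implicit Runge-Kutta
# method for the semi-explicit DAE  y' = f(y, z), 0 = g(y, z)  with invertible g_z.
# The unknowns are the stage derivatives Y_i' and the stage values Z_i.
import numpy as np
from scipy.optimize import fsolve

def irk_dae_step(f, g, y_n, z_n, h, A, b):
    s, ny, nz = len(b), len(y_n), len(z_n)

    def residual(w):
        Yp = w[:s * ny].reshape(s, ny)                 # stage derivatives Y_i'
        Z = w[s * ny:].reshape(s, nz)                  # stage values Z_i
        Y = y_n + h * A @ Yp                           # Y_i = y_n + h * sum_j a_ij Y_j'
        r_dif = Yp - np.array([f(Y[i], Z[i]) for i in range(s)])
        r_alg = np.array([g(Y[i], Z[i]) for i in range(s)])
        return np.concatenate([r_dif.ravel(), r_alg.ravel()])

    w0 = np.concatenate([np.tile(f(y_n, z_n), s), np.tile(z_n, s)])
    w = fsolve(residual, w0)
    Yp = w[:s * ny].reshape(s, ny)
    Z = w[s * ny:].reshape(s, nz)
    # For a stiffly accurate method (last row of A equals b^T) the new step values
    # coincide with the last stage: y_{n+1} = y_n + h b^T Y' = Y_s and z_{n+1} = Z_s.
    return y_n + h * (b @ Yp), Z[-1]
```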

It is quite possible that, when DAEs of higher indices are solved, the realistic order of a method is significantly lower than its classical order; this is the order reduction phenomenon in its most pronounced form. The basic results on the convergence of Runge-Kutta methods as applied to DAEs can be found in [1, 3–5]. Often, the convergence of the y-component can be improved by using stiffly accurate methods; however, for the other components, the order of convergence remains low. Usually, for the z-component, the order does not exceed the stage order q, while, for the u-component, the order is at most q − 1. It follows that the diagonally implicit Runge-Kutta methods (DIRK methods), which are the simplest to implement, are scarcely appropriate for solving DAEs of higher indices because, for these methods, q ≤ 2. Within this class, the singly diagonally implicit (SDIRK) methods given in [1, 6, 7] are totally unsuitable for solving equations of index 3, while the ESDIRK methods (see [8–12]), which have an explicit first stage, ensure only linear convergence of the u-component. For DAEs of index 3, the author was able to attain quadratic convergence for ESDIRK methods by introducing additional conditions (see [13, 14]). However, these methods are efficient only if the accuracy requirements are low. Moreover, different convergence rates of the different components make an automatic choice of the integration step more difficult.
Thus, to efficiently solve DAEs of higher indices, one should use stiffly accurate methods of sufficiently high stage order. The conditions for stage order q have the form

    A c^{k−1} = c^k / k,    b^T c^{k−1} = 1/k,    k = 1, 2, ..., q.    (1.5)

(Hereinafter, raising a vector to a power means componentwise raising to that power.) For a prescribed number of stages s, the maximal stage order q = s is attained by collocation methods, whose coefficients are uniquely determined from relations (1.5) by the node vector c.
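To make conditions (1.5) concrete, the following sketch checks them numerically for the two-stage RADAU IIA tableau, a stiffly accurate collocation method with c = (1/3, 1); the helper name stage_order is introduced here for illustration only.

```python
# Sketch: numerical check of the stage order conditions (1.5),
#   A c^{k-1} = c^k / k  and  b^T c^{k-1} = 1/k,  k = 1, ..., q,
# applied to the two-stage Radau IIA method.
import numpy as np

def stage_order(A, b, c, tol=1e-12):
    """Largest q such that conditions (1.5) hold for k = 1, ..., q."""
    q = 0
    for k in range(1, len(c) + 1):
        ck1 = c ** (k - 1)                       # componentwise power
        if (np.allclose(A @ ck1, c ** k / k, atol=tol)
                and np.isclose(b @ ck1, 1.0 / k, atol=tol)):
            q = k
        else:
            break
    return q

A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])
c = np.array([1/3, 1.0])
print(stage_order(A, b, c))   # prints 2: a collocation method attains q = s
```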
For stiffly accurate collocation methods with an explicit first stage, the first row of the matrix A contains only zeros, while the last row equals b^T; this implies that c_1 = 0 and c_s = 1. For these methods, the first stage does not require any calculations because it coincides with the last stage of the preceding step. Among methods of this type, the Lobatto IIIA methods have the maximal classical order 2s − 2. However, their stability function R(z) is such that |R(∞)| = 1; consequently, the Lobatto IIIA methods are not well suited for solving stiff and differential-algebraic equations.


Stiffly accurate collocation methods with an explicit first stage, called SAFERK methods (Stiffly Accurate, First Explicit, Runge-Kutta), were examined in [15, 16]. For a prescribed number of stages s, they have the free parameter c_{s−1}, while the other nodes are chosen so as to ensure the maximal order. For c*_{s−1} < c_{s−1} < 1, we have 0 < R(∞) < 1; here, c*_{s−1} is the value of c_{s−1} for the Lobatto IIIA method. For a number of stiff problems, it was shown in [15] that, with the same number of implicit stages, these methods outperform the RADAU IIA and Lobatto IIIA methods in terms of accuracy.

We have studied SAFERK methods with the aim of finding the value of c_{s−1} optimal for DAEs of higher indices. It turned out that, for the best of these methods, c_{s−1} is close to 1. (In [15], values of c_{s−1} close to c*_{s−1} were used for stiff problems.) A more detailed analysis led to two groups of collocation methods that are best for solving DAEs of indices 2 and 3. For a given s, each group constitutes a one-parameter family whose free parameter we denote by γ.
For the methods in the first group, some of the nodes are

    c_1 = 0,    c_{s−1} = 1 − γ,    c_s = 1,                          (1.6)

while the other nodes are chosen so as to ensure order 2s − 3. We recommend these methods for solving DAEs of index 2. For the methods in the second group, we have

    c_1 = 0,    c_{s−2} = 1 − …,    c_{s−1} = 1 − … + …,    c_s = 1,   (1.7)

while the other nodes are chosen so as to ensure order 2s − 4. We recommend these methods for DAEs of index 3. Our numerical tests showed that, for DAEs of indices 2 and 3, such methods are superior to the available Runge-Kutta methods if γ is small.

Note that, for small γ, the positions of nodes (1.6) make it possible to obtain a good estimate of the second derivative of the solution at the last stage, while the positions of nodes (1.7) also permit an estimate of the third derivative. In the limit as γ → 0, we obtain a multiple node, which corresponds to the use of higher derivatives. (Collocation methods with higher derivatives were examined in [17–20].) In what follows, we show that these methods, as applied to equations of higher indices, amplify the property of stiff accuracy. In practice, this implies that no order reduction occurs when DAEs of indices 2 and 3 are solved.

2. MODEL EQUATIONS
The order reduction phenomenon observed in solving stiff problems was explained using the model equation proposed in [21] (see also [1]). The minimization of the solution error performed for this and other simple equations made it possible to design explicit and diagonally implicit Runge-Kutta methods having improved accuracy for stiff problems (see [10–12, 22]). For DAEs of higher indices, model equations permitting an analysis of the solution error were proposed in [13, 14]. The author was able to derive expressions for the global error for some model equations. The minimization of this error made it possible to increase the realistic order of the DIRK methods as applied to DAEs of indices 2 and 3. Thus, simple model equations turned out to be a convenient tool for the construction of Runge-Kutta methods of improved accuracy.
When DAEs of higher indices are solved numerically, the algebraic variables (namely, the z-component in Eqs. (1.2) and the u-component in Eqs. (1.4)) are the most inaccurate. Let us examine the accuracy of these components for the system

    0 = y − φ(t),    y_0 = φ(t_0),                                   (2.1a)
    y' = z,          z_0 = φ'(t_0),                                  (2.1b)
    z' = u,          u_0 = φ''(t_0).                                 (2.1c)

These equations describe the successive differentiation of the function φ(t). Equation (2.1a) is of index 1 and has the sole algebraic variable y. By adding Eq. (2.1b), we obtain a system of index 2 for which y becomes a differential variable, while z is an algebraic variable. The addition of Eq. (2.1c) yields a system of index 3 with y and z as differential variables and u as an algebraic variable. The numerical solution of these equations by an implicit Runge-Kutta method leads to the formulas

    y_{n+1} = y_n + h b^T Z,    z_{n+1} = z_n + h b^T U,    u_{n+1} = u_n + h b^T U',    (2.2)


where the vectors Z, U, and U' are found from the relations

    Y = e y_n + h A Z = Φ,    Φ = [φ(t_n + c_1 h), ..., φ(t_n + c_s h)]^T,    e = [1, ..., 1]^T,
    Z = e z_n + h A U,    U = e u_n + h A U'.

If A is an invertible matrix, then

    Z = h^{−1} A^{−1} (Φ − e y_n),
    U = h^{−2} A^{−2} (Φ − e y_n) − h^{−1} A^{−1} e z_n,                                   (2.3)
    U' = h^{−3} A^{−3} (Φ − e y_n) − h^{−2} A^{−2} e z_n − h^{−1} A^{−1} e u_n.
Substituting (2.3) into (2.2), we obtain expressions for the numerical solution at the current step. The global errors are governed by the difference equations

    φ_{n+1} − y_{n+1} = a_0 (φ_n − y_n) + Δy_{n+1},
    φ'_{n+1} − z_{n+1} = a_0 (φ'_n − z_n) + h^{−1} a_1 (φ_n − y_n) + Δz_{n+1},                          (2.4)
    φ''_{n+1} − u_{n+1} = a_0 (φ''_n − u_n) + h^{−1} a_1 (φ'_n − z_n) + h^{−2} a_2 (φ_n − y_n) + Δu_{n+1},

where

    a_0 = R(∞) = 1 − b^T A^{−1} e,    a_1 = lim_{z→∞} z (R(z) − a_0) = −b^T A^{−2} e,
                                                                                                         (2.5)
    a_2 = lim_{z→∞} z [z (R(z) − a_0) − a_1] = −b^T A^{−3} e.
The local errors have the form

    Δy_{n+1} = (φ_{n+1} − φ_n) − b^T A^{−1} (Φ − e φ_n),
    Δz_{n+1} = (φ'_{n+1} − φ'_n) − h^{−1} b^T A^{−2} (Φ − e φ_n − c h φ'_n),
    Δu_{n+1} = (φ''_{n+1} − φ''_n) − h^{−2} b^T A^{−3} (Φ − e φ_n − c h φ'_n − A^2 e h^2 φ''_n).

For methods of stage order q ≥ 2, it holds that A^2 e = c^2/2; consequently,

    Δu_{n+1} = (φ''_{n+1} − φ''_n) − h^{−2} b^T A^{−3} (Φ − e φ_n − c h φ'_n − c^2 h^2 φ''_n / 2).
Expanding φ(t) in a Taylor series, we obtain

    Δy_{n+1} = Σ_{i=q+1}^{∞} e_{yi} φ_n^{(i)} h^i / i!,    Δz_{n+1} = Σ_{i=q+1}^{∞} e_{zi} φ_n^{(i)} h^{i−1} / (i−1)!,
                                                                                                         (2.6)
    Δu_{n+1} = Σ_{i=q+1}^{∞} e_{ui} φ_n^{(i)} h^{i−2} / (i−2)!,    φ_n^{(i)} = d^i φ(t)/dt^i |_{t=t_n},

where the coefficients e_{yi}, e_{zi}, and e_{ui} depend only on the coefficients of the method and are given by the formulas

    e_{yi} = 1 − b^T A^{−1} c^i,    e_{zi} = 1 − (1/i) b^T A^{−2} c^i,    e_{ui} = 1 − (1/(i(i−1))) b^T A^{−3} c^i.    (2.7)

It is assumed that q ≥ 2 in the expression for Δu_{n+1}.
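Under the assumption that A is invertible, the quantities (2.5) and the error coefficients (2.7) are easy to evaluate numerically for a given tableau. A minimal sketch (the function name is ours), again using the two-stage RADAU IIA method:

```python
# Sketch: a_0, a_1, a_2 of (2.5) and e_yi, e_zi, e_ui of (2.7) for a method with
# an invertible coefficient matrix A; explicit inverses are avoided via solve().
import numpy as np

def model_error_coefficients(A, b, c, i_max=6):
    e = np.ones_like(c)
    A1e = np.linalg.solve(A, e)                        # A^{-1} e
    A2e = np.linalg.solve(A, A1e)                      # A^{-2} e
    A3e = np.linalg.solve(A, A2e)                      # A^{-3} e
    a0, a1, a2 = 1.0 - b @ A1e, -b @ A2e, -b @ A3e     # a_0 = R(inf)
    coeffs = {}
    for i in range(1, i_max + 1):
        A1c = np.linalg.solve(A, c ** i)
        A2c = np.linalg.solve(A, A1c)
        A3c = np.linalg.solve(A, A2c)
        coeffs[i] = (1.0 - b @ A1c,                                       # e_yi
                     1.0 - (b @ A2c) / i,                                 # e_zi
                     1.0 - (b @ A3c) / (i * (i - 1)) if i > 1 else None)  # e_ui
    return a0, a1, a2, coeffs

A = np.array([[5/12, -1/12], [3/4, 1/4]])
b = np.array([3/4, 1/4])
c = np.array([1/3, 1.0])
a0, a1, a2, coeffs = model_error_coefficients(A, b, c)
print(a0)          # 0.0: the two-stage Radau IIA method is L-stable
print(coeffs[3])   # first nonvanishing coefficients, i = q + 1 = 3
```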


Now, we consider methods with a Butcher table of the form

    0       | 0           0           ...  0
    c_2     | a_{21}      a_{22}      ...  a_{2s}
     :      |  :           :                :               0 | 0   0^T
    c_{s−1} | a_{s−1,1}   a_{s−1,2}   ...  a_{s−1,s}   =    c̄ | ā   Ā ,
    1       | b_1         b_2         ...  b_s
            | b_1         b_2         ...  b_s

where c̄ = [c_2, ..., c_{s−1}, 1]^T, ā = [a_{21}, ..., a_{s−1,1}, b_1]^T, Ā is the lower right (s − 1) × (s − 1) block of A (its last row is b̄^T = [b_2, ..., b_s]), and Ā is an invertible matrix. (These assumptions cover the ESDIRK, Lobatto IIIA, and SAFERK methods.) An analysis of the numerical solution of Eqs. (2.1) shows that, for these methods, formulas (2.4)–(2.7) remain valid if A, b, and c are replaced with Ā, b̄, and c̄, respectively, and a_0, a_1, and a_2 are calculated as follows:

    a_0 = R(∞) = 1 − b̄^T Ā^{−2} c̄,    a_1 = lim_{z→∞} z (R(z) − a_0) = −b̄^T Ā^{−3} c̄,
    a_2 = lim_{z→∞} z [z (R(z) − a_0) − a_1] = −b̄^T Ā^{−4} c̄.
Let us discuss the results of this analysis. It is evident from (2.6) that, in general, the local errors in solving Eqs. (2.1) admit the estimates

    Δy_{n+1} = O(h^{q+1}),    Δz_{n+1} = O(h^q),    Δu_{n+1} = O(h^{q−1}),

where q is the stage order. Relations (2.4) and (2.5) imply that, for |R(∞)| < 1, the global errors have the same orders as the corresponding local errors, whereas, for |R(∞)| = 1, the former have lower orders than the latter (except for the variable y when R(∞) = 1). Note that these error estimates are in complete agreement with the theoretical results obtained for equations of the more general forms (1.2) and (1.4) in [1, 3–5].

Equations (2.1) are a particular case of DAEs. Consequently, expressions (2.4) and (2.6) make it possible to derive conditions that, together with the classical order conditions, are necessary (though not necessarily sufficient) for obtaining the required accuracy when solving DAEs of a more general form. Denote by p_y, p_z, and p_u the orders of convergence of the corresponding components. Assume that |R(∞)| < 1, p_y > q + 1, p_z > q, and p_u > q − 1. Then the conditions

    e_{yi} = 0,    i = q + 1, ..., p_y − 1,
    e_{zj} = 0,    j = q + 1, ..., p_z,
    e_{uk} = 0,    k = q + 1, ..., p_u + 1,

are necessary for attaining the prescribed orders. Some other order conditions for DAEs of indices 2 and 3 derived from model equations are given in [13, 14].
Stiffly accurate methods ensure that the algebraic relations in a system of DAEs are satisfied exactly. For these methods, all the coefficients e_{yi} in (2.6) are zero. It follows that, if a nonstiff DAE of index 1 is solved, then the order of both the differential and the algebraic components is identical to the classical order of the method (see [1, Section VI.1]). When a stiffly accurate method is applied to a DAE of index 2 or 3, the order of the y-component can be higher than q + 1; however, the orders of the z- and u-components remain low (p_z = q and p_u = q − 1 if |R(∞)| < 1; see [4, Theorem 5.2; 5, Theorem 6.1]). Therefore, we set ourselves the task of constructing methods that guarantee an improved accuracy for these components.

3. METHODS FOR EQUATIONS OF INDEX 2


Consider s-stage Runge-Kutta collocation methods for which c_1 = 0 and c_s = 1. If the other nodes are known, then the matrix of coefficients can be found from Eqs. (1.5), where we set q = s:

        | 0        0            ...  0           |   | 1   0        ...  0            |^{−1}
        | c_2      c_2^2/2      ...  c_2^s/s     |   | 1   c_2      ...  c_2^{s−1}    |
    A = |  :        :                 :          | · |  :   :             :           |     .
        | c_{s−1}  c_{s−1}^2/2  ...  c_{s−1}^s/s |   | 1   c_{s−1}  ...  c_{s−1}^{s−1}|
        | 1        1/2          ...  1/s         |   | 1   1        ...  1            |
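This construction amounts to solving a pair of linear systems with the Vandermonde matrix of the nodes, as in the sketch below (the function name collocation_coefficients is ours).

```python
# Sketch: coefficients of an s-stage collocation method from its node vector c,
# obtained from the stage order conditions (1.5) with q = s:
#   A V = W,  V^T b = (1, 1/2, ..., 1/s)^T,
#   V = [c^0, c^1, ..., c^{s-1}],  W = [c^1/1, c^2/2, ..., c^s/s].
import numpy as np

def collocation_coefficients(c):
    c = np.asarray(c, dtype=float)
    s = len(c)
    V = np.vander(c, s, increasing=True)                    # columns c^0, ..., c^{s-1}
    W = np.column_stack([c ** k / k for k in range(1, s + 1)])
    A = np.linalg.solve(V.T, W.T).T                         # solves A V = W
    b = np.linalg.solve(V.T, 1.0 / np.arange(1, s + 1))     # b^T c^{k-1} = 1/k
    return A, b

# Nodes (3.1) with gamma = 0.02, one of the values used in Table 1.
gamma = 0.02
A, b = collocation_coefficients([0.0, 1.0 - gamma, 1.0])
print(np.allclose(A[-1], b))   # True: c_s = 1 gives a stiffly accurate method
```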


We take c_{s−1} = 1 − γ. The nodes c_2, ..., c_{s−2} are chosen so as to ensure the maximal order of the method (which, in the case under consideration, is 2s − 3). Then, for s = 4, the node c_2 is determined from the equality

        | c_2    (1−γ)    1   1/2 |
    det | c_2^2  (1−γ)^2  1   1/3 | = 0,
        | c_2^3  (1−γ)^3  1   1/4 |
        | c_2^4  (1−γ)^4  1   1/5 |

while, for s = 5, the nodes c_2 and c_3 are determined from the conditions

        | c_2    c_3    (1−γ)    1   1/2 |             | c_2    c_3    (1−γ)    1   1/2 |
        | c_2^2  c_3^2  (1−γ)^2  1   1/3 |             | c_2^2  c_3^2  (1−γ)^2  1   1/3 |
    det | c_2^3  c_3^3  (1−γ)^3  1   1/4 | = 0,    det | c_2^3  c_3^3  (1−γ)^3  1   1/4 | = 0.
        | c_2^4  c_3^4  (1−γ)^4  1   1/5 |             | c_2^4  c_3^4  (1−γ)^4  1   1/5 |
        | c_2^5  c_3^5  (1−γ)^5  1   1/6 |             | c_2^6  c_3^6  (1−γ)^6  1   1/7 |
This results in methods for s = 3, 4, and 5 with the following nodes:

    s = 3:    c_1 = 0,    c_2 = 1 − γ,    c_3 = 1;                                        (3.1)

    s = 4:    c_1 = 0,    c_2 = (2 − 5γ)/(5 − 10γ),    c_3 = 1 − γ,    c_4 = 1;           (3.2)

    s = 5:    c_1 = 0,    c_{2,3} = (35γ^2 − 33γ + 6 ∓ √(245γ^4 − 490γ^3 + 333γ^2 − 88γ + 8)) / (70γ^2 − 70γ + 14),
              c_4 = 1 − γ,    c_5 = 1.                                                    (3.3)

Actually, these methods are identical to the SAFERK methods treated in [15]; only the recommended values of the free parameter are different. Their order is p = 2s − 3, which follows from [1] (see Theorem IV.5.1 and Lemma IV.5.4). For certain values of γ, these methods are A-stable.
The stability function of method (3.1) is

    R(z) = (6 + 2(1 + γ)z + γz^2) / (6 − (4 − 2γ)z + (1 − γ)z^2).

In the limit as γ → 0, this function is identical to the (1,2)-Padé approximation to the exponential function. (The same is true of the two-stage RADAU IIA method.) The coefficients e_{zi} (see (2.6) and (2.7)) are given by the formula

    e_{zi} = 1 − (1 − 3γ^2 + 2γ^3 − (1 − γ)^i) / (iγ(1 − γ)^2).

For small γ, they can be written as e_{zi} = ((i^2 − 5i + 6)/(2i))γ + O(γ^2); thus, all the coefficients e_{zi} tend to zero as γ → 0.
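The closed-form expressions above can be checked, using the reduced-tableau formulas of Section 2, by computing the coefficients of method (3.1) numerically; a sketch for a sample value of γ follows (helper names are ours).

```python
# Sketch: numerical check of R(inf) and e_z4 of method (3.1) via the reduced
# tableau (A_bar, b_bar, c_bar) obtained by dropping the explicit first stage.
import numpy as np

def collocation_coefficients(c):
    # same construction as in the earlier sketch: solve conditions (1.5) with q = s
    c = np.asarray(c, dtype=float); s = len(c)
    V = np.vander(c, s, increasing=True)
    W = np.column_stack([c ** k / k for k in range(1, s + 1)])
    A = np.linalg.solve(V.T, W.T).T
    b = np.linalg.solve(V.T, 1.0 / np.arange(1, s + 1))
    return A, b

gamma = 0.2
c = np.array([0.0, 1.0 - gamma, 1.0])
A, b = collocation_coefficients(c)
A_bar, b_bar, c_bar = A[1:, 1:], b[1:], c[1:]

Ainv = np.linalg.inv(A_bar)
a0 = 1.0 - b_bar @ Ainv @ Ainv @ c_bar            # a_0 = 1 - b_bar^T A_bar^{-2} c_bar
i = 4
e_z4 = 1.0 - (b_bar @ Ainv @ Ainv @ c_bar**i) / i
e_z4_formula = 1 - (1 - 3*gamma**2 + 2*gamma**3 - (1 - gamma)**i) / (i*gamma*(1 - gamma)**2)

print(a0)                      # R(inf) of method (3.1); 0.25 for gamma = 0.2
print(e_z4, e_z4_formula)      # both equal 0.05 for gamma = 0.2
```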
The methods with nodes (3.2) and (3.3) have similar properties. As γ decreases, their stability function approaches the (s − 2, s − 1)-Padé approximation, while the coefficients e_{zi} tend to zero. Thus, by minimizing the error in the solution to system (2.1a), (2.1b), we obtain nearly L-stable methods for which γ is small.
In the limit as γ → 0, the above methods give rise to methods that use the second derivative calculated at the right endpoint of the interval [t_n, t_{n+1}]. For instance, the use of nodes (3.1) results in the method with the computational scheme (as applied to DAE (1.2))

    y_{n+1} = y_n + h((1/3) y'_n + (2/3) y'_{n+1}) − (h^2/6) y''_{n+1},
    z_{n+1} = z_n + h((1/3) z'_n + (2/3) z'_{n+1}) − (h^2/6) z''_{n+1},                   (3.4)
    0 = g(y_{n+1}),    0 = g_y(y_{n+1}) y'_{n+1}.

For this method, not only is the equality 0 = g(y) satisfied exactly, but so is the relation obtained by differentiating the algebraic equation. In particular, this implies that no error occurs when Eqs. (2.1a) and (2.1b) are solved. Thus, as applied to DAEs of index 2, the method amplifies the property of stiff accuracy.

4. METHODS FOR EQUATIONS OF INDEX 3


The methods discussed above use nodes for which good estimates of the first and second derivatives of the solution at the point t = t_{n+1} are possible. This makes it possible to improve the accuracy of the z-component. By analogy, we choose nodes (1.7) in order to solve DAEs of index 3; this also makes it possible to obtain a good estimate of the third derivative. In this case, the maximal possible order of a method is 2s − 4. For s = 4, 5, the resulting methods have the nodes

    s = 4:    c_1 = 0,    c_2 and c_3 given by (1.7),    c_4 = 1;                         (4.1)

    s = 5:    c_1 = 0,    c_3 and c_4 given by (1.7),    c_5 = 1,                         (4.2)

where, for s = 5, the node c_2 is chosen so as to ensure order 2s − 4 = 6. For these methods, γ → 0 implies

    a_0 = R(∞) → 0,    a_1 = lim_{z→∞} z (R(z) − a_0) → 0,

and the stability function R(z) approaches the (s − 3, s − 1)-Padé approximation to the exponential function. Furthermore, all the coefficients e_{zi} and e_{ui} (see (2.6)) tend to zero as γ → 0.
In the limit as γ → 0, we obtain methods that use the second and third derivatives calculated at the endpoint of the integration step. For instance, the use of nodes (4.1) results in the method

    y_{n+1} = y_n + h((1/4) y'_n + (3/4) y'_{n+1}) − (h^2/4) y''_{n+1} + (h^3/24) y'''_{n+1}.    (4.3)

For these methods, the relations obtained by differentiating the algebraic equation in system (1.4) once and twice are satisfied exactly. In particular, all the equations in system (2.1) are solved exactly. Note that methods (3.4) and (4.3) belong to the class of Obrechkoff methods (see [17]).

5. NUMERICAL RESULTS
Denote the methods with nodes (3.1)–(3.3) by CRKDAE II (collocation Runge-Kutta methods for DAEs of index II) and the methods with nodes (4.1) and (4.2) by CRKDAE III. We applied these methods to several test problems of indices 2 and 3 for various values of γ. For comparison, we also used the RADAU IIA methods, which, for a given number of stages, have the maximal order among stiffly accurate L-stable methods, and the Lobatto IIIA methods, which have the maximal order among stiffly accurate A-stable methods.

Table 1

      |               CRKDAE II                  |               CRKDAE III
  s   | γ1/max|a_ij|  γ2/max|a_ij|  γ3/max|a_ij| | γ1/max|a_ij|  γ2/max|a_ij|  γ3/max|a_ij|
  3   | 0.2/1.04      0.02/8.50     0.002/83.5   |
  4   | 0.03/1.06     0.003/9.38    0.0003/92.7  | 0.3/1.06      0.1/7.90       0.03/92.1
  5   | 0.01/0.91     0.001/8.40    0.0001/83.2  | 0.1/0.86      0.03/9.09      0.01/83.2

Table 2

                   |         r = 2           |         r = 3           |         r = 4
  Method           |   e_y         e_z       |   e_y         e_z       |   e_y         e_z
  RADAU IIA        | 6.12×10⁻⁴   8.27×10⁻³   | 3.78×10⁻⁶   1.32×10⁻³   | 3.72×10⁻⁸   1.84×10⁻⁴
  Lobatto IIIA     | 7.44×10⁻⁵   6.08×10⁻²   | 1.57×10⁻⁷   1.36×10⁻³   | 1.60×10⁻⁹   5.95×10⁻⁴
  CRKDAE II (γ1)   | 3.73×10⁻⁴   5.56×10⁻⁴   | 3.57×10⁻⁶   9.79×10⁻⁶   | 3.42×10⁻⁸   5.72×10⁻⁷
  CRKDAE II (γ2)   | 5.83×10⁻⁴   3.16×10⁻⁴   | 3.92×10⁻⁶   2.81×10⁻⁶   | 3.53×10⁻⁸   7.07×10⁻⁸
  CRKDAE II (γ3)   | 6.04×10⁻⁴   3.04×10⁻⁴   | 3.96×10⁻⁶   2.06×10⁻⁶   | 3.52×10⁻⁸   2.30×10⁻⁸
  CRKDAE III (γ1)  |                         | 8.39×10⁻⁵   8.91×10⁻⁵   | 8.60×10⁻⁷   1.56×10⁻⁶
  CRKDAE III (γ2)  |                         | 1.17×10⁻⁴   6.31×10⁻⁵   | 9.40×10⁻⁷   5.75×10⁻⁷
  CRKDAE III (γ3)  |                         | 1.21×10⁻⁴   6.09×10⁻⁵   | 9.48×10⁻⁷   4.85×10⁻⁷

Let us discuss the results obtained for two problems. To examine the dependence of the error on γ, three different values γ1, γ2, and γ3 of this parameter were taken for each method. As γ decreases, the coefficients in the last two columns of A for the CRKDAE II methods and those in the last three columns for the CRKDAE III methods increase. The reason is that, in effect, the last stages perform numerical differentiation of the right-hand side. For sufficiently small γ, these coefficients grow at the rate γ^{−1} for the CRKDAE II methods and at the rate γ^{−2} for the CRKDAE III methods. The growth of the coefficients increases the roundoff errors; consequently, the value of γ should be bounded from below. The value γ1 was chosen so that the maximal modulus of the coefficients in A was approximately equal to one. Each subsequent value (that is, γ2 and γ3) increases the maximal coefficient by roughly an order of magnitude. The values of γ and the corresponding values of the maximal coefficient are presented in Table 1.
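As a sanity check of the s = 3 column of Table 1, the following sketch recomputes max|a_{ij}| of method (3.1) for the three listed values of γ (the helper reuses the collocation construction of Section 3).

```python
# Sketch: max|a_ij| of the s = 3 CRKDAE II method (3.1) for gamma = 0.2, 0.02, 0.002;
# the printed values match the 1.04, 8.50, 83.5 reported in Table 1.
import numpy as np

def collocation_coefficients(c):
    c = np.asarray(c, dtype=float); s = len(c)
    V = np.vander(c, s, increasing=True)
    W = np.column_stack([c ** k / k for k in range(1, s + 1)])
    return np.linalg.solve(V.T, W.T).T

for gamma in (0.2, 0.02, 0.002):
    A = collocation_coefficients([0.0, 1.0 - gamma, 1.0])
    print(gamma, np.abs(A).max())        # grows like 1/gamma, as noted above
```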
Our first problem is of index 2. It is given by the equations

    y_1' = −(y_1 y_2 z)^{1/4},    y_2' = −y_1(y_1 + y_2/z),    0 = y_1^2 − y_2,
    y_1(0) = y_2(0) = z(0) = 1,    0 ≤ t ≤ 2.

The solution is y_1(t) = z(t) = e^{−t}, y_2(t) = e^{−2t}. Since the computational costs are proportional to the number r of implicit stages, we compare methods with the same r (as was also done in [15]); that is, we set r = s for the RADAU IIA methods and r = s − 1 for the Lobatto IIIA and CRKDAE methods. The step size is chosen so that, in total, 24 implicit stages are performed; that is, h = r/12. The maximal relative errors e_y and e_z, calculated for the corresponding components over the entire interval, are shown in Table 2. For the RADAU IIA and Lobatto IIIA methods, the error in the z-component is much larger than that in the y-component; for the CRKDAE methods, these errors differ only slightly. Suppose that the maximal error over all the components is taken as an accuracy criterion. Then the CRKDAE methods significantly outperform the RADAU IIA and Lobatto IIIA methods. The best results are obtained using the CRKDAE II methods.

Table 3

                   |                  r = 3                 |                  r = 4
  Method           |   e_y         e_z         e_u          |   e_y         e_z         e_u
  RADAU IIA        | 2.87×10⁻⁵   8.91×10⁻⁴   1.50×10⁻²   | 1.00×10⁻⁶   1.26×10⁻⁴   4.16×10⁻³
  Lobatto IIIA     | 7.95×10⁻⁴   7.53×10⁻³   5.11×10⁻¹   | 4.19×10⁻⁵   6.68×10⁻⁴   1.22×10⁻¹
  CRKDAE II (γ1)   | 1.50×10⁻⁶   8.16×10⁻⁶   8.29×10⁻⁴   | 3.01×10⁻⁸   4.77×10⁻⁷   1.24×10⁻⁴
  CRKDAE II (γ2)   | 1.53×10⁻⁶   5.77×10⁻⁶   7.72×10⁻⁴   | 2.66×10⁻⁸   1.20×10⁻⁷   1.09×10⁻⁴
  CRKDAE II (γ3)   | 1.53×10⁻⁶   5.51×10⁻⁶   7.68×10⁻⁴   | 2.63×10⁻⁸   8.57×10⁻⁸   1.08×10⁻⁴
  CRKDAE III (γ1)  | 1.39×10⁻⁵   3.47×10⁻⁵   1.79×10⁻⁴   | 2.23×10⁻⁷   8.33×10⁻⁷   4.93×10⁻⁶
  CRKDAE III (γ2)  | 5.64×10⁻⁶   1.75×10⁻⁵   1.82×10⁻⁵   | 2.23×10⁻⁷   2.27×10⁻⁷   5.72×10⁻⁷
  CRKDAE III (γ3)  | 7.78×10⁻⁶   1.89×10⁻⁵   1.24×10⁻⁵   | 2.23×10⁻⁷   1.73×10⁻⁷   1.84×10⁻⁷

Table 4

                      |   r = 2    |   r = 3    |   r = 4
  Method              | p_y   p_z  | p_y   p_z  | p_y   p_z
  RADAU IIA           |  3     2   |  5     3   |  7     4
  Lobatto IIIA        |  4     2   |  6     4   |  8     4
  CRKDAE II           |  3     3   |  5     4   |  7     5
  CRKDAE II (γ → 0)   |  3     3   |  5     5   |  7     7
  CRKDAE III          |            |  4     4   |  6     5
  CRKDAE III (γ → 0)  |            |  4     4   |  6     6

Table 5

                      |      r = 3       |      r = 4
  Method              | p_y   p_z   p_u  | p_y   p_z   p_u
  RADAU IIA           |  4     3     2   |  6     4     3
  Lobatto IIIA        |  4     4     2   |  4     4     2
  CRKDAE II           |  5     4     3   |  7     5     4
  CRKDAE II (γ → 0)   |  5     5     3   |  7     7     4
  CRKDAE III          |  4     4     3   |  6     5     4
  CRKDAE III (γ → 0)  |  4     4     4   |  6     6     6

Our second problem is of index 3. It is given by the equations

    y_1' = −(y_1 y_2 z_1 z_2)^{1/6},    y_2' = y_1(y_2 − 3z_2)/z_1,
    z_1' = −z_1 z_2 u/(y_1 y_2),         z_2' = −(y_1 y_2 + z_1 z_2)/u,
    0 = y_1^2 − y_2,    y_1(0) = y_2(0) = z_1(0) = z_2(0) = u(0) = 1,    0 ≤ t ≤ 2.

The solution is y_1(t) = z_1(t) = u(t) = e^{−t}, y_2(t) = z_2(t) = e^{−2t}. The maximal relative errors for h = r/12 are presented in Table 3. This time, the CRKDAE III methods are the most advantageous; they yield the most accurate u-component.
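A minimal consistency check, assuming the right-hand sides exactly as displayed above: the stated solution should make all residuals vanish to rounding error.

```python
# Sketch: residuals of the index-3 test system (as written above) evaluated on the
# exact solution y1 = z1 = u = exp(-t), y2 = z2 = exp(-2t); all should be ~1e-16.
import numpy as np

def residuals(t):
    y1, y2 = np.exp(-t), np.exp(-2*t)
    z1, z2, u = np.exp(-t), np.exp(-2*t), np.exp(-t)
    dy1, dy2 = -np.exp(-t), -2*np.exp(-2*t)
    dz1, dz2 = -np.exp(-t), -2*np.exp(-2*t)
    return np.array([
        dy1 + (y1*y2*z1*z2) ** (1/6),
        dy2 - y1*(y2 - 3*z2)/z1,
        dz1 + z1*z2*u/(y1*y2),
        dz2 + (y1*y2 + z1*z2)/u,
        y1**2 - y2,
    ])

t = np.linspace(0.0, 2.0, 5)
print(np.max(np.abs(residuals(t))))
```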
It is seen from Tables 2 and 3 that, when the CRKDAE methods are used with sufficiently small γ, the difference between the errors e_y and e_z is insignificant. Variations in the step size do not noticeably affect the relation between these errors. This observation permits us to conjecture that the orders of convergence of the y- and z-components are identical if γ is sufficiently small. A similar conjecture about the convergence of all the components can be made when DAEs of index 3 are solved by the CRKDAE III methods.

In order to verify these conjectures, we performed a number of tests aimed at determining the realistic orders of the individual components. The orders were estimated using the formula

    p = log(e_1/e_2) / log(h_1/h_2),

where e_1 and e_2 are the errors corresponding to calculations with the constant step sizes h_1 and h_2, respectively. For the CRKDAE methods, the orders were estimated for fairly large γ and also for γ → 0. (In the latter case, the value of γ was reduced until the order estimates virtually ceased to change.) The results obtained for DAEs of index 2 are presented in Table 4, and those for DAEs of index 3 in Table 5. The estimates for the RADAU IIA methods are completely identical to the theoretical estimates derived in [1, Table VII.4.1] for DAEs of index 2 and in [5, Theorem 6.1] for DAEs of index 3. The estimates for the Lobatto IIIA and CRKDAE methods as applied to DAEs of index 2 are identical to the estimates obtained in [4, Theorem 5.2]. The author is not aware of theoretical results concerning the other entries in Tables 4 and 5 (including the case γ → 0, that is, the case of multiple nodes).
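A direct transcription of this estimation formula; here err is a placeholder for a routine that returns the maximal error of the chosen component for a given constant step size.

```python
# Sketch: observed order of convergence from two constant-step runs.
import math

def observed_order(err, h1, h2):
    e1, e2 = err(h1), err(h2)
    return math.log(e1 / e2) / math.log(h1 / h2)
```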

6. CONCLUDING REMARKS
Our numerical tests showed that the CRKDAE methods are promising for solving DAEs of indices 2 and 3. These methods are able to significantly reduce the order reduction phenomenon, which explains why they turned out to be more accurate than the rival methods. Their drawback is the necessity of choosing a small value of γ, which can lead to large roundoff errors. However, we verified that these methods preserve a tangible superiority over the other methods even for relatively large γ, for which the roundoff errors are practically negligible.

The results obtained in this paper make it possible to reconsider the views on methods with multiple nodes (and higher derivatives), which can be fairly efficient for DAEs of higher indices. These are Obrechkoff methods with two multiple nodes at the endpoints of the interval [t_n, t_{n+1}] (see [17]) and more general methods whose nodes can be positioned at intermediate points of this interval (see [18–20]).

REFERENCES

1. E. Hairer and G. Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems (Springer-Verlag, Berlin, 1996; Mir, Moscow, 1999).
2. D. Yu. Pogorelov, "Numerical Modeling of the Motion of Systems of Solids," Comput. Math. Math. Phys. 35, 501–506 (1995).
3. E. Hairer, Ch. Lubich, and M. Roche, The Numerical Solution of Differential-Algebraic Systems by Runge-Kutta Methods, Lecture Notes in Mathematics (Springer-Verlag, Berlin, 1989), Vol. 1409.
4. L. Jay, "Convergence of a Class of Runge-Kutta Methods for Differential-Algebraic Systems of Index 2," BIT 33 (1), 137–150 (1993).
5. L. Jay, "Convergence of Runge-Kutta Methods for Differential-Algebraic Systems of Index 3," Appl. Numer. Math. 17 (2), 97–118 (1995).
6. R. Alexander, "Diagonally Implicit Runge-Kutta Methods for Stiff ODEs," SIAM J. Numer. Anal. 14, 1006–1021 (1977).
7. F. Cameron, M. Palmroth, and R. Piché, "Quasi Stage Order Conditions for SDIRK Methods," Appl. Numer. Math. 42 (1–3), 61–75 (2002).
8. R. Alexander, "Design and Implementation of DIRK Integrators for Stiff Systems," Appl. Numer. Math. 46 (1), 1–17 (2003).
9. A. Kværnø, "Singly Diagonally Implicit Runge-Kutta Methods with an Explicit First Stage," BIT 44, 489–502 (2004).
10. L. M. Skvortsov, "Diagonally Implicit Runge-Kutta FSAL Methods for Stiff and Differential-Algebraic Systems," Mat. Model. 14 (2), 3–17 (2002).
11. L. M. Skvortsov, "Accuracy of Runge-Kutta Methods Applied to Stiff Problems," Comput. Math. Math. Phys. 43, 1320–1330 (2003).
12. L. M. Skvortsov, "Diagonally Implicit Runge-Kutta Methods for Stiff Problems," Comput. Math. Math. Phys. 46, 2110–2123 (2006).
13. L. M. Skvortsov, "Model Equations for Accuracy Investigation of Runge-Kutta Methods," Mat. Model. 22 (5), 146–160 (2010).
14. L. M. Skvortsov, "Diagonally Implicit Runge-Kutta Methods for Differential-Algebraic Equations of Indices Two and Three," Comput. Math. Math. Phys. 50, 993–1005 (2010).
15. S. González-Pinto, D. Hernández-Abreu, and J. I. Montijano, "An Efficient Family of Strongly A-Stable Runge-Kutta Collocation Methods for Stiff Systems and DAEs. Part I: Stability and Order Results," J. Comput. Appl. Math. 234, 1105–1116 (2010).
16. S. González-Pinto and D. Hernández-Abreu, "Global Error Estimates for a Uniparametric Family of Stiffly Accurate Runge-Kutta Collocation Methods on Singularly Perturbed Problems," BIT 51 (1), 155–175 (2011).
17. E. Hairer, S. P. Nørsett, and G. Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems (Springer-Verlag, Berlin, 1987; Mir, Moscow, 1990).
18. S. M. Aulchenko, A. F. Latypov, and Yu. V. Nikulichev, "A Method of Numerical Integration for a System of Ordinary Differential Equations with the Use of Hermitian Interpolating Polynomials," Comput. Math. Math. Phys. 38 (10), 1595–1601 (1998).
19. G. Yu. Kulikov and A. I. Merkulov, "On One-Step Collocation Methods with Higher Derivatives for Solving Ordinary Differential Equations," Comput. Math. Math. Phys. 44, 1696–1720 (2004).
20. G. Yu. Kulikov and E. Yu. Khrustaleva, "Automatic Step Size and Order Control in One-Step Collocation Methods with Higher Derivatives," Comput. Math. Math. Phys. 50, 1006–1023 (2010).
21. A. Prothero and A. Robinson, "On the Stability and Accuracy of One-Step Methods for Solving Stiff Systems of Ordinary Differential Equations," Math. Comput. 28 (1), 145–162 (1974).
22. L. M. Skvortsov, "Explicit Stabilized Runge-Kutta Methods," Comput. Math. Math. Phys. 51, 1153–1166 (2011).

