
Implicit Integration of the Time-Dependent Ginzburg–Landau Equations of Superconductivity

D. O. Gunter, H. G. Kaper, and G. K. Leaf*

*Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (authorname@mcs.anl.gov). This work was supported by the Mathematical, Information, and Computational Sciences Division subprogram of Advanced Scientific Computing Research, U.S. Department of Energy, under Contract W-31-109-Eng-38.

arXiv:math/9906176v1 [math.NA] 25 Jun 1999

Abstract. This article is concerned with the integration of the time-dependent Ginzburg–Landau (TDGL) equations of superconductivity. Four algorithms, ranging from fully explicit to fully implicit, are presented and evaluated for stability, accuracy, and compute time. The benchmark problem for the evaluation is the equilibration of a vortex configuration in a superconductor that is embedded in a thin insulator and subject to an applied magnetic field.

Key words. time-dependent Ginzburg–Landau equations, superconductivity, vortex solution, implicit time integration

AMS subject classifications.

1. Introduction. At the macroscopic level, the state of a superconductor can be described in terms of a complex-valued order parameter and a real vector potential. These variables, which determine the superconducting and electromagnetic properties of the system at equilibrium, are found as solutions of the Ginzburg–Landau (GL) equations of superconductivity. They correspond to critical points of the GL energy functional [1, 2], so in principle they can be determined by minimizing a functional. In practice, one introduces a time-like variable and computes equilibrium states by integrating the time-dependent Ginzburg–Landau (TDGL) equations. The TDGL equations, first formulated by Schmid [3] and subsequently derived from microscopic principles by Gor'kov and Eliashberg [4], are nontrivial generalizations of the (time-independent) GL equations, because the time rate of change must be introduced in such a manner that gauge invariance is preserved at all times.

We are interested, in particular, in vortex solutions of the GL equations. These are singular solutions, where the phase of the order parameter changes by 2π along any closed contour surrounding a vortex point. Vortices are of critical importance in technological applications of superconductivity. Computing vortex solutions of the GL equations by integrating the TDGL equations to equilibrium has the advantage that the solutions thus found are stable. At the same time, one obtains information about the transient behavior of the system.

Integrating the TDGL equations to equilibrium is, however, a time-consuming process requiring considerable computing resources. In simulations of vortex dynamics in superconductors, which were performed on an IBM SP with tens of processors in parallel, using a simple one-step Euler integration procedure, we routinely experienced equilibration times on the order of one hundred hours [5, 6, 7]. Incremental changes would gradually drive the system to lower energy levels. These very long equilibration times arise, of course, because we are dealing with large physical systems undergoing a phase transition. The energy landscape for such systems is a broad, gently undulating plain with many shallow local minima. It is therefore important to develop efficient integration techniques that remain stable and accurate as the time step increases.

In this article we present four integration techniques, ranging from fully explicit to fully implicit, for problems on rectangular domains in two dimensions.

These two-dimensional domains should be viewed as cross sections of three-dimensional systems that are infinite and homogeneous in the third direction (orthogonal to the plane of the cross section), which is the direction of the field. The algorithms are scalable in a multiprocessing environment and generalize to three dimensions.

We evaluate the performance of each algorithm on the same benchmark problem, namely, the equilibration of a vortex configuration in a system consisting of a superconducting core embedded in a blanket of insulating material (air) and undergoing a transition from the Meissner state to the vortex state under the influence of an externally applied magnetic field. We determine the maximum allowable time step for stability, the number of time steps needed to reach the equilibrium configuration, and the CPU cost per time step. Different algorithms correspond to different dynamics through state space, so the eventual equilibrium vortex configuration may differ from one algorithm to another. Hence, once we have the equilibrium configurations, we need some measure to assess their accuracy. For this purpose we use three parameters: the number of vortices, the mean intervortex distance (bond length), and the mean bond angle taken over nearest-neighbor pairs of bonds. When each of these parameters differs less than a specified tolerance, we say that the corresponding vortex configurations are the same.

Our investigations show that one can increase the time step by almost two orders of magnitude, without losing stability, by going from the fully explicit to the fully implicit algorithm. The fully implicit algorithm has a higher cost per time step, but the wall clock time needed to compute the equilibrium solution (the most important measure for practical purposes) is still significantly less. All algorithms yield the same equilibrium vortex configuration.

In Section 2, we present the Ginzburg–Landau model of superconductivity, first in its formulation as a system of partial differential equations, then as a system of ordinary differential equations after the spatial variations have been approximated by finite differences. In Section 3, we give four algorithms to integrate the system of ordinary differential equations: a fully explicit, a semi-implicit, an implicit, and a fully implicit algorithm. In Section 4, we present and evaluate the results of the investigation. The conclusions are summarized in Section 5.

2. Ginzburg–Landau Model. The time-dependent Ginzburg–Landau (TDGL) equations of superconductivity [2, 3, 4] are two coupled partial differential equations for the complex-valued order parameter ψ = |ψ|e^{iφ} and the real vector-valued vector potential A,

(2.1)  (ħ²/(2m_s D)) (∂/∂t + (ie_s/ħ)Φ) ψ = −(1/(2m_s)) ((ħ/i)∇ − (e_s/c)A)² ψ + aψ − b|ψ|²ψ,

(2.2)  ν ((1/c) ∂A/∂t + ∇Φ) = −(c/(4π)) ∇×∇×A + J_s.

Here, J_s is the supercurrent density, which is a nonlinear function of ψ and A,

(2.3)  J_s = (e_s ħ/(2i m_s)) (ψ*∇ψ − ψ∇ψ*) − (e_s²/(m_s c)) |ψ|² A = (e_s/m_s) |ψ|² (ħ∇φ − (e_s/c)A).

The real scalar-valued electric potential Φ is a diagnostic variable. The constants in the equations are ħ, Planck's constant divided by 2π; a and b, two positive constants; c, the speed of light; m_s and e_s, the effective mass and charge, respectively, of the superconducting charge carriers (Cooper pairs); ν, the electrical conductivity; and D, the diffusion coefficient. As usual, i is the imaginary unit, and * denotes complex conjugation. The quantity |ψ|² represents the local density of Cooper pairs. The local time rate of change ∂_t A of A determines the electric field, E = −((1/c)∂_t A + ∇Φ); its spatial variation determines the (induced) magnetic field, B = ∇×A.

The TDGL equations describe the gradient flow for the Ginzburg–Landau energy, which is the sum of the kinetic energy, the condensation energy, and the field energy,

(2.4)  E = ∫ [ (1/(2m_s)) |((ħ/i)∇ − (e_s/c)A)ψ|² + (−a|ψ|² + (b/2)|ψ|⁴) + (1/(8π)) |∇×A|² ] dx.

A thermodynamic equilibrium configuration corresponds to a minimum of E. The energy functional (2.4) assumes that there are no defects in the superconductor. Material defects can be naturally present or artificially induced and can be in the form of point, planar, or columnar defects (quenched disorder). A material defect results in a local reduction of the depth of the well of the condensation energy. A simple way to include material defects in the Ginzburg–Landau model is to assume that the parameter a depends on position and has a smaller value wherever a defect is present.
2.1. Dimensionless Form. Let ψ∞² = a/b, and let λ, ξ, and H_c denote the London penetration depth, the coherence length, and the thermodynamic critical field, respectively,

(2.5)  λ = (m_s c²/(4π ψ∞² e_s²))^{1/2},  ξ = (ħ²/(2m_s a))^{1/2},  H_c = (4π a ψ∞²)^{1/2}.

In this study, we render the TDGL equations dimensionless by measuring lengths in units of ξ, time in units of the relaxation time ξ²/D, fields in units of H_c√2, and energy densities in units of (1/4π)H_c². The nondimensional TDGL equations are

(2.6)  (∂/∂t + iΦ) ψ = (∇ − (i/κ)A)² ψ + τψ − |ψ|²ψ,

(2.7)  σ (∂A/∂t + κ∇Φ) = −∇×∇×A + J_s,

where

(2.8)  J_s = (1/(2iκ)) (ψ*∇ψ − ψ∇ψ*) − (1/κ²) |ψ|² A = (1/κ) |ψ|² (∇φ − (1/κ)A).

Here, κ = λ/ξ is the Ginzburg–Landau parameter and σ is a dimensionless resistivity, σ = (4πD/c²)ν. The coefficient τ has been inserted to account for defects; τ(x) < 1 if x is in a defective region; otherwise τ(x) = 1. The nondimensional TDGL equations are associated with the dimensionless energy functional

(2.9)  E = ∫ [ |(∇ − (i/κ)A)ψ|² − τ|ψ|² + (1/2)|ψ|⁴ + |∇×A|² ] dx.


2.2. Gauge Choice. The (nondimensional) TDGL equations are invariant under a gauge transformation,

(2.10)  G_χ : (ψ, A, Φ) → (ψ e^{iχ}, A + κ∇χ, Φ − ∂_t χ).

Here, χ can be any real scalar-valued function of position and time. We maintain the zero-electric-potential gauge, Φ = 0, at all times, using the link variable

(2.11)  U = exp(−(i/κ) ∫ A).

This definition is componentwise: U_x(x, y, z) = exp(−iκ^{−1} ∫^x A_x(x′, y, z) dx′), and similarly for U_y and U_z. The gauged TDGL equations can now be written in the form

(2.12)  ∂ψ/∂t = Σ_{ξ=x,y,z} Ū_ξ ∂²(U_ξ ψ)/∂ξ² + τψ − |ψ|²ψ,

(2.13)  σ ∂A/∂t = −∇×∇×A + J_s,

where

(2.14)  J_{s,ξ} = (1/κ) Im[ (U_ξ ψ)* ∂(U_ξ ψ)/∂ξ ],  ξ = x, y, z.
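For a concrete picture of the link variables in (2.11), the following minimal Python sketch (not from the paper; the grid sizes, trial field, and the trapezoidal quadrature are illustrative assumptions) computes U_x on a rectangular grid by accumulating the line integral of A_x along each grid line and checks that the result is unimodular.

```python
import numpy as np

# Hypothetical grid and parameters (for illustration only).
kappa = 4.0
nx, ny = 64, 48
hx, hy = 0.5, 0.5
x = hx * np.arange(nx)
y = hy * np.arange(ny)

# Some smooth vector-potential component A_x(x, y) sampled on the grid.
Ax = 0.1 * np.outer(np.sin(2 * np.pi * x / (nx * hx)),
                    np.cos(2 * np.pi * y / (ny * hy)))

# Link variable U_x(x, y) = exp(-(i/kappa) * integral_0^x A_x(x', y) dx'),
# with the line integral approximated by a cumulative trapezoidal rule.
integral = np.zeros_like(Ax)
integral[1:, :] = np.cumsum(0.5 * (Ax[1:, :] + Ax[:-1, :]) * hx, axis=0)
Ux = np.exp(-1j * integral / kappa)

# U_x is unimodular by construction: |U_x| = 1 everywhere.
assert np.allclose(np.abs(Ux), 1.0)
```

In the gauged equation (2.12) the differential operators act on the products U_ξψ, so all explicit references to A in the ψ equation are absorbed into these phase factors.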

2.3. Two-Dimensional Problems. From here on we restrict the discussion to problems on a two-dimensional rectangular domain (coordinates x and y), assuming boundedness in the x direction and periodicity in the y direction. The domain represents a superconducting core surrounded by a blanket of insulating material (air) or a normal metal. The order parameter vanishes outside the superconductor, and no superconducting charge carriers leave the superconductor. The whole system is driven by a time-independent externally applied magnetic field H that is parallel to the z axis, H = (0, 0, H). The vector potential and the supercurrent have two nonzero components, A = (A_x, A_y, 0) and J_s = (J_x, J_y, 0), while the magnetic field has only one nonzero component, B = (0, 0, B), where B = ∂_x A_y − ∂_y A_x.

2.4. Spatial Discretization. The physical configuration to be modeled (superconductor embedded in blanket material) is periodic in y and bounded in x. In the x direction, we distinguish three subdomains: an interior subdomain occupied by the superconducting material and two subdomains, one on either side, occupied by the blanket material. We take the two blanket layers to be equally thick, but do not assume that the problem is symmetric around the midplane. (Possible sources of asymmetry are material defects in the system, surface currents, and different field strengths on the two outer surfaces.) We impose a regular grid with mesh widths h_x and h_y,

(2.15)  Ω_{i,j} = (x_i, x_{i+1}) × (y_j, y_{j+1}),  x_i = x_0 + i h_x,  y_j = y_0 + j h_y,

assuming the following correspondences:

Left outer surface:   x = x_0 + (1/2)h_x,        i = 0,
Left interface:       x = x_{nsx−1} + (1/2)h_x,  i = nsx − 1,
Right interface:      x = x_{nex} + (1/2)h_x,    i = nex,
Right outer surface:  x = x_{nx} + (1/2)h_x,     i = nx.


One period in the y direction is covered by the points j = 1, . . . , ny. We use the symbols Sc and Bl to denote the index sets for the superconducting and blanket regions, respectively,

(2.16)  Sc = {(i, j) : (i, j) ∈ [nsx, nex] × [1, ny]},

(2.17)  Bl = {(i, j) : (i, j) ∈ ([1, nsx − 1] ∪ [nex + 1, nx]) × [1, ny]}.

The order parameter ψ is evaluated at the grid vertices,

(2.18)  ψ_{i,j} = ψ(x_i, y_j),  (i, j) ∈ Sc,

the components A_x and A_y of the vector potential at the midpoints of the respective edges,

(2.19)  A_{x;i,j} = A_x(x_i + (1/2)h_x, y_j),  A_{y;i,j} = A_y(x_i, y_j + (1/2)h_y),  (i, j) ∈ Sc ∪ Bl,

and the induced magnetic field B at the center of a grid cell,

(2.20)  B_{i,j} = B(x_i + (1/2)h_x, y_j + (1/2)h_y) = (A_{y;i+1,j} − A_{y;i,j})/h_x − (A_{x;i,j+1} − A_{x;i,j})/h_y,  (i, j) ∈ Sc ∪ Bl;

see Fig. 2.1. The values of the link variables and the supercurrent are computed from

Fig. 2.1. Computational cell with evaluation points for ψ, A_x, and A_y.

the expressions

(2.21)  U_{x;i,j} = e^{−iκ^{−1} h_x A_{x;i,j}},  U_{y;i,j} = e^{−iκ^{−1} h_y A_{y;i,j}},

(2.22)  J_{x;i,j} = (1/(κ h_x)) Im[ψ*_{i,j} U_{x;i,j} ψ_{i+1,j}],  J_{y;i,j} = (1/(κ h_y)) Im[ψ*_{i,j} U_{y;i,j} ψ_{i,j+1}].

The discretized TDGL equations are

(2.23)  dψ_{i,j}/dt = (Lxx(U_{x;·,j}) ψ_{·,j})_i + (Lyy(U_{y;i,·}) ψ_{i,·})_j + N(ψ_{i,j}),  (i, j) ∈ Sc,

(2.24)  σ dA_{x;i,j}/dt = (Dyy A_{x;i,·})_j − (Dyx A_{y;·,·})_{i,j} + J_{x;i,j},  (i, j) ∈ Sc ∪ Bl,

(2.25)  σ dA_{y;i,j}/dt = (Dxx A_{y;·,j})_i − (Dxy A_{x;·,·})_{i,j} + J_{y;i,j},  (i, j) ∈ Sc ∪ Bl,

where

(2.26)  (Lxx(U_{x;·,j}) ψ_{·,j})_i = h_x^{−2} [ U_{x;i,j} ψ_{i+1,j} − 2ψ_{i,j} + Ū_{x;i−1,j} ψ_{i−1,j} ],

(2.27)  (Lyy(U_{y;i,·}) ψ_{i,·})_j = h_y^{−2} [ U_{y;i,j} ψ_{i,j+1} − 2ψ_{i,j} + Ū_{y;i,j−1} ψ_{i,j−1} ],

(2.28)  (Dxy A_{x;·,·})_{i,j} = h_x^{−1} h_y^{−1} [ (A_{x;i,j+1} − A_{x;i,j}) − (A_{x;i−1,j+1} − A_{x;i−1,j}) ],

(2.29)  (Dyx A_{y;·,·})_{i,j} = h_x^{−1} h_y^{−1} [ (A_{y;i+1,j} − A_{y;i,j}) − (A_{y;i+1,j−1} − A_{y;i,j−1}) ],

(2.30)  (Dxx A_{y;·,j})_i = h_x^{−2} [ A_{y;i+1,j} − 2A_{y;i,j} + A_{y;i−1,j} ],

(2.31)  (Dyy A_{x;i,·})_j = h_y^{−2} [ A_{x;i,j+1} − 2A_{x;i,j} + A_{x;i,j−1} ],

(2.32)  N(ψ_{i,j}) = τ_{i,j} ψ_{i,j} − |ψ_{i,j}|² ψ_{i,j}.
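As an illustration of Eqs. (2.21), (2.22), (2.26), (2.27), and (2.32), the following Python sketch (not part of the paper; the grid sizes, the value of κ, and the trial fields are illustrative assumptions, and the grid is treated as doubly periodic) evaluates the link variables, the supercurrent components, the gauged second differences, and the local nonlinearity at the grid vertices.

```python
import numpy as np

# Toy doubly periodic grid (illustrative values; the interface conditions
# (2.33) and the x-boundedness of the real problem are ignored here).
kappa, hx, hy = 4.0, 0.5, 0.5
nx, ny = 32, 32
rng = np.random.default_rng(0)
psi = rng.standard_normal((nx, ny)) + 1j * rng.standard_normal((nx, ny))
Ax = 0.1 * rng.standard_normal((nx, ny))
Ay = 0.1 * rng.standard_normal((nx, ny))

# Link variables, Eq. (2.21).
Ux = np.exp(-1j * hx * Ax / kappa)
Uy = np.exp(-1j * hy * Ay / kappa)

# Supercurrent components, Eq. (2.22):
# J_x;i,j = (1/(kappa*hx)) Im[conj(psi_ij) * Ux_ij * psi_{i+1,j}].
Jx = np.imag(np.conj(psi) * Ux * np.roll(psi, -1, axis=0)) / (kappa * hx)
Jy = np.imag(np.conj(psi) * Uy * np.roll(psi, -1, axis=1)) / (kappa * hy)

# Gauged second differences, Eqs. (2.26)-(2.27):
# (Lxx psi)_ij = hx^-2 [Ux_ij psi_{i+1,j} - 2 psi_ij + conj(Ux_{i-1,j}) psi_{i-1,j}].
Lxx_psi = (Ux * np.roll(psi, -1, axis=0) - 2.0 * psi
           + np.conj(np.roll(Ux, 1, axis=0)) * np.roll(psi, 1, axis=0)) / hx**2
Lyy_psi = (Uy * np.roll(psi, -1, axis=1) - 2.0 * psi
           + np.conj(np.roll(Uy, 1, axis=1)) * np.roll(psi, 1, axis=1)) / hy**2

# Local nonlinearity, Eq. (2.32), with a uniform tau = 1 (no defects).
tau = 1.0
N_psi = tau * psi - np.abs(psi)**2 * psi
```

When A = 0 the link variables reduce to 1 and Lxx, Lyy reduce to ordinary centered second differences, which is a convenient sanity check.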

The interface conditions are
(2.33)  ψ_{nsx−1,j} = U_{x;nsx−1,j} ψ_{nsx,j},  ψ_{nex+1,j} = Ū_{x;nex,j} ψ_{nex,j},  j = 1, . . . , ny.

At the outer boundary, B is given,

(2.34)  B_{0,j} = H_{L,j},  B_{nx,j} = H_{R,j},  j = 1, . . . , ny.

The resulting approximation is second-order accurate [8].

3. Time Integration. We now address the integration of Eqs. (2.23)–(2.25). The first equation, which controls the evolution of ψ, involves the second-order linear finite-difference operators Lxx and Lyy, whose coefficients depend on A_x and A_y, and the local nonlinear operator N, which involves neither A_x nor A_y. Each of the other two equations, which control the evolution of A_x and A_y, respectively, likewise involves a second-order linear finite-difference operator, but with constant coefficients, and the nonlinear supercurrent operator, which involves ψ, A_x, and A_y. The following algorithms are distinguished by whether the various operators are treated explicitly or implicitly.

3.1. Fully Explicit Integration. Algorithm I uses a fully explicit forward Euler time-marching procedure for ψ, A_x, and A_y. Starting from an initial triple (ψ⁰, A_x⁰, A_y⁰), we solve for n = 0, 1, . . . ,

(3.1)  (ψ^{n+1}_{i,j} − ψ^n_{i,j})/Δt = (Lxx(U^n_{x;·,j}) ψ^n_{·,j})_i + (Lyy(U^n_{y;i,·}) ψ^n_{i,·})_j + N(ψ^n_{i,j}),  (i, j) ∈ Sc,

(3.2)  σ (A^{n+1}_{x;i,j} − A^n_{x;i,j})/Δt = (Dyy A^n_{x;i,·})_j − (Dyx A^n_{y;·,·})_{i,j} + J^n_{x;i,j},  (i, j) ∈ Sc ∪ Bl,

(3.3)  σ (A^{n+1}_{y;i,j} − A^n_{y;i,j})/Δt = (Dxx A^n_{y;·,j})_i − (Dxy A^n_{x;·,·})_{i,j} + J^n_{y;i,j},  (i, j) ∈ Sc ∪ Bl,

where J^n is defined in terms of ψ^n, A^n_x, and A^n_y in the obvious way. The initial triple is usually chosen so the superconductor is in the Meissner state, with a seed present to trigger the transition to the vortex state.

Algorithm I has been described in [8]. It has been implemented in a distributed-memory multiprocessor environment (IBM SP2); the transformations necessary to achieve the parallelism have been described in [9]. The code uses the Message Passing Interface (MPI) standard [10] as implemented in the MPICH software library [11] for domain decomposition, interprocessor communication, and file I/O. The code has been used extensively to study vortex dynamics in superconducting media [5, 6, 7]. The underlying algorithm provides highly accurate solutions but requires a significant number of time steps for equilibration. For stability reasons, the time step Δt cannot exceed 0.0025.
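The structure of one explicit step (3.1)–(3.3) can be summarized in a short Python sketch. This is not the production code described in [8, 9]; it is a serial, doubly periodic toy version with illustrative parameters, and it omits the blanket region, the interface conditions (2.33), and the boundary data (2.34), which the real implementation must handle.

```python
import numpy as np

def euler_step(psi, Ax, Ay, dt, kappa, sigma, hx, hy, tau=1.0):
    """One forward Euler step of (3.1)-(3.3) on a doubly periodic toy grid."""
    # Link variables, Eq. (2.21).
    Ux = np.exp(-1j * hx * Ax / kappa)
    Uy = np.exp(-1j * hy * Ay / kappa)

    # Gauged second differences of psi, Eqs. (2.26)-(2.27).
    Lxx = (Ux * np.roll(psi, -1, 0) - 2 * psi
           + np.conj(np.roll(Ux, 1, 0)) * np.roll(psi, 1, 0)) / hx**2
    Lyy = (Uy * np.roll(psi, -1, 1) - 2 * psi
           + np.conj(np.roll(Uy, 1, 1)) * np.roll(psi, 1, 1)) / hy**2

    # Supercurrent components at the edge midpoints, Eq. (2.22).
    Jx = np.imag(np.conj(psi) * Ux * np.roll(psi, -1, 0)) / (kappa * hx)
    Jy = np.imag(np.conj(psi) * Uy * np.roll(psi, -1, 1)) / (kappa * hy)

    # Second and mixed differences of A, Eqs. (2.28)-(2.31).
    Dyy_Ax = (np.roll(Ax, -1, 1) - 2 * Ax + np.roll(Ax, 1, 1)) / hy**2
    Dxx_Ay = (np.roll(Ay, -1, 0) - 2 * Ay + np.roll(Ay, 1, 0)) / hx**2
    Dyx_Ay = ((np.roll(Ay, -1, 0) - Ay)
              - (np.roll(np.roll(Ay, -1, 0), 1, 1) - np.roll(Ay, 1, 1))) / (hx * hy)
    Dxy_Ax = ((np.roll(Ax, -1, 1) - Ax)
              - (np.roll(np.roll(Ax, -1, 1), 1, 0) - np.roll(Ax, 1, 0))) / (hx * hy)

    # Explicit updates, Eqs. (3.1)-(3.3).
    psi_new = psi + dt * (Lxx + Lyy + tau * psi - np.abs(psi)**2 * psi)
    Ax_new = Ax + (dt / sigma) * (Dyy_Ax - Dyx_Ay + Jx)
    Ay_new = Ay + (dt / sigma) * (Dxx_Ay - Dxy_Ax + Jy)
    return psi_new, Ax_new, Ay_new
```

A driver simply iterates euler_step; the production code tracks the number of vortices and their positions to detect equilibration, as described in Section 4, and distributes the grid over processors with MPI.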

3.2. Semi-Implicit Integration. Algorithm II is generated by an implicit treatment of the second-order linear finite-difference operators Dyy and Dxx in the equations for A_x and A_y, respectively,
(3.4)  (ψ^{n+1}_{i,j} − ψ^n_{i,j})/Δt = (Lxx(U^n_{x;·,j}) ψ^n_{·,j})_i + (Lyy(U^n_{y;i,·}) ψ^n_{i,·})_j + N(ψ^n_{i,j}),  (i, j) ∈ Sc,

(3.5)  σ (A^{n+1}_{x;i,j} − A^n_{x;i,j})/Δt = (Dyy A^{n+1}_{x;i,·})_j − (Dyx A^n_{y;·,·})_{i,j} + J^n_{x;i,j},  (i, j) ∈ Sc ∪ Bl,

(3.6)  σ (A^{n+1}_{y;i,j} − A^n_{y;i,j})/Δt = (Dxx A^{n+1}_{y;·,j})_i − (Dxy A^n_{x;·,·})_{i,j} + J^n_{y;i,j},  (i, j) ∈ Sc ∪ Bl.

Equations (3.5) and (3.6) lead to two linear systems of equations,

(3.7)  (I − (Δt/σ) Dyy) A^{n+1}_{x;i} = F_i(ψ^n, A^n_x, A^n_y),  i = 1, . . . , nx − 1,

(3.8)  (I − (Δt/σ) Dxx) A^{n+1}_{y;j} = G_j(ψ^n, A^n_x, A^n_y),  j = 1, . . . , ny,

for the vectors of unknowns A_{x;i} = {A_{x;i,j} : j = 1, . . . , ny} and A_{y;j} = {A_{y;i,j} : i = 1, . . . , nx − 1}. The matrix Dyy has dimension ny × ny and is periodic tridiagonal with elements −h_y^{−2}, 2h_y^{−2}, −h_y^{−2}; the matrix Dxx has dimension (nx − 1) × (nx − 1) and is tridiagonal with elements −h_x^{−2}, 2h_x^{−2}, −h_x^{−2} (except along the edges, because of the boundary conditions). Both matrices are independent of i and j. Furthermore, if the boundary conditions are time independent, they are constant throughout the time-stepping process. Hence, the coefficient matrices in Eqs. (3.7) and (3.8) need to be factored only once; in fact, the factorization can be done in the preprocessing stage and the factors can be stored.

In a parallel processing environment, the coefficient matrices extend over several processors, so Eqs. (3.7) and (3.8) are broken up in blocks corresponding to the manner in which the computational mesh is distributed among the processor set. We first solve the equations within each processor (inner iterations) and then couple the solutions across processor boundaries (outer iterations). Hence, we deal with interprocessor coupling in an iterative fashion. Two to three inner iterations usually suffice to reach a desired tolerance for convergence. After each inner iteration, each processor shares boundary data with its neighbors through MPI calls.
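A minimal sketch of the "factor once, solve every step" idea behind (3.7) is given below (my illustration, not the paper's code; it builds the dense periodic matrix and uses an LU factorization from SciPy, whereas a production code would use a banded or cyclic-tridiagonal solver and the block-iterative treatment described above).

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

ny, hy = 192, 0.5          # illustrative sizes
dt, sigma = 0.05, 1.0      # illustrative time step and conductivity

# Coefficient matrix of (3.7): I - (dt/sigma) * Dyy, with Dyy the periodic
# second-difference operator (stencil [1, -2, 1]/hy^2 with wraparound).
Dyy = (np.diag(-2.0 * np.ones(ny)) + np.diag(np.ones(ny - 1), 1)
       + np.diag(np.ones(ny - 1), -1)) / hy**2
Dyy[0, -1] = Dyy[-1, 0] = 1.0 / hy**2          # periodicity in y
M = np.eye(ny) - (dt / sigma) * Dyy

# Factor once (preprocessing stage), then reuse the factors every time step.
lu, piv = lu_factor(M)

def solve_row(rhs):
    """Solve (3.7) for one vector of unknowns A_x;i = {A_x;i,j : j}."""
    return lu_solve((lu, piv), rhs)

# Example: one right-hand side F_i for a given interior index i.
rhs = np.random.default_rng(1).standard_normal(ny)
Ax_row_new = solve_row(rhs)
```

Since M is the same for every i and for every time step (as long as the boundary data are time independent), the factorization cost is amortized over the whole run, which is what keeps the semi-implicit treatment cheap per step.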
3.3. Implicit Integration. Algorithm III combines the semi-implicit treatment of A_x and A_y with an implicit treatment of the order parameter,

(3.9)  (ψ^{n+1}_{i,j} − ψ^n_{i,j})/Δt = (Lxx(U^n_{x;·,j}) ψ^{n+1}_{·,j})_i + (Lyy(U^n_{y;i,·}) ψ^{n+1}_{i,·})_j + N(ψ^n_{i,j}),  (i, j) ∈ Sc,

(3.10)  σ (A^{n+1}_{x;i,j} − A^n_{x;i,j})/Δt = (Dyy A^{n+1}_{x;i,·})_j − (Dyx A^n_{y;·,·})_{i,j} + J^n_{x;i,j},  (i, j) ∈ Sc ∪ Bl,

(3.11)  σ (A^{n+1}_{y;i,j} − A^n_{y;i,j})/Δt = (Dxx A^{n+1}_{y;·,j})_i − (Dxy A^n_{x;·,·})_{i,j} + J^n_{y;i,j},  (i, j) ∈ Sc ∪ Bl.


The second and third equations are solved as in the semi-implicit algorithm of the preceding section. The first equation is solved by a method similar to the method of Douglas and Gunn [12] for the Laplacian. We begin by transforming Eq. (3.9) into an equation for the correction matrix φ^{n+1} = ψ^{n+1} − ψ^n. The equation has the general form

(3.12)  (I − Δt(Lxx + Lyy)) φ^{n+1} = F(ψ^n, A^n_x, A^n_y).

If Δt is sufficiently small, we may replace the operator in the left member by an approximate factorization,

(3.13)  (I − Δt(Lxx + Lyy)) ≈ (I − Δt Lxx)(I − Δt Lyy),

and consider, instead of Eq. (3.12),

(3.14)  (I − Δt Lxx)(I − Δt Lyy) φ^{n+1} = F(ψ^n, A^n_x, A^n_y).

This equation can be solved in two steps, with an intermediate variable θ,

(3.15)  (I − Δt Lxx) θ = F,

(3.16)  (I − Δt Lyy) φ^{n+1} = θ.
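The two-sweep structure of (3.15)–(3.16) amounts to a family of one-dimensional complex tridiagonal solves: one system in the x direction for every grid line j, followed by one system in the y direction for every grid line i. The sketch below (my illustration with assumed array shapes; it uses dense solves and doubly periodic wraparound instead of the interface conditions discussed next) shows that structure for a single time step.

```python
import numpy as np

def adi_solve(F, Ux, Uy, dt, hx, hy):
    """Solve (I - dt*Lxx)(I - dt*Lyy) phi = F by the two sweeps (3.15)-(3.16).

    F, Ux, Uy are complex arrays of shape (nx, ny); the grid is treated as
    doubly periodic for simplicity (a stand-in for the interface handling).
    """
    nx, ny = F.shape
    cx, cy = dt / hx**2, dt / hy**2

    # Sweep 1: for each j, solve (I - dt*Lxx) theta[:, j] = F[:, j].
    theta = np.empty_like(F)
    for j in range(ny):
        M = np.zeros((nx, nx), dtype=complex)
        for i in range(nx):
            M[i, i] = 1.0 + 2.0 * cx
            M[i, (i + 1) % nx] = -cx * Ux[i, j]
            M[i, (i - 1) % nx] = -cx * np.conj(Ux[(i - 1) % nx, j])
        theta[:, j] = np.linalg.solve(M, F[:, j])

    # Sweep 2: for each i, solve (I - dt*Lyy) phi[i, :] = theta[i, :].
    phi = np.empty_like(F)
    for i in range(nx):
        M = np.zeros((ny, ny), dtype=complex)
        for j in range(ny):
            M[j, j] = 1.0 + 2.0 * cy
            M[j, (j + 1) % ny] = -cy * Uy[i, j]
            M[j, (j - 1) % ny] = -cy * np.conj(Uy[i, (j - 1) % ny])
        phi[i, :] = np.linalg.solve(M, theta[i, :])

    return phi
```

Unlike the systems (3.7)–(3.8), these one-dimensional systems change from step to step, because the link variables depend on A^n; this is consistent with the higher per-step cost C of the implicit algorithms reported in Table 4.1.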

The conditions (2.33), which must be satisfied at the interface between the superconductor and the blanket material, require some care. If we impose the conditions at every time step, then

φ^{n+1}_{nsx−1,j} = U^{n+1}_{x;nsx−1,j} φ^{n+1}_{nsx,j} + (U^{n+1}_{x;nsx−1,j} − U^n_{x;nsx−1,j}) ψ^n_{nsx,j},

φ^{n+1}_{nex+1,j} = Ū^{n+1}_{x;nex,j} φ^{n+1}_{nex,j} + (Ū^{n+1}_{x;nex,j} − Ū^n_{x;nex,j}) ψ^n_{nex,j},

for j = 1, . . . , ny. These conditions couple the correction φ to the update of A_x. To eliminate this coupling, we solve Eq. (3.12) subject to the reduced interface conditions

(3.17)  φ^{n+1}_{nsx−1,j} = U^{n+1}_{x;nsx−1,j} φ^{n+1}_{nsx,j},  j = 1, . . . , ny,

(3.18)  φ^{n+1}_{nex+1,j} = Ū^{n+1}_{x;nex,j} φ^{n+1}_{nex,j},  j = 1, . . . , ny.

When Eq. (3.12) is replaced by Eq. (3.14), these conditions are inherited by the system (3.15).

3.4. Fully Implicit Integration. Algorithm IV uses a fully implicit integration procedure for the order parameter,

(3.19)  (ψ^{n+1}_{i,j} − ψ^n_{i,j})/Δt = (Lxx(U^n_{x;·,j}) ψ^{n+1}_{·,j})_i + (Lyy(U^n_{y;i,·}) ψ^{n+1}_{i,·})_j + N(ψ^{n+1}_{i,j}),  (i, j) ∈ Sc,

(3.20)  σ (A^{n+1}_{x;i,j} − A^n_{x;i,j})/Δt = (Dyy A^{n+1}_{x;i,·})_j − (Dyx A^n_{y;·,·})_{i,j} + J^n_{x;i,j},  (i, j) ∈ Sc ∪ Bl,

(3.21)  σ (A^{n+1}_{y;i,j} − A^n_{y;i,j})/Δt = (Dxx A^{n+1}_{y;·,j})_i − (Dxy A^n_{x;·,·})_{i,j} + J^n_{y;i,j},  (i, j) ∈ Sc ∪ Bl.

The new element here is the term N(ψ^{n+1}_{i,j}) in the first equation.


The second and third equations are solved again as in the semi-implicit algorithm. The first equation is solved by a slight modification of the method used in the implicit algorithm of the preceding section. The modification is brought about by the approximation

(3.22)  N(ψ^{n+1}) = τψ^{n+1} − |ψ^{n+1}|²ψ^{n+1} ≈ (1/Δt) (S(ψ^n) − ψ^n),

where S is a nonlinear map,

(3.23)  S(ψ) = τ^{1/2} ψ / [ |ψ|² + (τ − |ψ|²) exp(−2τΔt) ]^{1/2}.

(This approximation is explained in the remark below.) Equation (3.19) is again of the form (3.12), but with a different right-hand side,

(3.24)  (I − Δt(Lxx + Lyy)) φ^{n+1} = G(ψ^n, A^n_x, A^n_y).

The difference is that, where F in Eq. (3.12) contains a term (Δt)N(ψ^n), G in Eq. (3.24) contains the more complicated term S(ψ^n) − ψ^n.
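The map S in (3.23) is cheap to evaluate pointwise. The short sketch below (an illustration, not the paper's code; the test values and step sizes are arbitrary) implements S and checks numerically that (S(ψ) − ψ)/Δt approaches N(ψ) = τψ − |ψ|²ψ as Δt → 0, which is the semigroup limit invoked in the remark that follows.

```python
import numpy as np

def S(psi, dt, tau=1.0):
    """Nonlinear map of Eq. (3.23), applied elementwise to the order parameter."""
    return np.sqrt(tau) * psi / np.sqrt(np.abs(psi)**2
                                        + (tau - np.abs(psi)**2) * np.exp(-2.0 * tau * dt))

def N(psi, tau=1.0):
    """Local nonlinearity N(psi) = tau*psi - |psi|^2 psi."""
    return tau * psi - np.abs(psi)**2 * psi

# Check the limit (S(psi) - psi)/dt -> N(psi) as dt -> 0
# at a few arbitrary complex test values.
psi = np.array([0.3 + 0.4j, 0.9 - 0.1j, 1.2 + 0.0j])
for dt in (1e-2, 1e-4, 1e-6):
    err = np.max(np.abs((S(psi, dt) - psi) / dt - N(psi)))
    print(f"dt = {dt:.0e}   max deviation from N(psi) = {err:.2e}")
```

The deviation shrinks proportionally to Δt, so replacing ΔtN(ψ^n) by S(ψ^n) − ψ^n does not change the formal accuracy of the scheme while treating the local nonlinearity exactly.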

Remark. The approximation (3.22) is suggested by semigroup theory. Symbolically,

(3.25)  N(ψ) = lim_{Δt→0} (S(Δt)ψ − ψ) / Δt.

To find an expression for the "semigroup" S, we start from the continuous TDGL equations (2.6)–(2.8) (zero-electric-potential gauge, Φ = 0), using the polar representation ψ = |ψ|e^{iφ},

(3.26)  ∂_t|ψ| = Δ|ψ| − |ψ| |∇φ − κ^{−1}A|² + τ|ψ| − |ψ|³,

(3.27)  |ψ| ∂_tφ = 2(∇|ψ|)·(∇φ − κ^{−1}A) + |ψ| ∇·(∇φ − κ^{−1}A),

(3.28)  σ ∂_tA = −∇×∇×A + κ^{−1}|ψ|²(∇φ − κ^{−1}A).

At this point, we are interested in the effect of the nonlinear term |ψ|³ on the dynamics. To highlight this effect, we concentrate on the time evolution of the scalar u = |ψ| and the vector v = ∇φ − κ^{−1}A. (In physical terms, u² is the density of superconducting charge carriers, while u²v is κ times the supercurrent density.) Ignoring their spatial variations, we have a dynamical system,

(3.29)  u′ = −u|v|² + τu − u³,

(3.30)  v′ = −εu²v,

where ′ denotes differentiation with respect to t, and ε = (κ²σ)^{−1}. This system yields a pair of ordinary differential equations for the scalars x = u² and y = |v|²,

(3.31)  x′ = 2x(τ − x − y),

(3.32)  y′ = −2εxy.

If κ is large, ε is small, and the dynamics are readily analyzed. To leading order, y is constant; y = 0 is the only meaningful choice. (Recall that x y^{1/2} is κ times the magnitude of the supercurrent density.) Then the dynamics of x are given by

(3.33)  x′ = 2x(τ − x).


We integrate this equation from t = t_n to t,

(3.34)  x(t) = τ x(t_n) / [ x(t_n) + (τ − x(t_n)) exp(−2τ(t − t_n)) ].

In particular,

(3.35)  x(t_{n+1}) = τ x(t_n) / [ x(t_n) + (τ − x(t_n)) exp(−2τΔt) ],

where Δt = t_{n+1} − t_n. Since x(t_n) = |ψ^n|² and x(t_{n+1}) = |ψ^{n+1}|², it follows that

(3.36)  |ψ^{n+1}| = τ^{1/2} |ψ^n| / [ |ψ^n|² + (τ − |ψ^n|²) exp(−2τΔt) ]^{1/2}.

The phase φ of ψ is constant in time. If we multiply both sides by e^{iφ}, we obtain the expression (3.23) for the "semigroup" S.

4. Evaluation. We now present the results of several experiments, where the algorithms described in the preceding section were applied to a benchmark problem.

4.1. Benchmark Problem. The benchmark problem adopted for this investigation was the equilibration of a vortex configuration in a superconductor (Ginzburg–Landau parameter κ = 16) embedded in a thin insulator (air), where the entire system was periodic in the direction of the free surfaces (y). The superconductor measured 128ξ in the transverse (x) direction. The thickness of the insulating layer on either side was taken to be 2ξ, so the total width of the system was 132ξ. The period in the y direction was taken to be 192ξ, so the entire configuration measured 132ξ × 192ξ. The computational grid was uniform, with a mesh width h_x = h_y = (1/2)ξ. The periodic boundary conditions in the y direction were handled through ghost points, so the computational grid had 264 × 386 vertices. The index sets for the superconductor and blanket (see Eqs. (2.16) and (2.17)) were

(4.1)  Sc = {(i, j) : i = 5, . . . , 260, j = 1, . . . , 386},

(4.2)  Bl = {(i, j) : i = 1, . . . , 4, 261, . . . , 264, j = 1, . . . , 386}.

The applied field was uniform in y and equally strong on the left and right sides of the system,

(4.3)  H_L = H_R = H = 0.5.

(Units of H are H_c√2, so H ≈ 0.707 H_c.) As there is no transport current in the system, the solution of the TDGL equations tends to an equilibrium state.

4.2. Benchmark Solution. First, preliminary runs were made to determine, for each algorithm, the optimal number of processors in a multiprocessing environment. Figure 4.1 shows the elapsed (wall clock) time for 50 time steps against the number of processors on the IBM SP2. Each algorithm showed a saturation around 16 processors, beyond which any improvement became marginal. All problems were subsequently run on 16 processors.

Next, the fully explicit Algorithm I was used to establish a benchmark equilibrium configuration. Equations (3.1)–(3.3) were integrated with a time step Δt = 0.0025 (units of ξ²/D), the maximal value for which the algorithm remained stable.

Fig. 4.1. Elapsed time for 50 time steps as a function of the number of processors (total time in seconds; separate curves for Algorithm I, Algorithm II, and Algorithms III and IV).

The evolution of the vortex configuration was followed by monitoring the number of vortices and their positions. Equilibrium was reached after 10,000,000 time steps, when the number of vortices remained constant and the vortex positions varied less than 1.0 × 10^{−6} (units of ξ). The equilibrium vortex configuration had 116 vortices arranged in a hexagonal pattern; see Fig. 4.2. The elapsed time for the entire computation was 50.81 hours. The elapsed time per time step (0.018 seconds) is a measure of the computational cost of Algorithm I.

4.3. Evaluation of Algorithms II–IV. Once the benchmark solution was in place, each of the remaining algorithms (II–IV) was evaluated for stability, accuracy, and computational cost. The stability limit was found by gradually increasing the time step and integrating until equilibrium. Above the stability limit, the algorithm failed because of arithmetic divergences. Equilibrium was defined by the same criteria as for the benchmark solution: no change in the number of vortices and a variation in the vortex positions of less than 1.0 × 10^{−6}. The results are given in Table 4.1; Δt is the time step at the stability limit (units of ξ²/D), N the number of time steps needed to reach equilibrium, T the elapsed (wall clock) time (in hours) needed to compute the equilibrium configuration, and C the cost (in seconds per time step, C = 3600 T/N).

Because each algorithm defines its own path through phase space, one cannot expect to find identical equilibrium configurations nor equilibrium configurations that are exactly the same as the benchmark. The equilibrium vortex configurations for the four algorithms were indeed different, albeit slightly. To measure the differences quantitatively, we computed the following three parameters: (1) the number of vortices

Fig. 4.2. Equilibrium vortex configuration for the benchmark problem.

in the superconducting region, (2) the mean bond length joining neighboring pairs of vortices, and (3) the mean bond angle subtended by neighboring bonds throughout the vortex lattice. In all cases, the number of vortices was the same (116); the mean bond length varied less than 1.0 × 10^{−3} ξ, and the mean bond angle varied by less than 1.0 × 10^{−3} radians. Within these tolerances, the equilibrium vortex configurations were the same.
Table 4.1
Performance data for Algorithms I–IV.

Algorithm    Δt       N            C       T
I            0.0025   10,000,000   0.018   50.81
II           0.0500   500,000      0.103   14.32
III          0.1000   250,000      0.232   16.11
IV           0.1900   131,580      0.233    8.41

(Δt: time step at the stability limit, units of ξ²/D; N: number of time steps to equilibrium; C: cost in seconds per time step; T: elapsed wall clock time in hours.)
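For reference, the per-step cost column follows from the other two columns via the relation C = 3600 T/N stated above; the following snippet (illustrative only) recomputes it from the tabulated N and T:

```python
# Recompute the per-step cost C = 3600*T/N from the N and T columns of Table 4.1.
table = {"I": (10_000_000, 50.81), "II": (500_000, 14.32),
         "III": (250_000, 16.11), "IV": (131_580, 8.41)}
for alg, (N, T) in table.items():
    print(f"Algorithm {alg}: C = {3600 * T / N:.3f} s/step")
```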

Finally, we evaluated the fully implicit Algorithm IV from the point of view of parallelism. From the benchmark problem we derived two more problems by twice doubling the size of the system in each direction, while keeping the mesh width the same ((1/2)ξ). The resulting computational grid had 528 × 772 vertices for the intermediate problem and 1056 × 1544 vertices for the largest problem. Speedup was defined as the ratio of the wall clock time (exclusive of I/O) needed to reach equilibrium on a single processor to the time needed on p processors; for the largest problem, which did not fit on a single processor, twice the time to reach equilibrium on two processors was used in place of the single-processor time. The results are given in Fig. 4.3.

Fig. 4.3. Speedup as a function of the number of processors for the three problem sizes (264 × 386, 528 × 772, and 1056 × 1544 vertices).

The curve for the benchmark problem was obtained as an average over many runs; the data for the intermediate and largest problems were obtained from single runs, hence they are less smooth. The speedup is clearly linear when the number of processors is small; it becomes sublinear at about 12 processors for the smallest problem, 14 processors for the intermediate problem, and 18 processors for the largest problem.

5. Conclusions. The results of the investigation lead to the following conclusions. (1) One can increase the time step Δt nearly 80-fold, without losing stability, by going from the fully explicit Algorithm I to the fully implicit Algorithm IV. (2) As one goes to the fully implicit Algorithm IV, the complexity of the matrix calculations and, hence, the cost C of a single time step increase. (3) The increase in the cost C per time step is more than offset by the increase in the size of the time step Δt. In fact, the wall clock time needed to compute the same equilibrium state with the fully implicit Algorithm IV is one-sixth of the wall clock time for the fully explicit Algorithm I. (4) The (physical) time to reach equilibrium, that is, NΔt (the number of time steps needed to reach equilibrium times the step size), is approximately the same for all algorithms, namely, 25,000 (units of ξ²/D). (5) The fully implicit Algorithm IV displays linear speedup in a multiprocessing environment. The speedup curves show sublinear behavior when the number of processors is large.
REFERENCES

[1] V. L. Ginzburg and L. D. Landau, On the theory of superconductivity, Zh. Eksp. Teor. Fiz. (USSR), 20 (1950), pp. 1064–1082; Engl. transl. in D. ter Haar, L. D. Landau: Men of Physics, Vol. I, Pergamon Press, Oxford, 1965, pp. 138–167.


[2] M. Tinkham, Introduction to Superconductivity, 2nd ed., McGraw-Hill, New York, 1996.
[3] A. Schmid, A time dependent Ginzburg–Landau equation and its application to a problem of resistivity in the mixed state, Phys. Kondens. Materie, 5 (1966), pp. 302–317.
[4] L. P. Gor'kov and G. M. Eliashberg, Generalizations of the Ginzburg–Landau equations for non-stationary problems in the case of alloys with paramagnetic impurities, Zh. Eksp. Teor. Fiz., 54 (1968), pp. 612–626; Soviet Phys. JETP, 27 (1968), pp. 328–334.
[5] D. W. Braun et al., Structure of a moving vortex lattice, Phys. Rev. Lett., 76 (1996), pp. 831–834.
[6] G. W. Crabtree et al., Time-dependent Ginzburg–Landau simulations of vortex guidance by twin boundaries, Physica C, 263 (1996), pp. 401–408.
[7] G. W. Crabtree et al., Vortex motion through defects, Preprint ANL/MCS-P764-0699, Argonne National Laboratory, Argonne, IL, 1999.
[8] W. D. Gropp et al., Numerical simulation of vortex dynamics in type-II superconductors, J. Comput. Phys., 123 (1996), pp. 254–266.
[9] N. Galbreath et al., Parallel solution of the three-dimensional, time-dependent Ginzburg–Landau equation, in Proc. Sixth SIAM Conference on Parallel Processing for Scientific Computing, R. F. Sincovec, D. E. Keyes, M. R. Leuze, L. R. Petzold, and D. A. Reed, eds., SIAM, Philadelphia, 1993, pp. 160–164.
[10] J. Dongarra et al., MPI–The Complete Reference, Vols. I & II, MIT Press, Cambridge, MA, 1998.
[11] W. Gropp et al., A High-Performance, Portable Implementation of the MPI Message Passing Interface Standard, Technical Report ANL/MCS-P567-0296, Argonne National Laboratory, Argonne, IL, 1996.
[12] J. Douglas and J. E. Gunn, A general formulation of alternating direction methods–Part I: Parabolic and hyperbolic problems, Numer. Math., 6 (1964), pp. 428–453.

