The string equation for polynomials

Anal. Math. Phys. (2018) 8:637–653, https://doi.org/10.1007/s13324-018-0239-3
Björn Gustafsson
Received: 20 March 2018 / Accepted: 5 June 2018 / Published online: 15 June 2018
© The Author(s) 2018

Abstract  For conformal maps defined in the unit disk one can define a certain Poisson bracket that involves the harmonic moments of the image domain. When this bracket is applied to the conformal map itself together with its conformally reflected map, the result is identically one. This is called the string equation, and it is closely connected to the governing equation, the Polubarinova–Galin equation, for the evolution of a Hele-Shaw blob of a viscous fluid (or, by another name, Laplacian growth). In the present paper we show that the string equation makes sense and holds for general polynomials.

Keywords  String equation · Poisson bracket · Polubarinova–Galin equation · Hele-Shaw flow · Laplacian growth · Harmonic moment · Resultant

Mathematics Subject Classification  Primary 30C55 · 31A25; Secondary 34M35 · 37K05 · 76D27

In Memory of Alexander Vasil'ev

B. Gustafsson, gbjorn@kth.se, Department of Mathematics, KTH, 100 44 Stockholm, Sweden

1 Introduction

This paper is inspired by 15 years of collaboration with Alexander Vasil'ev. It gives some details related to a talk given at the conference "ICAMI 2017 at San Andrés Island, Colombia", November 26–December 1, 2017, partly in honor of Alexander Vasil'ev.

My collaboration with Alexander Vasil'ev started with some specific questions concerning Hele-Shaw flow and evolved over time into various areas of modern mathematical physics. The governing equation for the Hele-Shaw flow moving boundary problem we were studying is called the Polubarinova–Galin equation, after the two Russian mathematicians Polubarinova-Kochina and Galin, who formulated this equation around 1945.
Shortly afterwards, in 1948, Vinogradov and Kufarev were able to prove local existence of solutions of the appropriate initial value problem, under the necessary analyticity conditions.

Much later, around 2000, another group of Russian mathematicians, or mathematical physicists, led by Mineev-Weinstein, Wiegmann, Zabrodin, considered the Hele-Shaw problem from the point of view of integrable systems, and the corresponding equation then reappears under the name "string equation". See for example [11,12,14,25]. The integrable system approach appears as a consequence of the discovery in 1972 by Richardson [15] that the Hele-Shaw problem has a complete set of conserved quantities, namely the harmonic moments. See [24] for the history of the Hele-Shaw problem in general.

It is not clear whether the name "string equation" really refers to string theory, but it is known that the subject as a whole has connections to, for example, 2D quantum gravity, and hence is at least indirectly related to string theory. In any case, these matters have been a source of inspiration for Alexander Vasil'ev and myself, and in our first book [8] one of the chapters has the title "Hele-Shaw evolution and strings".

The string equation is deceptively simple and beautiful. It reads

    {f, f^*} = 1,   (1)

in terms of a special Poisson bracket referring to harmonic moments and with f any normalized conformal map from some reference domain, in our case the unit disk, to the fluid domain for the Hele-Shaw flow. The main question for this paper now is: if such a beautiful equation as (1) holds for all univalent functions, shouldn't it also hold for non-univalent functions?

The answer is that the Poisson bracket does not (always) make sense in the non-univalent case, but one can extend its meaning, actually in several different ways, and after such a step the string equation indeed holds.
Thus the problem is not that the string equation is particularly difficult to prove; the problem is that its meaning is ambiguous in the non-univalent case. In this paper we focus on polynomial mappings and show that the string equation has a natural meaning, and holds, in this case. In a companion paper [3] (see also [2]) we treat certain kinds of rational mappings related to quadrature Riemann surfaces.

2 The string equation for univalent conformal maps

We consider analytic functions f(ζ) defined in a neighborhood of the closed unit disk and normalized by f(0) = 0, f'(0) > 0. In addition, we always assume that f' has no zeros on the unit circle. It will be convenient to write the Taylor expansion around the origin on the form

    f(ζ) = Σ_{j=0}^∞ a_j ζ^{j+1}   (a_0 > 0).

If f is univalent it maps D = {ζ ∈ C : |ζ| < 1} onto a domain Ω = f(D). The harmonic moments for this domain are

    M_k = (1/π) ∫_Ω z^k dxdy,   k = 0, 1, 2, ....

The integral here can be pulled back to the unit disk and pushed to the boundary there. This gives

    M_k = (1/2πi) ∫∫_D f(ζ)^k |f'(ζ)|² dζ̄ dζ = (1/2πi) ∮_{∂D} f(ζ)^k f^*(ζ) f'(ζ) dζ,   (2)

where

    f^*(ζ) = \overline{f(1/ζ̄)}   (3)

denotes the holomorphic reflection of f in the unit circle. In the form (2) the moments make sense also when f is not univalent.

Computing the last integral in (2) by residues gives Richardson's formula [15] for the moments:

    M_k = Σ_{(j_0,...,j_k) ≥ (0,...,0)} (j_0 + 1) a_{j_0} a_{j_1} ··· a_{j_k} ā_{j_0+...+j_k+k}.   (4)

This is a highly nonlinear relationship between the coefficients of f and the moments, and even if f is a polynomial of low degree it is virtually impossible to invert it, to obtain a_k = a_k(M_0, M_1, ...), as would be desirable in many situations. Still there is, quite remarkably, an explicit expression for the Jacobi determinant of the change (a_0, a_1, ...) → (M_0, M_1, ...) when f is restricted to the class of polynomials of a fixed degree.
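Since everything above is explicit, formulas (2) and (4) lend themselves to a quick numerical cross-check. The sketch below (with our own illustrative coefficients, not taken from the paper) compares the contour-integral form of the moments with Richardson's formula for a quadratic polynomial.

```python
# Numerical check of (2) against Richardson's formula (4) for
# f(zeta) = a0*zeta + a1*zeta^2.  On |zeta| = 1 the holomorphic reflection
# satisfies f^*(zeta) = conj(f(zeta)), and dzeta = i*zeta*dtheta, so M_k is
# the mean over the circle of f^k * conj(f) * f' * zeta.
import numpy as np

a0, a1 = 1.0, 0.2 + 0.1j            # illustrative coefficients (a0 > 0)
zeta = np.exp(2j * np.pi * np.arange(4096) / 4096)

f = a0 * zeta + a1 * zeta**2
df = a0 + 2 * a1 * zeta             # f'(zeta)

def moment(k):
    """M_k via the contour-integral form (2)."""
    return np.mean(f**k * np.conj(f) * df * zeta)

# Richardson's formula (4), written out for this quadratic f:
M0 = a0**2 + 2 * abs(a1)**2         # k = 0: sum of (j+1)|a_j|^2
M1 = a0**2 * np.conj(a1)            # k = 1: only (j0, j1) = (0, 0) survives

print(abs(moment(0) - M0), abs(moment(1) - M1))  # both tiny (machine precision)
```

The equispaced mean picks out exactly the ζ^0 Laurent coefficient of the trigonometric-polynomial integrand, so the agreement is to machine precision rather than quadrature accuracy.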
This formula, which was proved by Kuznetsova and Tkachev [13,21] after an initial conjecture of Ullemar [22], will be discussed in depth below, and it is the major tool for the main result of this paper, Theorem 1.

There are examples of different simply connected domains having the same harmonic moments, see for example [17,18,26]. Restricting to domains having analytic boundary, the harmonic moments are however sensitive to at least small variations of the domain. This can easily be proved by potential theoretic methods. Indeed, arguing on an intuitive level, an infinitesimal perturbation of the boundary can be represented by a signed measure sitting on the boundary (this measure representing the speed of infinitesimal motion). The logarithmic potential of that measure is a continuous function in the complex plane, and if the harmonic moments were insensitive to the perturbation then the exterior part of this potential would vanish. At the same time the interior potential is a harmonic function, and the only way all these conditions can be satisfied is that the potential vanishes identically, hence also that the measure on the boundary vanishes. On a more rigorous level, in the polynomial case the above mentioned Jacobi determinant is indeed nonzero. Compare also discussions in [16].

The conformal map, with its normalization, is uniquely determined by the image domain Ω and, as indicated above, the domain is locally encoded in the sequence of moments M_0, M_1, M_2, .... Thus the harmonic moments can be viewed as local coordinates in the space of univalent functions, and we may write

    f(ζ) = f(ζ; M_0, M_1, M_2, ...).

In particular, the derivatives ∂f/∂M_k make sense. Now we are in position to define the Poisson bracket.

Definition 1  For any two functions f(ζ) = f(ζ; M_0, M_1, M_2, ...), g(ζ) =
g(ζ; M_0, M_1, M_2, ...) which are analytic in a neighborhood of the unit circle and are parametrized by the moments, we define

    {f, g} = ζ (∂f/∂ζ)(∂g/∂M_0) − ζ (∂g/∂ζ)(∂f/∂M_0).   (5)

This is again a function analytic in a neighborhood of the unit circle and parametrized by the moments.

The Schwarz function [1,20] of an analytic curve Γ is the unique holomorphic function S(z) defined in a neighborhood of Γ and satisfying S(z) = z̄ for z ∈ Γ. When Γ = f(∂D), with f analytic in a neighborhood of ∂D, the defining property of S(z) becomes

    S ∘ f = f^*,   (6)

holding identically in a neighborhood of the unit circle. Notice that f^* and S depend on the moments M_0, M_1, M_2, ..., like f does.

The string equation asserts that

    {f, f^*} = 1   (7)

in a neighborhood of the unit circle, provided f is univalent in a neighborhood of the closed unit disk. This result was first formulated and proved in [25] for the case of conformal maps onto an exterior domain (containing the point of infinity). For conformal maps to bounded domains, a proof based on somewhat different ideas and involving explicitly the Schwarz function was given in [5]. For convenience we briefly recall the proof below.

Writing (6) more explicitly as

    f^*(ζ; M_0, M_1, ...) = S(f(ζ; M_0, M_1, ...); M_0, M_1, ...)

and using the chain rule when computing ∂f^*/∂M_0 gives, after simplification,

    {f, f^*} = ζ (∂f/∂ζ) · ((∂S/∂M_0) ∘ f).   (8)

Next one notices that the harmonic moments are exactly the coefficients in the expansion of a certain Cauchy integral at infinity:

    (1/2πi) ∮_{∂Ω} (w̄ dw)/(z − w) = Σ_{k=0}^∞ M_k/z^{k+1}   (|z| >> 1).

Combining this with the fact that the jump of this Cauchy integral across ∂Ω is z̄, it follows that S(z) equals the difference between the analytic continuations of the exterior (z ∈ Ω^e) and interior (z ∈ Ω) functions defined by the Cauchy integral. Therefore

    S(z; M_0, M_1, ...) = Σ_{k=0}^∞ M_k/z^{k+1} + (function holomorphic in Ω),

and so, since M_0, M_1, ... are independent variables,

    (∂S/∂M_0)(z; M_0, M_1, ...) = 1/z + (function holomorphic in Ω).

Inserting this into (8) one finds that {f, f^*} is holomorphic in D. Since the Poisson bracket is invariant under holomorphic reflection in the unit circle, it follows that {f, f^*} is holomorphic in the exterior of D (including the point of infinity) as well, hence it must be constant. And this constant is found to be one, proving (7).

We wish to extend the above to allow non-univalent analytic functions in the string equation. Then the basic ideas in the above proof still work, but what may happen is that f and S are not determined by the moments M_0, M_1, ... alone. Since ∂f/∂M_0 is a partial derivative, one has to specify all other independent variables in order to give it a meaning. So there may be more variables, say

    f(ζ) = f(ζ; M_0, M_1, ...; B_1, B_2, ...).   (9)

Then the meaning of the string equation depends on the choice of these extra variables. Natural choices turn out to be the locations of branch points, i.e., one takes B_j = f(ω_j), where the ω_j ∈ D denote the zeros of f' inside D.

One good thing with choosing the branch points as additional variables is that keeping these fixed, as is implicit in the notation ∂/∂M_0, means that f can in this case be viewed as a conformal map into a fixed Riemann surface, which will be a branched covering over the complex plane. But there are also other possibilities of giving a meaning to the string equation, for example by restricting f to the class of polynomials of a fixed degree, as we shall do in this paper. Then one must allow the branch points to move, so this gives a different meaning to ∂/∂M_0.

3 Intuition and physical interpretation in the non-univalent case

We shall consider also non-univalent analytic functions as conformal maps, then into Riemann surfaces above C. In general these Riemann surfaces will be branched covering surfaces, and the non-univalence is then absorbed in the covering projection.
It is easy to understand that such a Riemann surface, or the corresponding conformal map, will in general not be determined by the moments M_0, M_1, M_2, ... alone.

As a simple example, consider an oriented curve Γ in the complex plane encircling the origin twice (say). In terms of the winding number, or index,

    ν_Γ(z) = (1/2πi) ∮_Γ dζ/(ζ − z)   (z ∈ C \ Γ),   (10)

this means that ν_Γ(0) = 2. Points far away from the origin have index zero, and some other points may have index one (for example). Having only the curve Γ available, it is natural to define the harmonic moments for the multiply covered (with multiplicities ν_Γ) set inside Γ as

    M_k = (1/π) ∫ z^k ν_Γ(z) dxdy = (1/2πi) ∮_Γ z^k z̄ dz,   k = 0, 1, 2, ....

It is tempting to think of this integer weighted set as a Riemann surface over (part of) the complex plane. However, without further information this is not possible. Indeed, since some points have index ≥ 2 such a covering surface will have to have branch points, and these have to be specified in order to make the set into a Riemann surface. Only after that is it possible to speak about a conformal map f. Thus f is in general not determined by the moments alone.

In the simplest non-univalent cases f will be (locally) determined by the harmonic moments together with the location of the branch points. In principle these branch points can be moved freely within regions of constant value (≥ 2) of ν_Γ. However, if we restrict f to belong to some restricted class of functions, like polynomials of a fixed degree, it may be that the branch points cannot move that freely. Thus restricting the structure of f can be an alternative to adding new parameters B_1, B_2, ... as in (9). This is a way to understand our main result, Theorem 1 below.

In the following two examples, the first illustrates a completely freely moving branch point, while in the second example the branch point is still free, but moving it forces also the boundary curve f(∂D) to move.
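The index integral (10) is easy to probe numerically; the doubly traversed unit circle below is our own toy choice of Γ.

```python
# The winding number (10) for a curve encircling the origin twice:
# Gamma is parametrized as zeta(s) = e^{2is}, s in [0, 2*pi), i.e. the unit
# circle traversed twice.
import numpy as np

s = 2 * np.pi * np.arange(2000) / 2000
gamma = np.exp(2j * s)           # the curve
dgamma_ds = 2j * np.exp(2j * s)  # its derivative with respect to s

def index(z):
    # nu_Gamma(z) = (1/2*pi*i) * integral of dzeta/(zeta - z) over Gamma
    return np.mean(dgamma_ds / (gamma - z)) / 1j

print(round(index(0).real))   # 2: the origin is covered twice
print(round(index(3).real))   # 0: points far from the curve have index zero
```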
Example 1  Let

    f(ζ) = −(a/|a|) · ζ · (1 − āζ)/(ζ − a),

where |a| > 1. This function maps D onto D covered twice, so the above index function is ν = 2χ_D. Thus the corresponding moments are

    M_0 = 2,   M_1 = M_2 = M_3 = ··· = 0,

independent of the choice of a, which hence is a free parameter which does not affect the moments. The same is true for the branch point

    B = f(ω) = a|a| (1 − √(1 − 1/|a|²))²,   (11)

where

    ω = a (1 − √(1 − 1/|a|²))

is the zero of f' in D. Thus this example confirms the above idea that the branch point can be moved freely without affecting the image curve f(∂D) or the moments, while the conformal map itself does depend on the choice of branch point.

Example 2  A related example is given by

    f(ζ) = cζ · (ζ − 2/ā + a/|a|⁴)/(ζ − a),

still with |a| > 1. The derivative of this function is

    f'(ζ) = c · (ζ − 1/ā)(ζ − 2a + 1/ā)/(ζ − a)²,

which vanishes at ζ = 1/ā ∈ D. The branch point is

    B = f(1/ā) = ac/|a|⁴.

Also in this case there is only one nonzero moment, but now for a different reason. What happens in this case is that the zero of f' in D coincides with a pole of the holomorphically reflected function f^*, and therefore annihilates that pole in the appropriate residue calculation. (In the previous example the reason was that both poles of f^* were mapped by f onto the same point, namely the origin.) The calculation goes as follows: for any analytic function g in D, integrable with respect to |f'|², we have

    (1/2πi) ∫∫_D g(ζ)|f'(ζ)|² dζ̄ dζ = (1/2πi) ∮_{∂D} g(ζ) f^*(ζ) f'(ζ) dζ
        = Res_{ζ=0} g(ζ) f^*(ζ) f'(ζ) dζ + Res_{ζ=1/ā} g(ζ) f^*(ζ) f'(ζ) dζ
        = A · g(0) + 0 · g(1/ā) = A g(0),

where A = ā²(2|a|² − 1)B². Applied to the moments, i.e. with g(ζ) = f(ζ)^k, this gives

    M_0 = A,   M_1 = M_2 = ··· = 0.

Clearly we can vary either a or B freely while keeping M_0 = ā²(2|a|² − 1)B² fixed, so there are again two free real parameters in f for a fixed set of moments.
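Both examples can be verified numerically from formula (2); the parameter values a = 2 and c = 1 below are our own choices (any |a| > 1 would do).

```python
# Numerical check of Examples 1 and 2 with a = 2, c = 1 (our choices).
import numpy as np

a, c = 2.0, 1.0
zeta = np.exp(2j * np.pi * np.arange(4096) / 4096)

def moments(f, df, kmax=3):
    # M_k = (1/2*pi*i) * contour integral of f^k f^* f' dzeta  (formula (2));
    # on |zeta| = 1 one has f^*(zeta) = conj(f(zeta)) and dzeta = i*zeta*dtheta.
    return [np.mean(f**k * np.conj(f) * df * zeta) for k in range(kmax)]

# Example 1: f maps D onto D covered twice.
f1 = -(a / abs(a)) * zeta * (1 - np.conj(a) * zeta) / (zeta - a)
df1 = -(a / abs(a)) * (-np.conj(a) * zeta**2 + 2 * abs(a)**2 * zeta - a) / (zeta - a)**2
M = moments(f1, df1)                     # expect [2, 0, 0]

# Branch point (11): B = f(omega), omega the zero of f' in D.
omega = a * (1 - np.sqrt(1 - 1 / abs(a)**2))
f1_at = lambda z: -(a / abs(a)) * z * (1 - np.conj(a) * z) / (z - a)
B_formula = a * abs(a) * (1 - np.sqrt(1 - 1 / abs(a)**2))**2

# Example 2: only M_0 is nonzero, and M_0 = conj(a)^2 (2|a|^2 - 1) B^2.
f2 = c * zeta * (zeta - 2 / np.conj(a) + a / abs(a)**4) / (zeta - a)
df2 = c * (zeta - 1 / np.conj(a)) * (zeta - 2 * a + 1 / np.conj(a)) / (zeta - a)**2
M2 = moments(f2, df2)
B2 = a * c / abs(a)**4
print(M[0].real, M2[0].real)   # ~ 2 and ~ 0.4375 = conj(a)^2 (2|a|^2 - 1) B2^2
```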
We remark that this example has been considered in a similar context by Sakai [19], and that f'(ζ) is a contractive zero divisor in the sense of Hedenmalm [9,10]. One way to interpret the example is to say that f(D) represents a Hele-Shaw fluid region caused by a unit source at the origin when this has spread on the Riemann surface of √(z − B). See Examples 5.2 and 5.3 in [4].

The physical interpretation of the string equation is most easily explained with reference to general variations of analytic functions in the unit disk. Consider an arbitrary smooth variation f(ζ) = f(ζ, t), depending on a real parameter t. We always keep the normalization f(0, t) = 0, f'(0, t) > 0, and f is assumed to be analytic in a full neighborhood of the closed unit disk, with f' ≠ 0 on ∂D. Then one may define a corresponding Poisson bracket written with a subscript t:

    {f, g}_t = ζ (∂f/∂ζ)(∂g/∂t) − ζ (∂g/∂ζ)(∂f/∂t).   (12)

This Poisson bracket is itself an analytic function in a neighborhood of ∂D. It is determined by its values on ∂D, where we have

    {f, f^*}_t = 2 Re [ḟ \overline{ζ f'}].

The classical Hele-Shaw flow moving boundary problem, or Laplacian growth, is a particular evolution, characterized (in the univalent case) by the harmonic moments being conserved, except for the first one which increases linearly with time, say as M_0 = 2t + constant. This means that ḟ = 2 ∂f/∂M_0, which makes {f, f^*}_t = 2{f, f^*} and identifies the string equation (7) with the Polubarinova–Galin equation

    Re [ḟ(ζ, t) \overline{ζ f'(ζ, t)}] = 1,   ζ ∈ ∂D,   (13)

for the Hele-Shaw problem.

Dividing (13) by |f'| gives

    Re [ḟ · \overline{ζ f'}/|ζ f'|] = 1/|ζ f'|   on ∂D.

Here the left member can be interpreted as the inner product between ḟ and the unit normal vector on ∂Ω = f(∂D), and the right member as the gradient of a suitably normalized Green's function of Ω = f(D) with pole at the origin.
Thus (13) says that ∂Ω moves in the normal direction with velocity |∇G_Ω|, and for the string equation the interpretation becomes

    2 (∂f/∂M_0)_normal = ∂G_Ω/∂n   on ∂Ω,

the subscript "normal" signifying the normal component when ∂f/∂M_0 is considered as a vector on ∂Ω.

The general Poisson bracket (12) enters when differentiating the formula (2) for the moments M_k with respect to t for a given evolution. For a more general statement in this respect we may replace the function f(ζ)^k appearing in (2) by a function g(ζ, t) which is analytic in ζ and depends on t in the same way as h(f(ζ, t)) does, where h is analytic, for example h(z) = z^k. This means that g = g(ζ, t) has to satisfy

    ġ(ζ, t)/g'(ζ, t) = ḟ(ζ, t)/f'(ζ, t),   (14)

saying that g "flows with" f and locally can be regarded as a time independent function in the image domain of f. We then have (cf. Lemma 4.1 in [4])

Lemma 1  Assume that g(ζ, t) is analytic in ζ in a neighborhood of the closed unit disk and depends smoothly on t in such a way that (14) holds. Then

    (d/dt) (1/2πi) ∫∫_D g(ζ, t)|f'(ζ, t)|² dζ̄ dζ = (1/2π) ∫_0^{2π} g(ζ, t){f, f^*}_t dθ,   (15)

the last integrand being evaluated at ζ = e^{iθ}.

As a special case, with g(ζ, t) = h(f(ζ, t)), we have

Corollary 1  If h(z) is analytic in a fixed domain containing the closure of f(D, t) then

    (d/dt) (1/2πi) ∫∫_D h(f(ζ, t))|f'(ζ, t)|² dζ̄ dζ = (1/2π) ∫_0^{2π} h(f(ζ, t)){f, f^*}_t dθ.

Proof  The proof of (15) is straightforward: differentiating under the integral sign and using partial integration we have

    (d/dt) ∫∫_D g|f'|² dζ̄ dζ = (d/dt) ∮_{∂D} g f^* f' dζ = ∮_{∂D} (ġ f^* f' + g ḟ^* f' + g f^* ḟ') dζ
        = ∮_{∂D} (ġ f^* f' + g ḟ^* f' − g' ḟ f^* − g (f^*)' ḟ) dζ
        = ∮_{∂D} ((ġ f' − g' ḟ) f^* + g (ḟ^* f' − (f^*)' ḟ)) dζ = ∮_{∂D} g · {f, f^*}_t dζ/ζ,

the first term in the next to last member vanishing because of (14), which is the desired result. □

4 The string equation for polynomials

We now focus on polynomials, of a fixed degree n + 1:

    f(ζ) = Σ_{j=0}^n a_j ζ^{j+1},   a_0 > 0.   (16)
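Lemma 1 can be sanity-checked on a concrete family; the variation f(ζ, t) = ζ + tζ² and the choice h(z) = z (so that g = f in Corollary 1) are our own illustrative assumptions.

```python
# Check of (15) for f(zeta, t) = zeta + t*zeta^2 and g = h(f) with h(z) = z.
# The left member is then d/dt of M_1(t); here M_1(t) = a0^2 * conj(a1) = t.
import numpy as np

zeta = np.exp(2j * np.pi * np.arange(4096) / 4096)

def M1(t):
    f = zeta + t * zeta**2
    df = 1 + 2 * t * zeta
    return np.mean(f * np.conj(f) * df * zeta)   # formula (2) with k = 1

t, eps = 0.3, 1e-5
lhs = (M1(t + eps) - M1(t - eps)).real / (2 * eps)

# Right member: the circle average of g * {f, f^*}_t.  On |zeta| = 1:
# d(f^*)/dt = conj(df/dt) and (f^*)'(zeta) = -conj(f'(zeta))/zeta^2.
f = zeta + t * zeta**2
df = 1 + 2 * t * zeta              # f'
ft = zeta**2                       # df/dt
bracket = zeta * df * np.conj(ft) - zeta * (-np.conj(df) / zeta**2) * ft
rhs = np.mean(f * bracket).real

print(lhs, rhs)   # both equal 1 (up to numerical error)
```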
The derivative of f is of degree n, and we denote its coefficients by b_j:

    f'(ζ) = Σ_{j=0}^n b_j ζ^j = Σ_{j=0}^n (j + 1) a_j ζ^j.   (17)

It is obvious from Definition 1 that whenever the Poisson bracket (5) makes sense (i.e., whenever ∂f/∂M_0 makes sense), it will vanish if f' has zeros at two points which are reflections of each other with respect to the unit circle. Thus the string equation cannot hold in such cases. The main result, Theorem 1, says that for polynomial maps this is the only exception: the string equation makes sense and holds whenever f' and f'^* have no common zeros.

Two polynomials having common zeros is something which can be tested by the classical resultant, which vanishes exactly in this case. Now f'^* is not really a polynomial, only a rational function, but one may work with the polynomial ζ^n f'^*(ζ) instead. Alternatively, one may use the meromorphic resultant, which applies to meromorphic functions on a compact Riemann surface, in particular rational functions. Very briefly expressed, the meromorphic resultant R(g, h) between two meromorphic functions g and h is defined as the multiplicative action of one of the functions on the divisor of the other. The second member of (18) below gives an example of the multiplicative action of h on the divisor of g. See [7] for further details.

We shall need the meromorphic resultant only in the case of two rational functions of the form g(ζ) = Σ_{j=0}^n b_j ζ^j and h(ζ) = Σ_{k=0}^n c_k ζ^{−k}, and in this case it is closely related to the ordinary polynomial resultant R_pol (see [23]) for the two polynomials g(ζ) and ζ^n h(ζ). Indeed, denoting by ω_1, ..., ω_n the zeros of g, the divisor of g is the formal sum 1·(ω_1) + ··· + 1·(ω_n) − n·(∞), noting that g has a pole of order n at infinity. This gives the meromorphic resultant, and its relation to the polynomial resultant, as

    R(g, h) = (h(ω_1) · ··· · h(ω_n))/h(∞)^n = (1/(b_0^n c_0^n)) R_pol(g(ζ), ζ^n h(ζ)).   (18)

The main result below is an interplay between the Poisson bracket, the resultant and the Jacobi determinant between the moments and the coefficients of f in (16). The theorem is mainly due to Kuznetsova and Tkachev [13,21]; only the statement about the string equation is (possibly) new. One may argue that this string equation can actually be obtained from the string equation for univalent polynomials by "analytic continuation", but we think that writing down an explicit proof in the non-univalent case really clarifies the nature of the string equation. In particular the proof shows that the string equation is not an entirely trivial identity.

Theorem 1  With f a polynomial as in (16), the identity

    ∂(M̄_n, ..., M̄_1, M_0, M_1, ..., M_n)/∂(ā_n, ..., ā_1, a_0, a_1, ..., a_n) = 2 a_0^{n²+3n+1} R(f', f'^*)   (19)

holds generally. It follows that the derivative ∂f/∂M_0 makes sense whenever R(f', f'^*) ≠ 0, and then also the string equation

    {f, f^*} = 1   (20)

holds.

Proof  For the first statement we essentially follow the proof given in [6], but add some details which will be necessary for the second statement.

Using Corollary 1 we shall first investigate how the moments change under a general variation of f, i.e., we let f(ζ) = f(ζ, t) depend smoothly on a real parameter t. Thus a_j = a_j(t), M_k = M_k(t), and derivatives with respect to t will often be denoted by a dot. For the Laurent series of any function h(ζ) = Σ_i c_i ζ^i we denote by coeff_i(h) the coefficient of ζ^i:

    coeff_i(h) = c_i = (1/2πi) ∮_{|ζ|=1} h(ζ) dζ/ζ^{i+1}.

By Corollary 1 we then have, for k ≥ 0,

    Ṁ_k = (d/dt)(1/2πi) ∫∫_D f(ζ, t)^k |f'(ζ, t)|² dζ̄ dζ = (1/2π) ∫_0^{2π} f^k {f, f^*}_t dθ
        = coeff_0(f^k {f, f^*}_t) = Σ_{i=0}^n coeff_{+i}(f^k) · coeff_{−i}({f, f^*}_t).

Note that f(ζ)^k contains only positive powers of ζ and that {f, f^*}_t contains powers ζ^i with exponents in the interval −n ≤ i ≤ n only.
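Relation (18) can be confirmed with a computer algebra system. In the sketch below (our own degree-2 data, with g chosen to have zeros 1 and 2) both members of (18) are evaluated exactly.

```python
# Check of (18) for n = 2: the meromorphic resultant h(w1) h(w2) / h(oo)^2
# versus the polynomial resultant of g(zeta) and zeta^2 h(zeta).
import sympy as sp

z = sp.symbols('zeta')
b0, b1, b2 = 2, -3, 1            # g = (zeta - 1)(zeta - 2)
c0, c1, c2 = 1, 2, 3

g = b2 * z**2 + b1 * z + b0
h = c0 + c1 / z + c2 / z**2      # h(oo) = c0

# Meromorphic resultant: h evaluated on the divisor of g.
R_mero = h.subs(z, 1) * h.subs(z, 2) / c0**2
# Right member of (18):
R_rel = sp.resultant(g, sp.expand(z**2 * h), z) / (b0**2 * c0**2)

print(R_mero, R_rel)   # both 33/2
```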
In view of (16) the matrix

    v_{ki} = coeff_{+i}(f^k)   (0 ≤ k, i ≤ n)   (21)

is upper triangular, i.e., v_{ki} = 0 for 0 ≤ i < k, with diagonal elements being powers of a_0: v_{kk} = a_0^k.

Next we shall find the coefficients of the Poisson bracket. These will involve the coefficients b_k and ȧ_j, but also their complex conjugates. For a streamlined treatment it is convenient to introduce coefficients with negative indices to represent the complex conjugated quantities, and the same for the moments. Thus we define, for the purpose of this proof and the forthcoming Example 3,

    M_{−k} = M̄_k,   a_{−k} = ā_k,   b_{−k} = b̄_k   (k > 0).   (22)

The turning points are the real quantities M_0 and a_0 = b_0.

In this notation the expansion of the Poisson bracket becomes

    {f, f^*}_t = ζ f'(ζ) · ḟ^*(ζ) + ζ^{−1} f'^*(ζ) · ḟ(ζ)   (23)
        = Σ_{ℓ≥0, j≤0} b_ℓ ȧ_j ζ^{ℓ+j} + Σ_{ℓ≤0, j≥0} b_ℓ ȧ_j ζ^{ℓ+j}
        = b_0 ȧ_0 + Σ_{ℓ·j≤0} b_ℓ ȧ_j ζ^{ℓ+j}
        = Σ_i ( b_0 ȧ_0 δ_{i0} + Σ_{ℓ·j≤0, ℓ+j=−i} b_ℓ ȧ_j ) ζ^{−i}.

The last summation runs over pairs of indices (ℓ, j) having opposite sign (or at least one of them being zero) and adding up to −i. We presently need only to consider the case i ≥ 0. Eliminating ℓ and letting j run over those values for which ℓ·j ≤ 0 we therefore get

    coeff_{−i}({f, f^*}_t) = b_0 ȧ_0 δ_{i0} + Σ_{j≤−i} b_{−(i+j)} ȧ_j + Σ_{j≥0} b_{−(i+j)} ȧ_j.

Here δ_{ij} denotes the Kronecker delta. Setting, for i ≥ 0,

    u_{ij} = b_{−(i+j)} + b_0 δ_{i0} δ_{0j}   if −n ≤ j ≤ −i or 0 ≤ j ≤ n,
    u_{ij} = 0   in remaining cases,

we thus have

    coeff_{−i}({f, f^*}_t) = Σ_{j=−n}^n u_{ij} ȧ_j.   (24)

Turning to the complex conjugated moments we have, with k < 0 and f̄ denoting the function with conjugated coefficients, f̄(ζ) = \overline{f(ζ̄)},

    Ṁ_k = \overline{Ṁ_{−k}} = Σ_{i=−n}^0 coeff_{−i}(f̄^{−k}) · coeff_{−i}({f, f^*}_t).

Set, for k < 0 and i ≤ 0,

    v_{ki} = coeff_{−i}(f̄^{−k}).

Then v_{ki} = 0 when k < i ≤ 0, and v_{kk} = ā_0^{−k}. To achieve the counterpart of (24) we define, for i ≤ 0,

    u_{ij} = b_{−(i+j)} + b_0 δ_{i0} δ_{0j}   if −n ≤ j ≤ 0 or −i ≤ j ≤ n,
    u_{ij} = 0   in remaining cases.
This gives, for i ≤ 0 as well,

    coeff_{−i}({f, f^*}_t) = Σ_{j=−n}^n u_{ij} ȧ_j.

As a summary we have, from (21), (24) and from the corresponding conjugated equations,

    Ṁ_k = Σ_{−n≤i,j≤n} v_{ki} u_{ij} ȧ_j,   −n ≤ k ≤ n,   (25)

where

    v_{ki} = coeff_{+i}(f^k) when 0 ≤ k ≤ i,
    v_{ki} = coeff_{−i}(f̄^{−k}) when i ≤ k < 0,
    v_{ki} = 0 in remaining cases,
    u_{ij} = b_{−(i+j)} + b_0 δ_{i0} δ_{0j} in the index intervals made explicit above,
    u_{ij} = 0 in remaining cases.

We see that the full matrix V = (v_{ki}) is triangular in each of the two blocks along the main diagonal and vanishes completely in the two remaining blocks. Therefore its determinant is simply the product of the diagonal elements. More precisely this becomes

    det V = a_0^{n(n+1)}.   (26)

The matrix U = (u_{ij}) represents the linear dependence of the bracket {f, f^*}_t on f' and f'^*, and it acts on the column vector with components ȧ_j, these representing the linear dependence on ḟ and ḟ^*. The computation started at (23) can thus be finalized as

    {f, f^*}_t = Σ_{−n≤i,j≤n} u_{ij} ȧ_j ζ^{−i}.   (27)

Returning to (25), this equation says that the matrix of partial derivatives ∂M_k/∂a_j equals the matrix product VU, in particular that

    ∂(M_{−n}, ..., M_{−1}, M_0, M_1, ..., M_n)/∂(a_{−n}, ..., a_{−1}, a_0, a_1, ..., a_n) = det V · det U.

The first determinant was already computed above, see (26). It remains to connect det U to the meromorphic resultant R(f', f'^*).

For any kind of evolution, {f, f^*}_t vanishes whenever f' and f'^* have a common zero. The meromorphic resultant R(f', f'^*) is a complex number which has the same vanishing properties as {f, f^*}_t, and it is in a certain sense minimal with this property. From this one may expect that the determinant of U is simply a multiple of the resultant. Taking homogeneities into account, the constant of proportionality should be b_0^{2n+1}, times possibly some numerical factor. The precise formula in fact turns out to be

    det U = 2 b_0^{2n+1} R(f', f'^*).   (28)

One way to prove it is to connect U to the Sylvester matrix S associated to the polynomial resultant R_pol(f'(ζ), ζ^n f'^*(ζ)). This matrix is of size 2n × 2n. By some operations with rows and columns (the details are given in [6], and will in addition be illustrated in the example below) one finds that det U = 2b_0 det S. From this (28) follows, using also (18).

Now, the string equation is an assertion about a special evolution. The string equation says that {f, f^*}_t = 1 for that kind of evolution for which ∂/∂t means ∂/∂M_0, in other words in the case that Ṁ_0 = 1 and Ṁ_k = 0 for k ≠ 0. By what has already been proved, a unique such evolution exists with f kept on the form (16) as long as R(f', f'^*) ≠ 0. Inserting Ṁ_k = δ_{k0} in (25) gives

    Σ_{−n≤i,j≤n} v_{ki} u_{ij} ȧ_j = δ_{k0},   −n ≤ k ≤ n.   (29)

It is easy to see from the structure of the matrix V = (v_{ki}) that the 0:th column of the inverse matrix V^{−1}, which is sorted out when V^{−1} is applied to the right member in (29), is simply the unit vector with components δ_{k0}. Therefore (29) is equivalent to

    Σ_{−n≤j≤n} u_{ij} ȧ_j = δ_{i0},   −n ≤ i ≤ n.   (30)

Inserting this into (27) shows that the string equation indeed holds. □

Example 3  To illustrate the above proof, and the general theory, we compute everything explicitly when n = 2, i.e., with

    f(ζ) = a_0 ζ + a_1 ζ² + a_2 ζ³.

We shall keep the convention (22) in this example. Thus, for example,

    f'(ζ) = b_0 + b_1 ζ + b_2 ζ² = a_0 + 2a_1 ζ + 3a_2 ζ²,
    f^*(ζ) = a_0 ζ^{−1} + a_{−1} ζ^{−2} + a_{−2} ζ^{−3}.

When Eq. (25) is written as a matrix equation it becomes

    (Ṁ_{−2}, Ṁ_{−1}, Ṁ_0, Ṁ_1, Ṁ_2)^T = V U (ȧ_{−2}, ȧ_{−1}, ȧ_0, ȧ_1, ȧ_2)^T,   (31)

where (with · representing zeros)

    V = [ a_0²    ·     ·     ·     ·   ]
        [ a_{−1}  a_0   ·     ·     ·   ]
        [ ·       ·     1     ·     ·   ]
        [ ·       ·     ·     a_0   a_1 ]
        [ ·       ·     ·     ·     a_0² ]

    U = [ ·    ·    b_2    ·      b_0    ]
        [ ·    b_2  b_1    b_0    b_{−1} ]
        [ b_2  b_1  2b_0   b_{−1} b_{−2} ]
        [ b_1  b_0  b_{−1} b_{−2} ·      ]
        [ b_0  ·    b_{−2} ·      ·      ]

Denoting the two 5 × 5 matrices by V and U respectively, it follows that the corresponding Jacobi determinant is

    ∂(M_{−2}, M_{−1}, M_0, M_1, M_2)/∂(a_{−2}, a_{−1}, a_0, a_1, a_2) = det V · det U = a_0^6 · det U.

Here U can essentially be identified with the Sylvester matrix for the resultant R(f', f'^*). To be precise,

    det U = 2 b_0 det S,   (32)

where S is the classical Sylvester matrix associated to the two polynomials f'(ζ) and ζ² f'^*(ζ), namely

    S = [ ·    b_2  ·      b_0    ]
        [ b_2  b_1  b_0    b_{−1} ]
        [ b_1  b_0  b_{−1} b_{−2} ]
        [ b_0  ·    b_{−2} ·      ]

As promised in the proof above, we shall explain in this example the column operations leading from U to S, and thereby prove (32) in the case n = 2 (the general case is similar). Let U_{−2}, U_{−1}, U_0, U_1, U_2 denote the columns of U. We make the following change of U:

    U_0 → (1/2) U_0 − (1/2b_0)(b_{−2} U_{−2} + b_{−1} U_{−1} − b_1 U_1 − b_2 U_2).

The first term makes the determinant become half as big as it was before, and the other terms do not affect the determinant at all. The new matrix is the 5 × 5 matrix

    [ ·    ·    b_2    ·      b_0    ]
    [ ·    b_2  b_1    b_0    b_{−1} ]
    [ b_2  b_1  b_0    b_{−1} b_{−2} ]
    [ b_1  b_0  ·      b_{−2} ·      ]
    [ b_0  ·    ·      ·      ·      ]

which has b_0 in the lower left corner, with the complementary 4 × 4 block being exactly S above. From this (32) follows.

The string equation (20) becomes, in terms of coefficients and with ȧ_j interpreted as ∂a_j/∂M_0, the linear equation

    V U (ȧ_{−2}, ȧ_{−1}, ȧ_0, ȧ_1, ȧ_2)^T = (0, 0, 1, 0, 0)^T,

with V and U the matrices in (31). Indeed, in view of (31) this equation characterizes the ȧ_j as those belonging to an evolution such that Ṁ_0 = 1, Ṁ_k = 0 for k ≠ 0.
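The proof is constructive enough to be run on a computer: for n = 2 one can build U as in (31), solve the linear system (30), and check that the bracket is identically one on the unit circle. The coefficients below are our own generic choices (any choice with nonvanishing resultant works).

```python
# Numerical verification of the string equation for n = 2: solve
# U * adot = e0 (equation (30)) and check {f, f^*} = 1 on |zeta| = 1.
import numpy as np

n = 2
a = np.array([1.0, 0.3 + 0.1j, 0.15 - 0.05j])      # a_0, a_1, a_2
b = np.array([(j + 1) * a[j] for j in range(n + 1)])

def bb(k):
    # the convention (22): b_{-k} = conj(b_k)
    return b[k] if k >= 0 else np.conj(b[-k])

# Build U = (u_ij), -n <= i, j <= n, from the index ranges in the proof.
U = np.zeros((2 * n + 1, 2 * n + 1), dtype=complex)
for i in range(-n, n + 1):
    for j in range(-n, n + 1):
        if i >= 0:
            ok = (-n <= j <= -i) or (0 <= j <= n)
        else:
            ok = (-n <= j <= 0) or (-i <= j <= n)
        if ok and abs(i + j) <= n:
            U[i + n, j + n] = bb(-(i + j))
U[n, n] += bb(0)                     # the extra b_0 at i = j = 0

# Equation (30): the evolution with dM_0 = 1, all other moments frozen.
e0 = np.zeros(2 * n + 1)
e0[n] = 1.0
adot = np.linalg.solve(U, e0)        # adot_{-2}, ..., adot_2

# Evaluate {f, f^*} = zeta f' (df^*/dM_0) - zeta (f^*)' (df/dM_0) on the circle.
zeta = np.exp(2j * np.pi * np.arange(512) / 512)
fprime = sum(bb(j) * zeta**j for j in range(n + 1))
fstarprime = -sum(bb(-j) * zeta**(-j - 2) for j in range(n + 1))
df_dM0 = sum(adot[n + j] * zeta**(j + 1) for j in range(n + 1))
dfstar_dM0 = sum(np.conj(adot[n + j]) * zeta**(-j - 1) for j in range(n + 1))

bracket = zeta * fprime * dfstar_dM0 - zeta * fstarprime * df_dM0
print(np.max(np.abs(bracket - 1)))   # ~ 0: the string equation holds
```

Note that the solved vector automatically satisfies the conjugation symmetry ȧ_{−k} = conj(ȧ_k) built into (22), which is what makes the pointwise evaluation of the bracket consistent.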
As remarked in the step from (29) to (30), the first matrix, V, can actually be removed in this equation.

Acknowledgements  The author wants to thank Irina Markina, Olga Vasilieva, Pavel Gumenyuk, Mauricio Godoy Molina, Erlend Grong and several others for generous invitations in connection with the mentioned conference ICAMI 2017, and for warm friendship in general. Some of the main ideas in this paper go back to work by Olga Kuznetsova and Vladimir Tkachev, whom I also thank warmly.

Compliance with Ethical Standards

Conflict of interest  The author declares that he has no conflict of interest.

Open Access  This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. Davis, P.J.: The Schwarz Function and Its Applications. The Mathematical Association of America, Buffalo, NY (1974)
2. Gustafsson, B.: The string equation for nonunivalent functions. arXiv:1803.02030 (2018a)
3. Gustafsson, B.: The String Equation for Some Rational Functions. Trends in Mathematics. Birkhäuser, Basel (2018b)
4. Gustafsson, B., Lin, Y.-L.: Non-univalent solutions of the Polubarinova–Galin equation. arXiv:1411.1909 (2014)
5. Gustafsson, B., Teodorescu, R., Vasil'ev, A.: Classical and Stochastic Laplacian Growth. Advances in Mathematical Fluid Mechanics. Birkhäuser, Basel (2014)
6. Gustafsson, B., Tkachev, V.G.: On the Jacobian of the harmonic moment map. Complex Anal. Oper. Theory 3(2), 399–417 (2009a)
7. Gustafsson, B., Tkachev, V.G.: The resultant on compact Riemann surfaces. Commun. Math. Phys. 286(1), 313–358 (2009b)
8. Gustafsson, B., Vasil'ev, A.: Conformal and Potential Analysis in Hele-Shaw Cells.
Advances in Mathe- matical Fluid Mechanics. Birkhäuser, Basel (2006) 9. Hedenmalm, H., Korenblum, B., Zhu, K.: Theory of Bergman spaces. Graduate Texts in Mathematics, vol. 199. Springer, New York (2000) 10. Hedenmalm, H.: A factorization theorem for square area-integrable analytic functions. J. Reine Angew. Math. 422, 45–68 (1991) 11. Kostov, I.K. Krichever, I., Mineev-Weinstein, M., Wiegmann, P.B., Zabrodin, A.: The τ -function for analytic curves. Random matrix models and their applications, Math. Sci. Res. Inst. Publ., vol. 40, pp. 285–299. Cambridge Univeristy Press, Cambridge (2001) 12. Krichever, I., Marshakov, A., Zabrodin, A.: Integrable structure of the Dirichlet boundary problem in multiply-connected domains. Commun. Math. Phys. 259(1), 1–44 (2005) 13. Kuznetsova, O.S., Tkachev, O.S.: Ullemar’s formula for the Jacobian of the complex moment mapping. Complex Var. Theory Appl. 49(1), 55–72 (2004) 14. Mineev-Weinstein, M., Zabrodin, A.: Whitham-Toda hierarchy in the Laplacian growth problem. J. Nonlinear Math. Phys. 8(suppl), 212–218 (2001) 15. Richardson, S.: Hele-Shaw flows with a free boundary produced by the injection of fluid into a narrow channel. J. Fluid Mech. 56, 609–618 (1972) 16. Ross, J., Nyström, D.W.: The Hele-Shaw flow and moduli of holomorphic discs. Compos. Math. 151(12), 2301–2328 (2015) 17. Sakai, M.: A moment problem on Jordan domains. Proc. Am. Math. Soc. 70(1), 35–38 (1978) 18. Sakai, M.: Domains having null complex moments. Complex Var. Theory Appl. 7(4), 313–319 (1987) 19. Sakai, M.: Finiteness of the Family of Simply Connected Quadrature Domains. Potential Theory, pp. 295–305. Plenum, New York (1988) 20. Shapiro, H.S.: The Schwarz Function and Its Generalization to Higher Dimensions, University of Arkansas Lecture Notes in the Mathematical Sciences, 9. Wiley, New York (1992) 21. Tkachev, V.G.: Ullemar’s formula for the moment map II. Linear Algebra Appl. 404, 380–388 (2005) The string equation for polynomials 653 22. 
Ullemar, C.: Uniqueness theorem for domains satisfying a quadrature identity for analytic functions. Research Bulletin TRITA-MAT-1980-37. Royal Institute of Technology, Department of Mathematics, Stockholm (1980) 23. van der Waerden, B.L.: Moderne Algebra. Springer, Berlin (1940) 24. Vasilév, A.: From the Hele-Shaw experiment to integrable systems: a historical overview. Complex Anal. Oper. Theory 3(2), 551–585 (2009) 25. Wiegmann, P.B., Zabrodin, A.: Conformal maps and integrable hierarchies. Commun. Math. Phys. 213(3), 523–538 (2000) 26. Zalcman, L.: Some inverse problems of potential theory. Integral Geom 63, 337–350 (1987) http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png Analysis and Mathematical Physics Springer Journals

Shortly afterwards, in 1948, Vinogradov and Kufarev were able to prove local existence of solutions of the appropriate initial value problem, under the necessary analyticity conditions.

Much later, around 2000, another group of Russian mathematical physicists, led by Mineev-Weinstein, Wiegmann and Zabrodin, considered the Hele-Shaw problem from the point of view of integrable systems, and the corresponding equation then reappears under the name "string equation". See for example [11,12,14,25]. The integrable system approach appears as a consequence of the discovery in 1972 by Richardson [15] that the Hele-Shaw problem has a complete set of conserved quantities, namely the harmonic moments. See [24] for the history of the Hele-Shaw problem in general.

It is not clear whether the name "string equation" really refers to string theory, but it is known that the subject as a whole has connections to, for example, 2D quantum gravity, and hence is at least indirectly related to string theory. In any case, these matters have been a source of inspiration for Alexander Vasil'ev and myself, and in our first book [8] one of the chapters has the title "Hele-Shaw evolution and strings".

The string equation is deceptively simple and beautiful. It reads
$$\{f,f^*\} = 1, \tag{1}$$
in terms of a special Poisson bracket referring to harmonic moments, and with $f$ any normalized conformal map from some reference domain, in our case the unit disk, to the fluid domain for the Hele-Shaw flow.

The main question for this paper now is: if such a beautiful equation as (1) holds for all univalent functions, shouldn't it also hold for non-univalent functions? The answer is that the Poisson bracket does not (always) make sense in the non-univalent case, but one can extend its meaning, actually in several different ways, and after such a step the string equation indeed holds.
Thus the problem is not that the string equation is particularly difficult to prove; the problem is that the meaning of the string equation is ambiguous in the non-univalent case. In this paper we focus on polynomial mappings, and show that the string equation has a natural meaning, and holds, in this case. In a companion paper [3] (see also [2]) we treat certain kinds of rational mappings related to quadrature Riemann surfaces.

2 The string equation for univalent conformal maps

We consider analytic functions $f(\zeta)$ defined in a neighborhood of the closed unit disk and normalized by $f(0)=0$, $f'(0)>0$. In addition, we always assume that $f'$ has no zeros on the unit circle. It will be convenient to write the Taylor expansion around the origin on the form
$$f(\zeta) = \sum_{j=0}^{\infty} a_j\zeta^{j+1} \quad (a_0>0).$$
If $f$ is univalent it maps $\mathbb{D} = \{\zeta\in\mathbb{C}: |\zeta|<1\}$ onto a domain $\Omega = f(\mathbb{D})$. The harmonic moments for this domain are
$$M_k = \frac{1}{\pi}\int_\Omega z^k\,dxdy, \quad k = 0,1,2,\dots.$$
The integral here can be pulled back to the unit disk and pushed to the boundary there. This gives
$$M_k = \frac{1}{2\pi i}\int_{\mathbb{D}} f(\zeta)^k\,|f'(\zeta)|^2\,d\bar\zeta\,d\zeta = \frac{1}{2\pi i}\oint_{\partial\mathbb{D}} f(\zeta)^k f^*(\zeta) f'(\zeta)\,d\zeta, \tag{2}$$
where
$$f^*(\zeta) = \overline{f(1/\bar\zeta)} \tag{3}$$
denotes the holomorphic reflection of $f$ in the unit circle. In the form (2) the moments make sense also when $f$ is not univalent.

Computing the last integral in (2) by residues gives Richardson's formula [15] for the moments:
$$M_k = \sum_{(j_0,\dots,j_k)\ge(0,\dots,0)} (j_0+1)\,a_{j_0}\cdots a_{j_k}\,\bar a_{j_0+\dots+j_k+k}. \tag{4}$$
This is a highly nonlinear relationship between the coefficients of $f$ and the moments, and even if $f$ is a polynomial of low degree it is virtually impossible to invert it, to obtain $a_k = a_k(M_0,M_1,\dots)$, as would be desirable in many situations. Still there is, quite remarkably, an explicit expression for the Jacobi determinant of the change $(a_0,a_1,\dots)\to(M_0,M_1,\dots)$ when $f$ is restricted to the class of polynomials of a fixed degree.
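The boundary integral in (2) is convenient for numerical experiments. The following minimal sketch (plain Python; the helper names are ours, not the paper's) approximates $M_k$ for $f(\zeta) = a_0\zeta + a_1\zeta^2$ by sampling the unit circle, using that $f^* = \bar f$ there, and compares $M_0$ with the exact value $\sum_j (j+1)|a_j|^2$ that formula (4) gives for $k=0$.

```python
import cmath

def moment(k, f, df, n=4096):
    # M_k = (1/2*pi*i) * contour integral of f^k f* f' dzeta over |zeta| = 1.
    # On the circle f*(z) = conj(f(z)) and dzeta = i*z*dtheta, so the integral
    # reduces to the average of f^k * conj(f) * f' * z over equispaced samples.
    total = 0.0
    for m in range(n):
        z = cmath.exp(2j * cmath.pi * m / n)
        total += f(z)**k * f(z).conjugate() * df(z) * z
    return total / n

a0, a1 = 1.0, 0.2
f  = lambda z: a0*z + a1*z**2
df = lambda z: a0 + 2*a1*z

M0 = moment(0, f, df)
# Exact value from Richardson's formula: M_0 = 1*a0^2 + 2*a1^2 = 1.08
assert abs(M0 - 1.08) < 1e-10
```

Since the integrand is a trigonometric polynomial, the sample average is exact up to rounding; for rational maps (as in the examples below) the same routine converges geometrically.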
This formula, which was proved by Kuznetsova and Tkachev [13,21] after an initial conjecture of Ullemar [22], will be discussed in depth below; it is the major tool for the main result of this paper, Theorem 1.

There are examples of different simply connected domains having the same harmonic moments, see for example [17,18,26]. Restricting to domains having analytic boundary, the harmonic moments are however sensitive to at least small variations of the domain. This can easily be proved by potential theoretic methods. Indeed, arguing on an intuitive level, an infinitesimal perturbation of the boundary can be represented by a signed measure sitting on the boundary (this measure representing the speed of the infinitesimal motion). The logarithmic potential of that measure is a continuous function in the complex plane, and if the harmonic moments were insensitive to the perturbation then the exterior part of this potential would vanish. At the same time the interior potential is a harmonic function, and the only way all these conditions can be satisfied is that the potential vanishes identically, hence also that the measure on the boundary vanishes. On a more rigorous level, in the polynomial case the above mentioned Jacobi determinant is indeed nonzero. Compare also the discussions in [16].

The conformal map, with its normalization, is uniquely determined by the image domain $\Omega$ and, as indicated above, the domain is locally encoded in the sequence of moments $M_0, M_1, M_2, \dots$. Thus the harmonic moments can be viewed as local coordinates in the space of univalent functions, and we may write
$$f(\zeta) = f(\zeta; M_0, M_1, M_2, \dots).$$
In particular, the derivatives $\partial f/\partial M_k$ make sense. Now we are in position to define the Poisson bracket.

Definition 1 For any two functions $f(\zeta) = f(\zeta; M_0,M_1,M_2,\dots)$, $g(\zeta) = g(\zeta; M_0,M_1,M_2,\dots)$
which are analytic in a neighborhood of the unit circle and are parametrized by the moments, we define
$$\{f,g\} = \zeta\frac{\partial f}{\partial\zeta}\frac{\partial g}{\partial M_0} - \zeta\frac{\partial g}{\partial\zeta}\frac{\partial f}{\partial M_0}. \tag{5}$$
This is again a function analytic in a neighborhood of the unit circle and parametrized by the moments.

The Schwarz function [1,20] of an analytic curve $\Gamma$ is the unique holomorphic function $S(z)$, defined in a neighborhood of $\Gamma$, satisfying $S(z) = \bar z$ for $z\in\Gamma$. When $\Gamma = f(\partial\mathbb{D})$, with $f$ analytic in a neighborhood of $\partial\mathbb{D}$, the defining property of $S(z)$ becomes
$$S\circ f = f^*, \tag{6}$$
holding identically in a neighborhood of the unit circle. Notice that $f^*$ and $S$ depend on the moments $M_0, M_1, M_2, \dots$, like $f$ does.

The string equation asserts that
$$\{f,f^*\} = 1 \tag{7}$$
in a neighborhood of the unit circle, provided $f$ is univalent in a neighborhood of the closed unit disk. This result was first formulated and proved in [25], for the case of conformal maps onto an exterior domain (containing the point of infinity). For conformal maps to bounded domains a proof based on somewhat different ideas, involving explicitly the Schwarz function, was given in [5]. For convenience we briefly recall the proof below.

Writing (6) more explicitly as
$$f^*(\zeta; M_0,M_1,\dots) = S\big(f(\zeta; M_0,M_1,\dots); M_0,M_1,\dots\big)$$
and using the chain rule when computing $\partial f^*/\partial M_0$ gives, after simplification,
$$\{f,f^*\} = \zeta\frac{\partial f}{\partial\zeta}\cdot\Big(\frac{\partial S}{\partial M_0}\circ f\Big). \tag{8}$$
Next one notices that the harmonic moments are exactly the coefficients in the expansion of a certain Cauchy integral at infinity:
$$\frac{1}{2\pi i}\oint_{\partial\Omega}\frac{\bar w\,dw}{z-w} = \sum_{k=0}^{\infty}\frac{M_k}{z^{k+1}} \quad (|z|\gg 1).$$
Combining this with the fact that the jump of this Cauchy integral across $\partial\Omega$ is $\bar z$, it follows that $S(z)$ equals the difference between the analytic continuations of the exterior ($z\in\Omega^c$) and interior ($z\in\Omega$) functions defined by the Cauchy integral. Therefore
$$S(z; M_0,M_1,\dots) = \sum_{k=0}^{\infty}\frac{M_k}{z^{k+1}} + \text{function holomorphic in }\Omega,$$
and so, since $M_0,M_1,\dots$ are independent variables,
$$\frac{\partial S}{\partial M_0}(z; M_0,M_1,\dots) = \frac{1}{z} + \text{function holomorphic in }\Omega.$$
Inserting this into (8) one finds that $\{f,f^*\}$ is holomorphic in $\mathbb{D}$. Since the Poisson bracket is invariant under holomorphic reflection in the unit circle, it follows that $\{f,f^*\}$ is holomorphic in the exterior of $\mathbb{D}$ (including the point of infinity) as well, hence it must be constant. And this constant is found to be one, proving (7).

We wish to extend the above to allow non-univalent analytic functions in the string equation. The basic ideas in the above proof still work, but what may happen is that $f$ and $S$ are no longer determined by the moments $M_0, M_1, \dots$ alone. Since $\partial f/\partial M_0$ is a partial derivative, one has to specify all other independent variables in order to give it a meaning. So there may be more variables, say
$$f(\zeta) = f(\zeta; M_0,M_1,\dots; B_1,B_2,\dots). \tag{9}$$
Then the meaning of the string equation depends on the choice of these extra variables. Natural choices turn out to be the locations of branch points, i.e., one takes $B_j = f(\omega_j)$, where the $\omega_j\in\mathbb{D}$ denote the zeros of $f'$ inside $\mathbb{D}$.

One good thing with choosing the branch points as additional variables is that keeping these fixed, as is implicit in the notation $\partial/\partial M_0$, means that $f$ can in this case be viewed as a conformal map into a fixed Riemann surface, which will be a branched covering over the complex plane. But there are also other possibilities of giving a meaning to the string equation, for example by restricting $f$ to the class of polynomials of a fixed degree, as we shall do in this paper. Then one must allow the branch points to move, so this gives a different meaning to $\partial/\partial M_0$.

3 Intuition and physical interpretation in the non-univalent case

We shall consider also non-univalent analytic functions as conformal maps, then into Riemann surfaces above $\mathbb{C}$. In general these Riemann surfaces will be branched covering surfaces, and the non-univalence is then absorbed in the covering projection.
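Before proceeding, the univalent string equation (7) just proved can be checked concretely on the simplest example. For $f(\zeta; M_0) = \sqrt{M_0}\,\zeta$, which maps $\mathbb{D}$ onto a disk of area $\pi M_0$ (so that $M_0$ is the only nonzero moment), the bracket (5) can be evaluated with $\partial/\partial M_0$ approximated by a central difference. A small sketch, with hypothetical helper names:

```python
import cmath

def f(z, M0):            # conformal map of the unit disk onto a disk of area pi*M0
    return (M0 ** 0.5) * z

def fstar(z, M0):        # holomorphic reflection: f*(z) = conj(f(1/conj(z)))
    return (M0 ** 0.5) / z

def bracket(z, M0, h=1e-6):
    # {f, f*} = zeta f' (df*/dM0) - zeta (f*)' (df/dM0), definition (5);
    # zeta-derivatives are exact, M0-derivatives by central differences.
    df_dz   = M0 ** 0.5
    dfs_dz  = -(M0 ** 0.5) / z**2
    df_dM0  = (f(z, M0 + h) - f(z, M0 - h)) / (2*h)
    dfs_dM0 = (fstar(z, M0 + h) - fstar(z, M0 - h)) / (2*h)
    return z * df_dz * dfs_dM0 - z * dfs_dz * df_dM0

for k in range(5):
    z = cmath.exp(2j * cmath.pi * k / 5)
    assert abs(bracket(z, M0=2.0) - 1) < 1e-6   # string equation (7)
```

The exact computation is one line ($\zeta\sqrt{M_0}\cdot\frac{1}{2\sqrt{M_0}\zeta} + \zeta\frac{\sqrt{M_0}}{\zeta^2}\cdot\frac{\zeta}{2\sqrt{M_0}} = \tfrac12+\tfrac12$), but the finite-difference form generalizes to maps where $\partial f/\partial M_0$ is not available in closed form.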
It is easy to understand that such a Riemann surface, or the corresponding conformal map, will in general not be determined by the moments $M_0, M_1, M_2, \dots$ alone.

As a simple example, consider an oriented curve $\Gamma$ in the complex plane encircling the origin twice (say). In terms of the winding number, or index,
$$\nu_\Gamma(z) = \frac{1}{2\pi i}\oint_\Gamma\frac{d\zeta}{\zeta-z} \quad (z\in\mathbb{C}\setminus\Gamma), \tag{10}$$
this means that $\nu_\Gamma(0) = 2$. Points far away from the origin have index zero, and some other points may have index one (for example). Having only the curve $\Gamma$ available, it is natural to define the harmonic moments for the multiply covered (with multiplicities $\nu_\Gamma$) set inside $\Gamma$ as
$$M_k = \frac{1}{\pi}\int_{\mathbb{C}} z^k\,\nu_\Gamma(z)\,dxdy = \frac{1}{2\pi i}\oint_\Gamma z^k\bar z\,dz, \quad k = 0,1,2,\dots.$$
It is tempting to think of this integer weighted set as a Riemann surface over (part of) the complex plane. However, without further information this is not possible. Indeed, since some points have index $\ge 2$, such a covering surface will have to have branch points, and these have to be specified in order to make the set into a Riemann surface. Only after that is it possible to speak about a conformal map $f$. Thus $f$ is in general not determined by the moments alone.

In the simplest non-univalent cases, $f$ will be (locally) determined by the harmonic moments together with the locations of the branch points. In principle these branch points can be moved freely within regions of constant value ($\ge 2$) of $\nu_\Gamma$. However, if we restrict $f$ to belong to some restricted class of functions, like polynomials of a fixed degree, it may be that the branch points cannot move that freely. Thus restricting the structure of $f$ can be an alternative to adding new parameters $B_1, B_2, \dots$ as in (9). This is a way to understand our main result, Theorem 1 below.

In the following two examples, the first illustrates a completely freely moving branch point, while in the second example the branch point is still free, but moving it forces also the boundary curve $f(\partial\mathbb{D})$ to move.
Example 1 Let
$$f(\zeta) = -\frac{a}{|a|}\cdot\zeta\cdot\frac{1-\bar a\zeta}{\zeta-a},$$
where $|a|>1$. This function maps $\mathbb{D}$ onto $\mathbb{D}$ covered twice, so the above index function is $\nu_\Gamma = 2\chi_{\mathbb{D}}$. Thus the corresponding moments are
$$M_0 = 2, \quad M_1 = M_2 = M_3 = \dots = 0,$$
independent of the choice of $a$, which hence is a free parameter that does not affect the moments. The same is true for the branch point
$$B = f(\omega) = a|a|\left(1-\sqrt{1-\frac{1}{|a|^2}}\right)^2, \tag{11}$$
where
$$\omega = a\left(1-\sqrt{1-\frac{1}{|a|^2}}\right)$$
is the zero of $f'$ in $\mathbb{D}$. Thus this example confirms the above idea that the branch point can be moved freely, without this affecting the image curve $f(\partial\mathbb{D})$ or the moments, while the conformal map itself does depend on the choice of branch point.

Example 2 A related example is given by
$$f(\zeta) = c\,\zeta\cdot\frac{\zeta - 2/\bar a + a/|a|^4}{\zeta-a},$$
still with $|a|>1$. The derivative of this function is
$$f'(\zeta) = c\cdot\frac{(\zeta-1/\bar a)(\zeta-2a+1/\bar a)}{(\zeta-a)^2},$$
which vanishes at $\zeta = 1/\bar a$. The branch point is
$$B = f(1/\bar a) = \frac{ac}{|a|^4}.$$
Also in this case there is only one nonzero moment, but now for a different reason. What happens in this case is that the zero of $f'$ in $\mathbb{D}$ coincides with a pole of the holomorphically reflected function $f^*$, and therefore annihilates that pole in the appropriate residue calculation. (In the previous example the reason was that both poles of $f^*$ were mapped by $f$ onto the same point, namely the origin.) The calculation goes as follows: for any analytic function $g$ in $\mathbb{D}$, integrable with respect to $|f'|^2$, we have
$$\frac{1}{2\pi i}\int_{\mathbb{D}} g(\zeta)\,|f'(\zeta)|^2\,d\bar\zeta\,d\zeta = \frac{1}{2\pi i}\oint_{\partial\mathbb{D}} g(\zeta)f^*(\zeta)f'(\zeta)\,d\zeta$$
$$= \operatorname*{Res}_{\zeta=0}\,g(\zeta)f^*(\zeta)f'(\zeta)\,d\zeta + \operatorname*{Res}_{\zeta=1/\bar a}\,g(\zeta)f^*(\zeta)f'(\zeta)\,d\zeta = A\cdot g(0) + 0\cdot g(1/\bar a) = A\,g(0),$$
where $A = \bar a^2(2|a|^2-1)B^2$. Applied to the moments, i.e. with $g(\zeta) = f(\zeta)^k$, this gives
$$M_0 = A, \quad M_1 = M_2 = \dots = 0.$$
Clearly we can vary either $a$ or $B$ freely while keeping $M_0 = \bar a^2(2|a|^2-1)B^2$ fixed, so there are again two free real parameters in $f$ for a fixed set of moments.
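Both examples are easy to confirm numerically from the boundary form of (2), using $f^* = \bar f$ on the unit circle. The sketch below (plain Python; the helper names are ours) takes the real parameter value $a = 2$, and $c = 1$ in Example 2:

```python
import cmath, math

def moment(k, f, df, n=4096):
    # M_k = (1/2*pi*i) * contour integral of f^k f* f' dzeta on |zeta| = 1,
    # with f*(z) = conj(f(z)) there; computed as an average over circle samples.
    s = 0.0
    for m in range(n):
        z = cmath.exp(2j * cmath.pi * m / n)
        s += f(z)**k * f(z).conjugate() * df(z) * z
    return s / n

a = 2.0                                            # any real a > 1

# Example 1: D mapped onto the twice-covered unit disk
f1  = lambda z: -z * (1 - a*z) / (z - a)
df1 = lambda z: (a*z**2 - 2*a*a*z + a) / (z - a)**2
assert abs(moment(0, f1, df1) - 2) < 1e-8          # M_0 = 2
assert abs(moment(1, f1, df1)) < 1e-8              # M_1 = 0
omega = a * (1 - math.sqrt(1 - 1/a**2))            # zero of f1' in D
B1 = a * abs(a) * (1 - math.sqrt(1 - 1/a**2))**2   # branch point, formula (11)
assert abs(f1(omega) - B1) < 1e-10

# Example 2 (c = 1): only M_0 is nonzero, M_0 = abar^2 (2|a|^2 - 1) B^2
beta = 2/a - a/a**4                                # the constant 2/abar - a/|a|^4
f2  = lambda z: z * (z - beta) / (z - a)
df2 = lambda z: (z**2 - 2*a*z + a*beta) / (z - a)**2
B2 = f2(1/a)                                       # branch point, equals a*c/|a|^4
M0 = a**2 * (2*a**2 - 1) * B2**2                   # = 7/16 for a = 2, c = 1
assert abs(moment(0, f2, df2) - M0) < 1e-8
assert abs(moment(1, f2, df2)) < 1e-8
```

The sampling average converges geometrically here, since the nearest singularities ($\zeta = a$ and its reflection) stay well away from the unit circle.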
We remark that this example has been considered in a similar context by Sakai [19], and that $f'(\zeta)$ is a contractive zero divisor in the sense of Hedenmalm [9,10]. One way to interpret the example is to say that $f(\mathbb{D})$ represents a Hele-Shaw fluid region caused by a unit source at the origin, when this has spread on the Riemann surface of $\sqrt{z-B}$. See Examples 5.2 and 5.3 in [4].

The physical interpretation of the string equation is most easily explained with reference to general variations of analytic functions in the unit disk. Consider an arbitrary smooth variation $f(\zeta) = f(\zeta,t)$, depending on a real parameter $t$. We always keep the normalization $f(0,t)=0$, $f'(0,t)>0$, and $f$ is assumed to be analytic in a full neighborhood of the closed unit disk, with $f'\ne 0$ on $\partial\mathbb{D}$. Then one may define a corresponding Poisson bracket, written with a subscript $t$:
$$\{f,g\}_t = \zeta\frac{\partial f}{\partial\zeta}\frac{\partial g}{\partial t} - \zeta\frac{\partial g}{\partial\zeta}\frac{\partial f}{\partial t}. \tag{12}$$
This Poisson bracket is itself an analytic function in a neighborhood of $\partial\mathbb{D}$. It is determined by its values on $\partial\mathbb{D}$, where we have
$$\{f,f^*\}_t = 2\operatorname{Re}\big[\dot f\,\overline{\zeta f'}\big].$$
The classical Hele-Shaw flow moving boundary problem, or Laplacian growth, is a particular evolution, characterized (in the univalent case) by the harmonic moments being conserved, except for the first one, which increases linearly with time, say as $M_0 = 2t + \text{constant}$. This means that $\dot f = 2\,\partial f/\partial M_0$, which makes $\{f,f^*\}_t = 2\{f,f^*\}$ and identifies the string Eq. (7) with the Polubarinova–Galin equation
$$\operatorname{Re}\big[\dot f(\zeta,t)\,\overline{\zeta f'(\zeta,t)}\big] = 1, \quad \zeta\in\partial\mathbb{D}, \tag{13}$$
for the Hele-Shaw problem.

Dividing (13) by $|\zeta f'|$ gives
$$\operatorname{Re}\Big[\dot f\cdot\frac{\overline{\zeta f'}}{|\zeta f'|}\Big] = \frac{1}{|\zeta f'|} \quad\text{on }\partial\mathbb{D}.$$
Here the left member can be interpreted as the inner product between $\dot f$ and the unit normal vector on $\partial\Omega = f(\partial\mathbb{D})$, and the right member as the gradient of a suitably normalized Green's function of $\Omega = f(\mathbb{D})$ with pole at the origin.
Thus (13) says that $\partial\Omega$ moves in the normal direction with velocity $|\nabla G_\Omega|$, and for the string equation the interpretation becomes
$$2\Big(\frac{\partial f}{\partial M_0}\Big)_{\mathrm{normal}} = \frac{\partial G_\Omega}{\partial n} \quad\text{on }\partial\Omega,$$
the subscript "normal" signifying the normal component when the left member is considered as a vector on $\partial\Omega$.

The general Poisson bracket (12) enters when differentiating the formula (2) for the moments $M_k$ with respect to $t$ for a given evolution. For a more general statement in this respect we may replace the function $f(\zeta)^k$ appearing in (2) by a function $g(\zeta,t)$ which is analytic in $\zeta$ and depends on $t$ in the same way as $h(f(\zeta,t))$ does, where $h$ is analytic; for example $h(z) = z^k$. This means that $g = g(\zeta,t)$ has to satisfy
$$\frac{\dot g(\zeta,t)}{g'(\zeta,t)} = \frac{\dot f(\zeta,t)}{f'(\zeta,t)}, \tag{14}$$
saying that $g$ "flows with" $f$ and locally can be regarded as a time independent function in the image domain of $f$. We then have (cf. Lemma 4.1 in [4])

Lemma 1 Assume that $g(\zeta,t)$ is analytic in $\zeta$ in a neighborhood of the closed unit disk and depends smoothly on $t$ in such a way that (14) holds. Then
$$\frac{d}{dt}\,\frac{1}{2\pi i}\int_{\mathbb{D}} g(\zeta,t)\,|f'(\zeta,t)|^2\,d\bar\zeta\,d\zeta = \frac{1}{2\pi}\int_0^{2\pi} g(\zeta,t)\,\{f,f^*\}_t\,d\theta, \tag{15}$$
the last integrand being evaluated at $\zeta = e^{i\theta}$.

As a special case, with $g(\zeta,t) = h(f(\zeta,t))$, we have

Corollary 1 If $h(z)$ is analytic in a fixed domain containing the closure of $f(\mathbb{D},t)$, then
$$\frac{d}{dt}\,\frac{1}{2\pi i}\int_{\mathbb{D}} h(f(\zeta,t))\,|f'(\zeta,t)|^2\,d\bar\zeta\,d\zeta = \frac{1}{2\pi}\int_0^{2\pi} h(f(\zeta,t))\,\{f,f^*\}_t\,d\theta.$$

Proof The proof of (15) is straightforward: differentiating under the integral sign and using partial integration we have
$$\frac{d}{dt}\int_{\mathbb{D}} g\,|f'|^2\,d\bar\zeta\,d\zeta = \frac{d}{dt}\oint_{\partial\mathbb{D}} g\,f^*f'\,d\zeta = \oint_{\partial\mathbb{D}}\big(\dot g\,f^*f' + g\,\dot f^*f' + g\,f^*\dot f'\big)\,d\zeta$$
$$= \oint_{\partial\mathbb{D}}\big(\dot g\,f^*f' + g\,\dot f^*f' - g'\,f^*\dot f - g\,(f^*)'\dot f\big)\,d\zeta$$
$$= \oint_{\partial\mathbb{D}}\Big(\big(\dot g f' - \dot f g'\big)f^* + g\big(\dot f^* f' - (f^*)'\dot f\big)\Big)\,d\zeta = \oint_{\partial\mathbb{D}} g\,\{f,f^*\}_t\,\frac{d\zeta}{\zeta},$$
where the first parenthesis in the next to last member vanishes by (14). This is the desired result.

4 The string equation for polynomials

We now focus on polynomials, of a fixed degree $n+1$:
$$f(\zeta) = \sum_{j=0}^{n} a_j\zeta^{j+1}, \quad a_0>0. \tag{16}$$
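For a polynomial family of the form (16), Corollary 1 can be checked numerically. The sketch below (plain Python; the helper names and the particular family are our own choices) takes $f(\zeta,t) = \sqrt{1+t}\,\zeta + a_1\zeta^2$, for which Richardson's formula (4) gives $M_1(t) = (1+t)a_1$, and verifies at $t = 0$ that the right member of Corollary 1 with $h(z) = z$ equals $\dot M_1 = a_1$.

```python
import cmath

# Family f(z,t) = sqrt(1+t) z + a1 z^2, so M1(t) = (1+t) a1 and dM1/dt = a1
a1 = 0.2
t  = 0.0

def rhs(n=4096):
    # (1/2*pi) * integral of h(f) {f,f*}_t dtheta with h(z) = z and
    # {f,f*}_t = 2 Re[ fdot * conj(zeta f') ] on the unit circle
    s = 0.0
    r = (1 + t) ** 0.5
    for m in range(n):
        z = cmath.exp(2j * cmath.pi * m / n)
        fz   = r*z + a1*z**2
        fdot = z / (2*r)                 # time derivative of f
        zfp  = z * (r + 2*a1*z)          # zeta * f'
        s += fz * 2 * (fdot * zfp.conjugate()).real
    return s / n

assert abs(rhs() - a1) < 1e-10           # matches dM1/dt, as Corollary 1 predicts
```

Here $g = h\circ f$ satisfies (14) automatically, so no extra condition needs to be imposed.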
The derivative is of degree $n$, and we denote its coefficients by $b_j$:
$$f'(\zeta) = \sum_{j=0}^{n} b_j\zeta^{j} = \sum_{j=0}^{n}(j+1)a_j\zeta^{j}. \tag{17}$$
It is obvious from Definition 1 that whenever the Poisson bracket (5) makes sense (i.e., whenever $\partial f/\partial M_0$ makes sense), it will vanish if $f'$ has zeros at two points which are reflections of each other with respect to the unit circle. Thus the string equation cannot hold in such cases. The main result, Theorem 1, says that for polynomial maps this is the only exception: the string equation makes sense and holds whenever $f'$ and $f^{*\prime}$ have no common zeros.

Two polynomials having common zeros is something which can be tested by the classical resultant, which vanishes exactly in this case. Now $f^{*\prime}$ is not really a polynomial, only a rational function, but one may work with the polynomial $\zeta^{n+2}f^{*\prime}(\zeta)$ instead. Alternatively, one may use the meromorphic resultant, which applies to meromorphic functions on a compact Riemann surface, in particular to rational functions. Very briefly expressed, the meromorphic resultant $\mathcal{R}(g,h)$ between two meromorphic functions $g$ and $h$ is defined as the multiplicative action of one of the functions on the divisor of the other. The second member of (18) below gives an example of the multiplicative action of $h$ on the divisor of $g$. See [7] for further details.

We shall need the meromorphic resultant only in the case of two rational functions of the form $g(\zeta) = \sum_{j=0}^{n} b_j\zeta^{j}$ and $h(\zeta) = \sum_{k=0}^{n} c_k\zeta^{-k}$, and in this case it is closely related to the ordinary polynomial resultant $\mathcal{R}_{pol}$ (see [23]) for the two polynomials $g(\zeta)$ and $\zeta^n h(\zeta)$. Indeed, denoting by $\omega_1,\dots,\omega_n$ the zeros of $g$, the divisor of $g$ is the formal sum $1\cdot(\omega_1)+\dots+1\cdot(\omega_n) - n\cdot(\infty)$, noting that $g$ has a pole of order $n$ at infinity. This gives the meromorphic resultant, and its relation to the polynomial resultant, as
$$\mathcal{R}(g,h) = \frac{h(\omega_1)\cdots h(\omega_n)}{h(\infty)^n} = \frac{1}{b_0^n c_0^n}\,\mathcal{R}_{pol}\big(g(\zeta),\zeta^n h(\zeta)\big). \tag{18}$$
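Relation (18) is easy to check on a concrete pair. In the sketch below (plain Python with exact rational arithmetic; the helper names are ours) we take $n = 2$, $g(\zeta) = 6 - 5\zeta + \zeta^2$ with zeros $\omega_1 = 2$, $\omega_2 = 3$, and $h(\zeta) = 1 + \zeta^{-1}$, and compare the meromorphic resultant $h(\omega_1)h(\omega_2)/h(\infty)^2$ with the polynomial resultant of $g(\zeta)$ and $\zeta^2 h(\zeta)$, computed as a $4\times 4$ Sylvester determinant:

```python
from fractions import Fraction as F

def det(M):
    # Laplace expansion along the first row; fine for a 4x4 matrix
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

# g(z) = 6 - 5z + z^2 (roots 2 and 3); h(z) = 1 + 1/z, so z^2 h(z) = z + z^2
b = [F(6), F(-5), F(1)]          # b_0, b_1, b_2, coefficients of g
q = [F(0), F(1), F(1)]           # coefficients of z^2 h(z)

# Sylvester matrix of g and z^2 h (two polynomials of degree 2)
S = [[b[2], b[1], b[0], 0],
     [0,    b[2], b[1], b[0]],
     [q[2], q[1], q[0], 0],
     [0,    q[2], q[1], q[0]]]

R_pol = det(S)
assert R_pol == 72

h = lambda z: 1 + F(1, 1)/z
R_mero = h(2) * h(3) / F(1)**2   # h(w1) h(w2) / h(inf)^n, with h(inf) = c_0 = 1
b0, c0, n = b[0], F(1), 2
assert R_mero == R_pol / (b0**n * c0**n)   # relation (18): 2 == 72/36
```

Both sides equal $2$; the factor $b_0^n c_0^n = 36$ accounts for the normalizations in (18).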
The main result below is an interplay between the Poisson bracket, the resultant, and the Jacobi determinant between the moments and the coefficients of $f$ in (16). The theorem is mainly due to Kuznetsova and Tkachev [13,21]; only the statement about the string equation is (possibly) new. One may argue that this string equation can actually be obtained from the string equation for univalent polynomials by "analytic continuation", but we think that writing down an explicit proof in the non-univalent case really clarifies the nature of the string equation. In particular, the proof shows that the string equation is not an entirely trivial identity.

Theorem 1 With $f$ a polynomial as in (16), the identity
$$\frac{\partial(\bar M_n,\dots,\bar M_1,M_0,M_1,\dots,M_n)}{\partial(\bar a_n,\dots,\bar a_1,a_0,a_1,\dots,a_n)} = 2\,a_0^{n^2+3n+1}\,\mathcal{R}(f',f^{*\prime}) \tag{19}$$
holds generally. It follows that the derivative $\partial f/\partial M_0$ makes sense whenever $\mathcal{R}(f',f^{*\prime})\ne 0$, and then also the string equation
$$\{f,f^*\} = 1 \tag{20}$$
holds.

Proof For the first statement we essentially follow the proof given in [6], but add some details which will be necessary for the second statement.

Using Corollary 1 we shall first investigate how the moments change under a general variation of $f$, i.e., we let $f(\zeta) = f(\zeta,t)$ depend smoothly on a real parameter $t$. Thus $a_j = a_j(t)$, $M_k = M_k(t)$, and derivatives with respect to $t$ will often be denoted by a dot. For the Laurent series of any function $h(\zeta) = \sum_i c_i\zeta^i$ we denote by $\operatorname{coeff}_i(h)$ the coefficient of $\zeta^i$:
$$\operatorname{coeff}_i(h) = c_i = \frac{1}{2\pi i}\oint_{|\zeta|=1}\frac{h(\zeta)\,d\zeta}{\zeta^{i+1}}.$$
By Corollary 1 we then have, for $k\ge 0$,
$$\dot M_k = \frac{d}{dt}\,\frac{1}{2\pi i}\int_{\mathbb{D}} f(\zeta,t)^k\,|f'(\zeta,t)|^2\,d\bar\zeta\,d\zeta = \frac{1}{2\pi}\int_0^{2\pi} f^k\,\{f,f^*\}_t\,d\theta$$
$$= \operatorname{coeff}_0\big(f^k\{f,f^*\}_t\big) = \sum_{i=0}^{n}\operatorname{coeff}_{+i}(f^k)\cdot\operatorname{coeff}_{-i}\big(\{f,f^*\}_t\big).$$
Note that $f(\zeta)^k$ contains only positive powers of $\zeta$, and that $\{f,f^*\}_t$ contains powers $\zeta^i$ with exponents in the interval $-n\le i\le n$ only.

In view of (16) the matrix
$$v_{ki} = \operatorname{coeff}_{+i}(f^k) \quad (0\le k,i\le n) \tag{21}$$
is upper triangular, i.e., $v_{ki} = 0$ for $0\le i<k$, with diagonal elements being powers of $a_0$: $v_{kk} = a_0^k$.

Next we shall find the coefficients of the Poisson bracket. These will involve the coefficients $b_k$ and $\dot a_j$, but also their complex conjugates. For a streamlined treatment it is convenient to introduce coefficients with negative indices to represent the complex conjugated quantities, and the same for the moments. Thus we define, for the purpose of this proof and the forthcoming Example 3,
$$M_{-k} = \bar M_k, \quad a_{-k} = \bar a_k, \quad b_{-k} = \bar b_k \quad (k>0). \tag{22}$$
Turning points are the real quantities $M_0$ and $a_0 = b_0$.

In this notation the expansion of the Poisson bracket becomes
$$\{f,f^*\}_t = \zeta f'(\zeta)\cdot\dot f^*(\zeta) + \zeta^{-1}(f')^*(\zeta)\cdot\dot f(\zeta) \tag{23}$$
$$= \sum_{\ell\ge 0,\,j\le 0} b_\ell\,\dot a_j\,\zeta^{\ell+j} + \sum_{\ell\le 0,\,j\ge 0} b_\ell\,\dot a_j\,\zeta^{\ell+j} = b_0\dot a_0 + \sum_{\ell\cdot j\le 0} b_\ell\,\dot a_j\,\zeta^{\ell+j} = \sum_i\Big(b_0\dot a_0\,\delta_{i0} + \sum_{\ell\cdot j\le 0,\ \ell+j=-i} b_\ell\,\dot a_j\Big)\zeta^{-i}.$$
The last summation runs over pairs of indices $(\ell,j)$ having opposite sign (or at least one of them being zero) and adding up to $-i$. We presently need only to consider the case $i\ge 0$. Eliminating $\ell$ and letting $j$ run over those values for which $\ell\cdot j\le 0$ we therefore get
$$\operatorname{coeff}_{-i}\big(\{f,f^*\}_t\big) = b_0\dot a_0\,\delta_{i0} + \sum_{j\le -i} b_{-(i+j)}\,\dot a_j + \sum_{j\ge 0} b_{-(i+j)}\,\dot a_j.$$
Here $\delta_{ij}$ denotes the Kronecker delta. Setting, for $i\ge 0$,
$$u_{ij} = \begin{cases} b_{-(i+j)} + b_0\,\delta_{i0}\delta_{0j} & \text{if } -n\le j\le -i \text{ or } 0\le j\le n,\\ 0 & \text{in remaining cases,}\end{cases}$$
we thus have
$$\operatorname{coeff}_{-i}\big(\{f,f^*\}_t\big) = \sum_{j=-n}^{n} u_{ij}\,\dot a_j. \tag{24}$$

Turning to the complex conjugated moments we have, for $k<0$,
$$\dot M_k = \overline{\dot M_{-k}} = \sum_{i=-n}^{0}\overline{\operatorname{coeff}_{-i}(f^{-k})}\cdot\operatorname{coeff}_{-i}\big(\{f,f^*\}_t\big).$$
Set, for $k<0$, $i\le 0$,
$$v_{ki} = \overline{\operatorname{coeff}_{-i}(f^{-k})}.$$
Then $v_{ki} = 0$ when $k<i\le 0$, and $v_{kk} = a_0^{-k}$. To achieve the counterpart of (24) we define, for $i\le 0$,
$$u_{ij} = \begin{cases} b_{-(i+j)} + b_0\,\delta_{i0}\delta_{0j} & \text{if } -n\le j\le 0 \text{ or } -i\le j\le n,\\ 0 & \text{in remaining cases.}\end{cases}$$
This gives, for $i\le 0$,
$$\operatorname{coeff}_{-i}\big(\{f,f^*\}_t\big) = \sum_{j=-n}^{n} u_{ij}\,\dot a_j.$$

As a summary we have, from (21), (24), and the corresponding conjugated equations,
$$\dot M_k = \sum_{-n\le i,j\le n} v_{ki}\,u_{ij}\,\dot a_j, \quad -n\le k\le n, \tag{25}$$
where
$$v_{ki} = \operatorname{coeff}_{+i}(f^k) \text{ when } 0\le k\le i, \qquad v_{ki} = \overline{\operatorname{coeff}_{-i}(f^{-k})} \text{ when } i\le k<0,$$
$$v_{ki} = 0 \text{ in remaining cases},$$
$$u_{ij} = b_{-(i+j)} + b_0\,\delta_{i0}\delta_{0j} \text{ in the index intervals made explicit above}, \qquad u_{ij} = 0 \text{ in remaining cases}.$$

We see that the full matrix $V = (v_{ki})$ is triangular in each of the two blocks along the main diagonal and vanishes completely in the two remaining blocks. Therefore its determinant is simply the product of the diagonal elements. More precisely this becomes
$$\det V = a_0^{n(n+1)}. \tag{26}$$

The matrix $U = (u_{ij})$ represents the linear dependence of the bracket $\{f,f^*\}_t$ on $\dot f$ and $\dot f^*$, and it acts on the column vector with components $\dot a_j$. The computation started at (23) can thus be finalized as
$$\{f,f^*\}_t = \sum_{-n\le i,j\le n} u_{ij}\,\dot a_j\,\zeta^{-i}. \tag{27}$$

Returning to (25), this equation says that the matrix of partial derivatives $\partial M_k/\partial a_j$ equals the matrix product $VU$; in particular
$$\frac{\partial(M_{-n},\dots,M_{-1},M_0,M_1,\dots,M_n)}{\partial(a_{-n},\dots,a_{-1},a_0,a_1,\dots,a_n)} = \det V\cdot\det U.$$
The first determinant was already computed above, see (26). It remains to connect $\det U$ to the meromorphic resultant $\mathcal{R}(f',f^{*\prime})$.

For any kind of evolution, $\{f,f^*\}_t$ vanishes whenever $f'$ and $f^{*\prime}$ have a common zero. The meromorphic resultant $\mathcal{R}(f',f^{*\prime})$ is a complex number which has the same vanishing properties as $\{f,f^*\}_t$, and it is in a certain sense minimal with this property. From this one may expect that the determinant of $U$ is simply a multiple of the resultant. Taking homogeneities into account, the constant of proportionality should be $b_0^{2n+1}$, times possibly some numerical factor. The precise formula in fact turns out to be
$$\det U = 2\,b_0^{2n+1}\,\mathcal{R}(f',f^{*\prime}). \tag{28}$$
One way to prove it is to connect $U$ to the Sylvester matrix $S$ associated to the polynomial resultant $\mathcal{R}_{pol}(f'(\zeta),\zeta^{n+2}f^{*\prime}(\zeta))$. This matrix is of size $2n\times 2n$. By some operations with rows and columns (the details are given in [6], and will in addition be illustrated in the example below) one finds that $\det U = 2b_0\det S$. From this (28) follows, using also (18).

Now, the string equation is an assertion about a special evolution. The string equation says that $\{f,f^*\}_t = 1$ for that kind of evolution for which $\partial/\partial t$ means $\partial/\partial M_0$, in other words in the case that $\dot M_0 = 1$ and $\dot M_k = 0$ for $k\ne 0$. By what has already been proved, a unique such evolution exists, with $f$ kept on the form (16), as long as $\mathcal{R}(f',f^{*\prime})\ne 0$. Inserting $\dot M_k = \delta_{k0}$ in (25) gives
$$\sum_{-n\le i,j\le n} v_{ki}\,u_{ij}\,\dot a_j = \delta_{k0}, \quad -n\le k\le n. \tag{29}$$
It is easy to see from the structure of the matrix $V = (v_{ki})$ that the 0:th column of the inverse matrix $V^{-1}$, which is sorted out when $V^{-1}$ is applied to the right member in (29), is simply the unit vector with components $\delta_{k0}$. Therefore (29) is equivalent to
$$\sum_{-n\le j\le n} u_{ij}\,\dot a_j = \delta_{i0}, \quad -n\le i\le n. \tag{30}$$
Inserting this into (27) shows that the string equation indeed holds.

Example 3 To illustrate the above proof, and the general theory, we compute everything explicitly when $n = 2$, i.e., with
$$f(\zeta) = a_0\zeta + a_1\zeta^2 + a_2\zeta^3.$$
We shall keep the convention (22) in this example. Thus, for example,
$$f'(\zeta) = b_0 + b_1\zeta + b_2\zeta^2 = a_0 + 2a_1\zeta + 3a_2\zeta^2,$$
$$f^*(\zeta) = a_0\zeta^{-1} + a_{-1}\zeta^{-2} + a_{-2}\zeta^{-3}.$$
When Eq. (25) is written as a matrix equation it becomes (with zeros represented by blanks)
Denoting the two $5 \times 5$ matrices by $V$ and $U$ respectively, it follows that the corresponding Jacobi determinant is
$$\frac{\partial(M_{-2}, M_{-1}, M_0, M_1, M_2)}{\partial(a_{-2}, a_{-1}, a_0, a_1, a_2)} = \det V \cdot \det U = a_0^{6}\cdot \det U.$$
Here $U$ can essentially be identified with the Sylvester matrix for the resultant $R(f', f^{*\prime})$. To be precise,
$$\det U = 2\,b_0\,\det S, \tag{32}$$
where $S$ is the classical Sylvester matrix associated to the two polynomials $f'(\zeta)$ and $\zeta^4 f^{*\prime}(\zeta)$, namely
$$S = \begin{pmatrix} & b_2 & & b_0\\ b_2 & b_1 & b_0 & b_{-1}\\ b_1 & b_0 & b_{-1} & b_{-2}\\ b_0 & & b_{-2} & \end{pmatrix}.$$
As promised in the proof above, we shall explain in this example the column operations on $U$ leading from $U$ to $S$, and thereby proving (32) in the case $n = 2$ (the general case is similar). The matrix $U$ appears in (31). Let $U_{-2}, U_{-1}, U_0, U_1, U_2$ denote the columns of $U$. We make the following change of $U_0$:
$$U_0 \mapsto \frac{1}{2}U_0 - \frac{1}{2b_0}\left(b_{-2}U_{-2} + b_{-1}U_{-1} - b_1U_1 - b_2U_2\right).$$
The first term makes the determinant become half as big as it was before, and the other terms do not affect the determinant at all. The new matrix is the $5 \times 5$ matrix
$$\begin{pmatrix} & & b_2 & & b_0\\ & b_2 & b_1 & b_0 & b_{-1}\\ b_2 & b_1 & b_0 & b_{-1} & b_{-2}\\ b_1 & b_0 & & b_{-2} & \\ b_0 & & & & \end{pmatrix},$$
which has $b_0$ in the lower left corner, with the complementary $4 \times 4$ block being exactly $S$ above. From this (32) follows.

The string Eq. (20) becomes, in terms of coefficients and with $\dot a_j$ interpreted as $\partial a_j/\partial M_0$, the linear equation
$$\begin{pmatrix}a_0^2 & & & & \\ a_{-1} & a_0 & & & \\ & & 1 & & \\ & & & a_0 & a_1\\ & & & & a_0^2\end{pmatrix}
\begin{pmatrix} & & b_2 & & b_0\\ & b_2 & b_1 & b_0 & b_{-1}\\ b_2 & b_1 & 2b_0 & b_{-1} & b_{-2}\\ b_1 & b_0 & b_{-1} & b_{-2} & \\ b_0 & & b_{-2} & & \end{pmatrix}
\begin{pmatrix}\dot a_{-2}\\ \dot a_{-1}\\ \dot a_0\\ \dot a_1\\ \dot a_2\end{pmatrix}
=
\begin{pmatrix}0\\ 0\\ 1\\ 0\\ 0\end{pmatrix}.$$
Indeed, in view of (31) this equation characterizes the $\dot a_j$ as those belonging to an evolution such that $\dot M_0 = 1$, $\dot M_k = 0$ for $k \ne 0$.
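The determinant identity (32), together with the fact that $\det S$ equals a polynomial resultant, can be verified symbolically. In the sketch below (an illustration only) the matrices $U$ and $S$ for $n = 2$ are entered exactly as displayed above, with the conjugated coefficients $b_{-1}, b_{-2}$ treated as independent symbols `bm1`, `bm2`:

```python
import sympy as sp

zeta = sp.symbols('zeta')
b2, b1, b0, bm1, bm2 = sp.symbols('b2 b1 b0 bm1 bm2')

# U as in (31) and S as displayed, blanks entered as zeros
U = sp.Matrix([
    [0,  0,  b2,   0,   b0],
    [0,  b2, b1,   b0,  bm1],
    [b2, b1, 2*b0, bm1, bm2],
    [b1, b0, bm1,  bm2, 0],
    [b0, 0,  bm2,  0,   0],
])
S = sp.Matrix([
    [0,  b2, 0,   b0],
    [b2, b1, b0,  bm1],
    [b1, b0, bm1, bm2],
    [b0, 0,  bm2, 0],
])

p = b2*zeta**2 + b1*zeta + b0    # f'(zeta)
q = b0*zeta**2 + bm1*zeta + bm2  # zeta^4 f*'(zeta) up to sign (even degree,
                                 # so the resultant is unaffected)

print(sp.simplify(U.det() - 2*b0*S.det()))           # 0, confirming (32)
print(sp.simplify(S.det() - sp.resultant(p, q, zeta)))  # 0
```

Thus $\det U = 2b_0 \det S$ and $\det S$ coincides with the polynomial resultant of $f'$ and $\zeta^4 f^{*\prime}$, as used in the proof.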
As remarked in the step from (29) to (30), the first matrix, $V$, can actually be removed in this equation.

Acknowledgements The author wants to thank Irina Markina, Olga Vasilieva, Pavel Gumenyuk, Mauricio Godoy Molina, Erlend Grong and several others for generous invitations in connection with the mentioned conference ICAMI 2017, and for warm friendship in general. Some of the main ideas in this paper go back to work by Olga Kuznetsova and Vladimir Tkachev, whom I also thank warmly.

Compliance with Ethical Standards

Conflict of interest The author declares that he has no conflict of interest.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. Davis, P.J.: The Schwarz Function and Its Applications. The Mathematical Association of America, Buffalo, NY (1974)
2. Gustafsson, B.: The string equation for nonunivalent functions. arXiv:1803.02030 (2018)
3. Gustafsson, B.: The String Equation for Some Rational Functions. Trends in Mathematics. Birkhäuser, Basel (2018)
4. Gustafsson, B., Lin, Y.-L.: Non-univalent solutions of the Polubarinova–Galin equation. arXiv:1411.1909 (2014)
5. Gustafsson, B., Teodorescu, R., Vasil'ev, A.: Classical and Stochastic Laplacian Growth. Advances in Mathematical Fluid Mechanics. Birkhäuser, Basel (2014)
6. Gustafsson, B., Tkachev, V.G.: On the Jacobian of the harmonic moment map. Complex Anal. Oper. Theory 3(2), 399–417 (2009)
7. Gustafsson, B., Tkachev, V.G.: The resultant on compact Riemann surfaces. Commun. Math. Phys. 286(1), 313–358 (2009)
8. Gustafsson, B., Vasil'ev, A.: Conformal and Potential Analysis in Hele-Shaw Cells. Advances in Mathematical Fluid Mechanics. Birkhäuser, Basel (2006)
9. Hedenmalm, H., Korenblum, B., Zhu, K.: Theory of Bergman Spaces. Graduate Texts in Mathematics, vol. 199. Springer, New York (2000)
10. Hedenmalm, H.: A factorization theorem for square area-integrable analytic functions. J. Reine Angew. Math. 422, 45–68 (1991)
11. Kostov, I.K., Krichever, I., Mineev-Weinstein, M., Wiegmann, P.B., Zabrodin, A.: The τ-function for analytic curves. In: Random Matrix Models and Their Applications, Math. Sci. Res. Inst. Publ., vol. 40, pp. 285–299. Cambridge University Press, Cambridge (2001)
12. Krichever, I., Marshakov, A., Zabrodin, A.: Integrable structure of the Dirichlet boundary problem in multiply-connected domains. Commun. Math. Phys. 259(1), 1–44 (2005)
13. Kuznetsova, O.S., Tkachev, V.G.: Ullemar's formula for the Jacobian of the complex moment mapping. Complex Var. Theory Appl. 49(1), 55–72 (2004)
14. Mineev-Weinstein, M., Zabrodin, A.: Whitham–Toda hierarchy in the Laplacian growth problem. J. Nonlinear Math. Phys. 8(suppl), 212–218 (2001)
15. Richardson, S.: Hele-Shaw flows with a free boundary produced by the injection of fluid into a narrow channel. J. Fluid Mech. 56, 609–618 (1972)
16. Ross, J., Nyström, D.W.: The Hele-Shaw flow and moduli of holomorphic discs. Compos. Math. 151(12), 2301–2328 (2015)
17. Sakai, M.: A moment problem on Jordan domains. Proc. Am. Math. Soc. 70(1), 35–38 (1978)
18. Sakai, M.: Domains having null complex moments. Complex Var. Theory Appl. 7(4), 313–319 (1987)
19. Sakai, M.: Finiteness of the Family of Simply Connected Quadrature Domains. Potential Theory, pp. 295–305. Plenum, New York (1988)
20. Shapiro, H.S.: The Schwarz Function and Its Generalization to Higher Dimensions. University of Arkansas Lecture Notes in the Mathematical Sciences, vol. 9. Wiley, New York (1992)
21. Tkachev, V.G.: Ullemar's formula for the moment map II. Linear Algebra Appl. 404, 380–388 (2005)
22. Ullemar, C.: Uniqueness theorem for domains satisfying a quadrature identity for analytic functions. Research Bulletin TRITA-MAT-1980-37, Royal Institute of Technology, Department of Mathematics, Stockholm (1980)
23. van der Waerden, B.L.: Moderne Algebra. Springer, Berlin (1940)
24. Vasil'ev, A.: From the Hele-Shaw experiment to integrable systems: a historical overview. Complex Anal. Oper. Theory 3(2), 551–585 (2009)
25. Wiegmann, P.B., Zabrodin, A.: Conformal maps and integrable hierarchies. Commun. Math. Phys. 213(3), 523–538 (2000)
26. Zalcman, L.: Some inverse problems of potential theory. In: Integral Geometry, Contemp. Math., vol. 63, pp. 337–350. Amer. Math. Soc., Providence, RI (1987)
