# LECTURE 6

Rational Functions with Positive Power Series Coefficients

Turán once remarked to me that special functions were misnamed; they should be called useful functions. A striking illustration of this comes when considering an old problem of Friedrichs and Lewy, which arose from a finite difference approximation to the wave equation in two space dimensions. They conjectured that

(6.1) $\displaystyle\frac{1}{(1-r)(1-s) + (1-r)(1-t) + (1-s)(1-t)}$

has positive power series coefficients. Szegő gave a proof using Sonine's integral of the product of three Bessel functions (4.39). He also extended the original problem of Friedrichs and Lewy in two directions, to more variables and to different powers, and solved these problems. Lastly he translated this problem into an equivalent problem about the positivity of integrals of products of Laguerre polynomials. This will be our starting point, since not only can we use results of the last lecture to prove Szegő's theorems, but stronger positivity theorems are suggested by this formulation and proof.
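As a concrete illustration of what the conjecture asserts, the low-order coefficients of the function $[(1-r)(1-s) + (1-r)(1-t) + (1-s)(1-t)]^{-1}$ can be computed by machine and inspected. The following is only a finite sanity check in exact rational arithmetic, not a proof; the truncation degree $N$ is an arbitrary choice.

```python
from itertools import product
from fractions import Fraction

N = 6  # check every coefficient of total degree <= N

# The denominator is D = (1-r)(1-s) + (1-r)(1-t) + (1-s)(1-t) = 3 - u,
# where u = 2(r+s+t) - (rs + rt + st); store polynomials sparsely as
# exponent-tuple -> coefficient.
u = {
    (1, 0, 0): Fraction(2), (0, 1, 0): Fraction(2), (0, 0, 1): Fraction(2),
    (1, 1, 0): Fraction(-1), (1, 0, 1): Fraction(-1), (0, 1, 1): Fraction(-1),
}

def mul(p, q, cap):
    """Multiply two sparse polynomials, dropping terms of total degree > cap."""
    out = {}
    for (m, a), (n, b) in product(p.items(), q.items()):
        e = tuple(i + j for i, j in zip(m, n))
        if sum(e) <= cap:
            out[e] = out.get(e, Fraction(0)) + a * b
    return out

# 1/D = 1/(3 - u) = sum_{j >= 0} u^j / 3^(j+1); since u has no constant term,
# only j <= N contributes below total degree N.
coeffs = {}
power = {(0, 0, 0): Fraction(1)}  # u^0
for j in range(N + 1):
    for e, c in power.items():
        coeffs[e] = coeffs.get(e, Fraction(0)) + c / Fraction(3)**(j + 1)
    power = mul(power, u, N)

print(all(c > 0 for c in coeffs.values()))
```

For instance the coefficient of $rst$ comes out to $4/27$ and the constant term to $1/3$, both positive, in agreement with the conjecture.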

Laguerre polynomials $L_n^\alpha(x)$ have the generating function

(6.2) $\displaystyle\sum_{n=0}^{\infty} L_n^\alpha(x)\,r^n = (1-r)^{-\alpha-1}\exp\Bigl(\frac{-xr}{1-r}\Bigr), \qquad |r| < 1.$
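As a quick numerical illustration of the generating function $\sum_n L_n^\alpha(x) r^n = (1-r)^{-\alpha-1}e^{-xr/(1-r)}$, a truncated series can be compared against the closed form. This is a sketch using the standard three-term recurrence for $L_n^\alpha$; the particular values of $\alpha$, $x$ and $r$ are arbitrary test choices.

```python
import math

def genlaguerre(n, a, x):
    """L_n^a(x) via the three-term recurrence
    (k+1) L_{k+1} = (2k+a+1-x) L_k - (k+a) L_{k-1}."""
    p_prev, p = 1.0, 1.0 + a - x  # L_0 and L_1
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2*k + a + 1 - x)*p - (k + a)*p_prev) / (k + 1)
    return p

a, x, r = 0.5, 1.3, 0.4  # arbitrary test point with |r| < 1
lhs = sum(genlaguerre(n, a, x) * r**n for n in range(80))  # truncated series
rhs = (1 - r)**(-a - 1) * math.exp(-x*r/(1 - r))           # closed form
print(abs(lhs - rhs) < 1e-10)
```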

To translate (6.1) into a more manageable problem observe that

(6.3) $\displaystyle\frac{1}{(1-r)(1-s)+(1-r)(1-t)+(1-s)(1-t)} = \int_0^\infty \frac{e^{-x/(1-r)}}{1-r}\cdot\frac{e^{-x/(1-s)}}{1-s}\cdot\frac{e^{-x/(1-t)}}{1-t}\,dx.$

Thus a problem which was originally complicated because of the intermixing of the variables $r$, $s$ and $t$ has been changed into another problem with $r$, $s$ and $t$ separated. However, the problem has not disappeared; it is contained in the integration. The next step is clearly to expand in power series and integrate term by term. This cannot be done directly with $e^{-x/(1-r)}/(1-r)$, since the coefficient of $r^n$ is a power series in $x$ which is not easily recognizable. However, the simple reduction

$\displaystyle\frac{e^{-x/(1-r)}}{1-r} = e^{-x}\,\frac{e^{-xr/(1-r)}}{1-r} = e^{-x}\sum_{n=0}^{\infty} L_n(x)\,r^n,$

with

(6.4) $\displaystyle a(k,m,n) = \int_0^\infty e^{-3x}\,L_k(x)\,L_m(x)\,L_n(x)\,dx$

Szegő had these formulas, and (6.4) suggested Sonine's integral (4.39) to him. He then used (2.45) as a substitute for the generating function (6.2) and proved the following theorem.

Theorem 6.1. If $f(x) = (x - r_1)\cdots(x - r_n)$, then $[f'(1)]^{-\alpha-1}$ is an absolutely monotonic function for $\alpha \ge -\frac{1}{2}$, i.e., the coefficients $a^\alpha(k_1, \cdots, k_n)$ in (6.5) are nonnegative. These coefficients are strictly positive when $n > 4(\alpha + 1)/(2\alpha + 1)$, $\alpha > -\frac{1}{2}$, and for $n \ge 3$ when $\alpha \ge 0$.

The expansion

(6.5) $\displaystyle [f'(1)]^{-\alpha-1} = \sum_{k_1, \cdots, k_n \ge 0} a^\alpha(k_1, \cdots, k_n)\, r_1^{k_1}\cdots r_n^{k_n},$

combined with the integral representation $c^{-\alpha-1} = \frac{1}{\Gamma(\alpha+1)}\int_0^\infty x^\alpha e^{-cx}\,dx$, $c > 0$, gives

(6.6) $\displaystyle [f'(1)]^{-\alpha-1} = \frac{1}{\Gamma(\alpha+1)}\int_0^\infty x^\alpha e^{-nx} \prod_{j=1}^{n} \frac{e^{-xr_j/(1-r_j)}}{(1-r_j)^{\alpha+1}}\,dx,$

and

(6.7) $\displaystyle a^\alpha(k_1, \cdots, k_n) = \frac{1}{\Gamma(\alpha+1)}\int_0^\infty x^\alpha e^{-nx}\, L_{k_1}^\alpha(x)\cdots L_{k_n}^\alpha(x)\,dx.$
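Assuming the coefficients have the integral form $a^\alpha(k_1,\cdots,k_n) = \frac{1}{\Gamma(\alpha+1)}\int_0^\infty x^\alpha e^{-nx}\prod_j L^\alpha_{k_j}(x)\,dx$, the case $\alpha = 0$, $n = 4$ can be checked exactly in rational arithmetic, since $\int_0^\infty x^j e^{-4x}\,dx = j!/4^{j+1}$. This finite check of small indices is an illustration, not a proof.

```python
from fractions import Fraction
from math import comb, factorial

def laguerre_coeffs(n):
    """Coefficient list of L_n(x) (alpha = 0), constant term first."""
    return [Fraction((-1)**j * comb(n, j), factorial(j)) for j in range(n + 1)]

def polymul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def coeff(ks):
    """Exact value of the integral of e^{-nx} L_{k_1} ... L_{k_n} over (0, inf),
    alpha = 0, using the moments j!/n^(j+1)."""
    n = len(ks)
    prod = [Fraction(1)]
    for k in ks:
        prod = polymul(prod, laguerre_coeffs(k))
    return sum(c * Fraction(factorial(j), n**(j + 1)) for j, c in enumerate(prod))

# Theorem 6.1 for n = 4, alpha = 0 predicts strictly positive coefficients
vals = [coeff((k1, k2, k3, k4))
        for k1 in range(4) for k2 in range(4)
        for k3 in range(4) for k4 in range(4)]
print(all(v > 0 for v in vals))
```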

One integral which we know to be nonnegative and somewhat similar to the integral in (6.7) is Dougall's integral (5.10). Clearly

We shall prove this theorem when $n = 3$ using Szegő's formula

or as it is usually written,

Also we need the limit relation

$\displaystyle\lim_{\beta\to\infty} P_n^{(\alpha,\beta)}\Bigl(1 - \frac{2x}{\beta}\Bigr) = L_n^\alpha(x),$

and then (6.11) gives

The details are given in Askey-Gasper. The strict positivity for $\alpha \ge 0$ follows from

so that
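The limit relation connecting Jacobi and Laguerre polynomials, $\lim_{\beta\to\infty} P_n^{(\alpha,\beta)}(1 - 2x/\beta) = L_n^\alpha(x)$, can be tested numerically for integer parameters, using the explicit sum $P_n^{(a,b)}(y) = \sum_s \binom{n+a}{n-s}\binom{n+b}{s}\bigl(\frac{y-1}{2}\bigr)^s\bigl(\frac{y+1}{2}\bigr)^{n-s}$. The values of $n$, $x$ and the large $\beta$ below are arbitrary test choices.

```python
from math import comb

def jacobi(n, a, b, y):
    """P_n^{(a,b)}(y) for integer a, b >= 0, via the explicit finite sum."""
    return sum(comb(n + a, n - s) * comb(n + b, s)
               * ((y - 1) / 2)**s * ((y + 1) / 2)**(n - s)
               for s in range(n + 1))

def laguerre(n, x):
    """L_n(x) (alpha = 0) by the three-term recurrence."""
    p_prev, p = 1.0, 1.0 - x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2*k + 1 - x)*p - k*p_prev) / (k + 1)
    return p

n, x, beta = 3, 1.7, 10**6
approx = jacobi(n, 0, beta, 1 - 2*x/beta)  # Jacobi side of the limit
print(abs(approx - laguerre(n, x)) < 1e-4)
```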

To use (6.8) there must be a connection between $P_n^{(\alpha,\beta)}(x)$ and $L_n^\alpha(x)$. This connection is provided by

(6.9) with

Observe that the coefficients of $P_n^{(\alpha,\beta)}(x)$ and $P_k^{(\alpha,\beta)}(x)$ in (6.10) are positive for $\alpha, \beta > -1$. Both of these formulas are quite general and easily motivated (see Askey-Gasper). Together they lead immediately to the weak version of Theorem 6.1 that $a^\alpha(k, m, n) \ge 0$, $\alpha \ge -\frac{1}{2}$. For (6.10) and (6.8) together give

(6.12) $\displaystyle\int_{-1}^{1} P_k^{(\alpha-\frac12+j)}(x)\,P_m^{(\alpha-\frac12+j)}(x)\,P_n^{(\alpha-\frac12+j)}(x)\,(1-x)^{\alpha}(1+x)^{\alpha+3j}\,dx \ge 0, \qquad \alpha \ge -\tfrac12,$

When $\alpha \ge 0$ all the terms on the right are nonnegative, so it will be sufficient to find one positive term. The term with $a = k$, $b = m$, $c = 0$ is positive since

This integral is dual to the positive kernels considered in Lecture 2. Since orthogonal polynomials satisfy second order difference equations in $n$ (the recurrence relation in $n$) and the classical polynomials of Jacobi, Laguerre and Hermite satisfy second order differential equations in $x$, it is natural to see what happens to a given formula when $x$ and $n$ are interchanged and integration and summation are substituted for each other. When this is done, formula (6.14) is obtained. For when $c = 0$, (6.14) is a constant multiple of the discrete delta function and (6.15) is the formal reproducing kernel, i.e., the delta function, when $c = 0$. So the positivity of (6.14) is not surprising, and it can be proved either by explicitly computing (6.14) or from the differential difference equation it satisfies. For this proof, see Karlin-McGregor or Szegő [9, Problems 81 and 82]. Karlin and McGregor have also given a probabilistic meaning to (6.14) (see Karlin-McGregor).

From Theorem 4.4.

There are two natural ways of trying to extend the above proof to more than three variables. We can use

(6.19) $\displaystyle\int_{-1}^{1} \prod_{i=1}^{n} P_{k_i}^{(\alpha,\alpha)}(x)\,(1-x^2)^{\alpha}\,dx \ge 0, \qquad \alpha \ge -\tfrac12, \quad k_i = 0, 1, \cdots,$

Surprisingly, the dual result to Theorem 6.1 (when stated as the nonnegativity of (6.7)) is completely false. In fact, it is an easy consequence of Sarmanov's Theorem 4.4. For the dual problem is to find a sequence $c_n$ for which

Letting $z = 0$ gives $c_n \ge 0$, and then for $n \ge 1$ there is a $z$ so that $L_n^\alpha(z) < 0$. Then the left-hand side is nonpositive and the right-hand side is nonnegative, and so both are zero. Thus $c_n = 0$, $n = 1, 2, \cdots$, and the only way to have (6.16) hold is to have the left-hand side reduce to a constant. This is not the case with respect to Szegő's result for any sequence of $k_j$, which follows from

$c_{k,m,n} \ge 0$ when $\alpha \ge -\frac{1}{2}$. This proof goes through with no trouble (see Askey-Gasper).

Or we can try to iterate directly using (6.18). Another way of stating (6.18) is

with $b^*(k, m, n) \ge 0$ when $\alpha \ge -\frac{1}{2}$. Then (6.21) can be iterated to obtain

$c^*(k, j, m, n) \ge 0$, $\alpha \ge -\frac{1}{2}$. These two methods lead to different results. The first method leads to

while the second only leads to

The first is much stronger than the second, for the positivity of (6.14) leads to the following theorem.

Theorem 6.2. If $\alpha > -1$ and

then

unless $f(x) = 0$, $x \ge 0$.

It would be nice if a stronger result than (6.21) held, one which did not become weaker than it should under iteration. Such a theorem would follow if we could replace $e^{-2x}L_n^\alpha(x)$ by $e^{-x}L_n^\alpha(x)$.

For $\alpha = -\frac{1}{2}$ this is not possible, by a formula of Titchmarsh. But it is for $\alpha = 0$. More generally, the following theorem holds (see Askey-Gasper).
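The $\alpha = 0$ case of the theorem below can be checked exactly for small indices, assuming (as the surrounding discussion suggests; the displayed statement itself is lost from this text) that it asserts $\int_0^\infty e^{-2x} L_k(x)L_m(x)L_n(x)\,dx \ge 0$. The check uses the moments $\int_0^\infty x^j e^{-2x}\,dx = j!/2^{j+1}$ in exact rational arithmetic.

```python
from fractions import Fraction
from math import comb, factorial

def laguerre_coeffs(n):
    """Coefficient list of L_n(x) (alpha = 0), constant term first."""
    return [Fraction((-1)**j * comb(n, j), factorial(j)) for j in range(n + 1)]

def polymul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def improved_integral(k, m, n):
    """Exact value of the integral of e^{-2x} L_k L_m L_n over (0, inf),
    using the moments j!/2^(j+1)."""
    c = polymul(polymul(laguerre_coeffs(k), laguerre_coeffs(m)), laguerre_coeffs(n))
    return sum(cj * Fraction(factorial(j), 2**(j + 1)) for j, cj in enumerate(c))

vals = [improved_integral(k, m, n)
        for k in range(6) for m in range(6) for n in range(6)]
print(all(v >= 0 for v in vals))
```

For example $\int_0^\infty e^{-2x}[L_1(x)]^3\,dx = 1/8 > 0$, consistent with $\alpha = 0$ lying above the critical value $(-5+\sqrt{17})/2 \approx -0.44$.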

Theorem 6.3. If $\alpha \ge (-5 + \sqrt{17})/2$, then

and the only case of equality is $\alpha = (-5 + \sqrt{17})/2$, $k = m = n = 1$. Also

The proof of Theorem 6.3 is quite interesting. Recall that (6.4) came from the rational function (6.1), and more generally (6.7) came from the expansion of

(6.26) $\displaystyle\bigl[(1-r)(1-s) + (1-r)(1-t) + (1-s)(1-t)\bigr]^{-\alpha-1}.$

However, it does not seem to be possible to prove Theorem 6.1 directly from this function. Kaluza gave an involved proof of Theorem 6.1 when $\alpha = 0$, $n = 3$ without using special functions, but so far his proof has not been extended to prove all of Theorem 6.1. With respect to (6.25) the opposite is true. I do not know how to prove (6.25) directly, but it can be translated back to a problem involving power series coefficients of an algebraic function,

The function on the left-hand side of (6.27) can be expanded in a power series using the multinomial theorem, and the coefficient of $r^k s^m t^n$ is a positive multiple of

The left-hand side can also be written as

and the function (6.26) can be written as

(6.30) ${}_3F_2\bigl(-k, -m, -n;\ (-\alpha - k - m - n)/2,\ (1 - \alpha - k - m - n)/2;\ 1\bigr).$

When $\alpha = -\frac{1}{2}$ this function is Saalschützian, i.e., it is

a ${}_3F_2(a, b, c;\, d, e;\, 1)$ with $a + b + c + 1 = d + e$ and one of $a$, $b$, $c$ a negative integer, so the series terminates. Using Saalschütz's formula (Bailey) this ${}_3F_2$ is a positive multiple of $(-1)^{k+m+n}\,\Gamma(m + n - k + \frac{1}{2})\,\Gamma(m + k - n + \frac{1}{2})\,\Gamma(n + k - m + \frac{1}{2})$, and this is not nonnegative for all $k$, $m$, $n$. This is equivalent to a formula of Titchmarsh. For
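For reference, Saalschütz's formula in its standard modern form is the following (quoted from the classical literature, since the lecture's own display is not reproduced here; $(a)_n$ denotes the shifted factorial):

```latex
\[
{}_3F_2\!\left[\begin{matrix} -n,\; a,\; b \\ c,\; 1+a+b-c-n \end{matrix};\, 1\right]
  = \frac{(c-a)_n\,(c-b)_n}{(c)_n\,(c-a-b)_n}, \qquad n = 0, 1, 2, \cdots.
\]
```

The balance condition $-n + a + b + 1 = c + (1 + a + b - c - n)$ is exactly the Saalschützian condition stated above.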

Kummer proved (6.33) by changing variables (let $t = 1 - y$) in the integral

$\displaystyle {}_2F_1(a, b; c; x) = \frac{\Gamma(c)}{\Gamma(b)\,\Gamma(c - b)} \int_0^1 t^{b-1}(1-t)^{c-b-1}(1 - xt)^{-a}\,dt, \qquad c > b > 0,$

and simplifying to get

For $\alpha = 0, 1, \cdots$ the ${}_3F_2$ (6.30) has not been summed, but it is not very hard to prove it is positive. Assume $k \le m$, $k \le n$ and reverse the sum in (6.27) to obtain

This changes a sum whose terms have five factors which change sign in going from one term to the next into a series with only one factor which changes sign per term, but we have to show that $(-1)^k\,{}_3F_2$ is positive. This can be done when $\alpha = 0$ by use of the Kummer-Thomae-Whipple formula
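One classical member of the Thomae–Whipple family of ${}_3F_2(1)$ transformations is the following (a standard form, valid when the series terminate or converge; whether this is the precise variant (6.36) used in the proof cannot be read off from the text):

```latex
\[
{}_3F_2\!\left[\begin{matrix} a,\; b,\; c \\ d,\; e \end{matrix};\, 1\right]
  = \frac{\Gamma(e)\,\Gamma(d+e-a-b-c)}{\Gamma(e-a)\,\Gamma(d+e-b-c)}\;
    {}_3F_2\!\left[\begin{matrix} a,\; d-b,\; d-c \\ d,\; d+e-b-c \end{matrix};\, 1\right].
\]
```

Setting $a = 0$ or $c = 0$ reduces both sides to Gauss's theorem, which is a quick consistency check on the parameter placement.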

Watson's proof of (6.32) is particularly simple. He used

(6.33) $\displaystyle {}_2F_1(a, b; c; x) = (1-x)^{c-a-b}\,{}_2F_1(c-a,\, c-b;\, c;\, x).$

This formula of Euler is one of the basic formulas for hypergeometric functions. For example, as Dougall remarked, Saalschütz's formula comes from multiplying the two functions on the right and equating coefficients. Euler proved (6.33) by changing variables in the differential equation satisfied by ${}_2F_1(a, b; d; x)$. Then the symmetry of ${}_2F_1(a, b; d; x)$ in $a$ and $b$ was used, and (6.35) was used a second time to obtain (6.33). Finally (6.33) is multiplied by $x^{c-1}(1-x)^{e-c-1}$ and integrated from $x = 0$ to $x = 1$, and formula (6.32) is obtained. This is one of a large number of transformation formulas which were systematically found by Thomae and later by Whipple. Whipple also organized these formulas into a manageable group. However, (6.32) was first stated by Kummer [1, p. 172], and it is likely that the proof which was given above was the method Kummer used to discover this transformation formula.
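The two-step derivation just described can be written out. In standard notation, Pfaff's transformation is

```latex
\[
{}_2F_1(a, b; c; x) = (1-x)^{-a}\,{}_2F_1\!\Bigl(a,\, c-b;\, c;\, \frac{x}{x-1}\Bigr),
\]
% applying the transformation a second time, in its symmetric form acting on
% the parameter c-b, and using 1 - x/(x-1) = 1/(1-x) and
% (x/(x-1)) / (x/(x-1) - 1) = x, gives
\[
{}_2F_1(a, b; c; x)
  = (1-x)^{-a}(1-x)^{c-b}\,{}_2F_1(c-b,\, c-a;\, c;\, x)
  = (1-x)^{c-a-b}\,{}_2F_1(c-a,\, c-b;\, c;\, x),
\]
```

which is Euler's transformation.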

The coefficient multiplying this ${}_3F_2$ is positive, and in the series for the ${}_3F_2$ there are terms with factors

Finally, to complete the proof which was started above, use (6.32) on (6.31) to get

The factor (6.37) is positive for $\alpha > -\frac{1}{2}$, $j = 0, 1, \cdots, k$, and the factor (6.38) is positive for $j = 0, 1, \cdots, (n + k - m + \alpha)/2$. When $\alpha = 0, 1, \cdots$, it vanishes for $2j > n + k - m + \alpha$, and this completes the proof of Theorem 6.3 for $\alpha = 0, 1, \cdots$. The deeper result that it holds for $\alpha \ge (-5 + \sqrt{17})/2$ was proved by finding a recurrence relation in $n$ for the hypergeometric function on the right-hand side of (6.36). A method developed by Bailey to extend results of Watson was used. The details are fairly complicated, and so will not be given here (see Askey-Gasper).

A few further results have been obtained. Theorem 6.1 fails for $\alpha < -\frac{1}{2}$ in a very strong sense, for if $-1 < \alpha < -\frac{1}{2}$ there is no $p < \infty$ for which

Also (6.39) fails for $p < \frac{1}{2}$ for any $\alpha$ and some $k$, $m$, $n$. Both of these results are in Askey-Gasper.

When $p = \frac{1}{2}$ there is a weaker result which follows from a result to be mentioned in Lecture 7 (see Askey).

Theorem 6.4. If $\alpha = -\frac{1}{2}, 0, \frac{1}{2}, \cdots$, then

with $c^*(k, m, n) \ge 0$, $k, m, n = 0, 1, \cdots$.

Once again the proof is surprising, for it again reverts to a problem on Jacobi polynomials. And part of the proof was suggested by the proof of Theorem 6.1. The restriction on $\alpha$ should probably be $\alpha \ge -\frac{1}{2}$, and the extraneous condition that $2\alpha$ is an integer arises because of our lack of knowledge about Jacobi polynomials, and ultimately about ${}_4F_3$'s. This will be given in Lecture 7.

It is unclear how these results will ultimately be viewed. There are probably combinatorial ways of looking at them, at least in the case of rational functions, and this should be investigated. There are interesting Banach algebras associated with Theorems 6.1 and 6.3. And finally there is the problem of finding the right geometric space on which these functions live. Peetre solved this question for $e^{-x/2}L_n(x)$, as was mentioned in Lecture 4. Since $e^{-x}L_n(x)$ behaves like a set of characters in the sense that products can be written as a sum with positive coefficients, there is probably some space and associated group on which they live. It would be interesting to find it.

Since Laguerre polynomials are limits of Meixner polynomials, it would be reasonable to suspect that there is a corresponding theorem for Meixner polynomials. This is totally false, for if

implies that $a(x) = 0$, $x = 1, 2, \cdots$, as was proved in Askey-Gasper; the above analogue of Sarmanov's theorem for Meixner polynomials was proved by Tyan.

Part of the fascination of Szegő's result is that it only holds for Laguerre polynomials, or to be precise, no other polynomials whose spectral measure has unbounded support have been shown to have a similar type of positive integral. As we saw above, when $\alpha \ge (-5 + \sqrt{17})/2$ there is an improvement possible in Szegő's theorem, and a similar improvement probably exists for each $\alpha > -\frac{1}{2}$. It is likely that the integral (6.13) is strictly positive for $\alpha > -\frac{1}{2}$, and that the factor $e^{-3x}$ can be replaced by $e^{-c(\alpha)x}$, $c(\alpha) < 3$, when $\alpha > -\frac{1}{2}$. It is unclear if $e^{-3x}$ is the best that can be used when $\alpha = -\frac{1}{2}$. It would be interesting to obtain the best possible result in this case. Also there are interesting problems related to rational functions. One which has caused me many hours of frustration is trying to prove that

has positive power series coefficients. So far the most powerful method of treating problems of this type is to translate them into another problem involving special functions and then use the results and methods which have been developed over the last two hundred years to solve the special function problem. So far I have been unable to make a reduction of (6.41), and so have no place to start. However, it is possible to solve some problems without using special functions, so others should not give up on (6.41).

Added in proof. I have been reading some of the early papers on hypergeometric series and need to modify some of the comments in this lecture. Saalschütz had been anticipated by Pfaff, so I propose to rename the series which Whipple called Saalschützian. In the future they should be called balanced and, more generally, $k$-balanced if the sum of the numerator parameters plus $k$ is the sum of the denominator parameters and the series terminates. See Askey. The formula (6.35) was found by Pfaff [1], and the proof of it given on page 53 was given by Jacobi.