LECTURE 5

Linearization of Products

One consequence of the addition formula for cosines is

(5.1)    cos mθ cos nθ = ½ cos(m + n)θ + ½ cos(m − n)θ.
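Formula (5.1) can be checked numerically. The short sketch below (plain Python, assuming nothing beyond the identity itself) also shows why it is a "linearization": since cos nθ is cos θ raised through the Chebyshev recurrence, the product of two members of the family is re-expanded in the same family with nonnegative coefficients.

```python
import math

# Check (5.1): cos(m t) cos(n t) = (1/2) cos((m+n) t) + (1/2) cos((m-n) t)
for m in range(6):
    for n in range(6):
        for t in (0.3, 1.1, 2.7):
            lhs = math.cos(m * t) * math.cos(n * t)
            rhs = 0.5 * math.cos((m + n) * t) + 0.5 * math.cos((m - n) * t)
            assert abs(lhs - rhs) < 1e-12
```

Both coefficients are ½, so in this prototype case the nonnegativity of the linearization coefficients is immediate.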

 

 

Since cos nθ is a polynomial of degree n in cos θ, a natural generalization of (5.1) is the problem of determining, or at least saying something useful about, the coefficients in

(5.2)    p_m(x)p_n(x) = Σ_{k=0}^{m+n} a(k, m, n)p_k(x).

The most important special case of (5.2) is

(5.3)

Fortunately there are other classical results which are a bit more complicated, and so will tell more about the general type of result that might hold for wide classes of polynomials. For example, there is

(5.4)

and the result of Ferrers [1, Example 10, p. 156] and Adams [1]:

(5.5)

where

(5.6)

In both of these cases, as in (5.1) and (5.3), the linearization coefficients are nonnegative. In the case of ultraspherical polynomials an explicit formula is known for these coefficients and the formula is simple enough so that the sign properties are obvious. Explicitly it is

(5.7)

Dougall [2] first stated (5.7) but did not give his proof, which he said was complicated. Hsu [1] proved (5.7) by induction. Adams [1] remarked that both he and Ferrers found (5.5) by finding the coefficients for small values of m (Adams derived these coefficients for m = 1, 2, 3, 4 in his paper), guessing what the result would be for arbitrary m and n, and proving it by induction. This is not a bad way to discover the formula, but a more systematic method should be found. Neumann [1] found such a method, and it is still the most powerful method which is known. He computed a fourth order differential equation satisfied by P_n(x)P_m(x) and used it to set up a recurrence relation for the linearization coefficients. Then he solved this recurrence relation. His argument is given in Hobson [1]. Hylleraas [1] set up a similar equation satisfied by C_n^ν(x)C_m^ν(x), this time a fifth order equation, and obtained Dougall's formula (5.7) in a very natural way.

Carlitz [4] proved (5.7) by another natural method. One of the results he used was Gegenbauer's addition formula for Bessel functions (4.36). This is interesting, for Dougall [2] claimed that it was possible to obtain Sonine's integral (4.39) from (5.7), and Hsu [1] gave the details. It is not hard to go from (4.39) to Gegenbauer's addition formula (4.36), so in some sense these results are equivalent. Carlitz's argument is one of the few I know in which one goes from Bessel functions to ultraspherical polynomials, rather than the other way around. One other instance was given in Lecture 3.

Vilenkin [2, Chap. 3, § 8] gave a group theoretic proof of (5.5) but his proof of (5.7) was by induction. It should be possible to derive (5.7) directly when 2ν is an integer by the method Vilenkin used in the case ν = ½. Bailey [1] gives a proof of (5.5) by means of Whipple's transformation of a Saalschützian ₄F₃ to a well-poised ₇F₆. A. Verma called this reference to my attention. Bailey used the same method to generalize (5.5) to Legendre functions of both the first and the second kind. His results for functions of the second kind are interesting, but for functions of the first kind he does not seem to realize that Dougall's formula (5.7) contains his result. In particular, his comment on page 177 that his formula (3.3) does not generalize to noninteger values of his parameters is misleading, since Dougall's formula (5.7) provides a partial extension. This is just an instance of the fact that ultraspherical polynomials provide a nicer extension of Legendre polynomials than Legendre functions do for many problems. Dougall [3] gave yet another proof of (5.5). It is quite interesting.

 

For Jacobi polynomials the situation was far from satisfactory (we have seen this before). Hylleraas [1] had set up a recurrence relation for the coefficients in

(5.8)

and he was able to solve this equation not only when α = β but when α = β + 1. Again the coefficients were products of gamma functions. In the general case the most compact formula which has been obtained is a double sum, Miller [2], and this sum has proved to be too complicated to use. Gasper [2], [3] used the recurrence relation of Hylleraas to prove the following theorem.

Theorem 5.1. The coefficients in (5.8) are nonnegative for all k, m, n if and only if α ≥ β > −1 and

(α + β + 1)(α + β + 4)²(α + β + 6) ≥ (α − β)²[(α + β + 1)² − 7(α + β + 1) − 24].

In particular the nonnegativity of the coefficients holds for α ≥ β > −1, α + β ≥ −1.

Applications of these results are given in Hirschman [2], Davis-Hirschman [1], Askey-Wainger [1], [2], Askey-Gasper [3], and Gasper [3].
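Theorem 5.1 can be checked by brute force in small cases. The sketch below is not Gasper's argument; it builds P_n^{(α,β)} from the standard hypergeometric series (α+1)_n/n! · ₂F₁(−n, n+α+β+1; α+1; (1−x)/2) — a representation assumed here, not displayed above — expands a product in the Jacobi basis with exact rational arithmetic, and verifies that the coefficients are nonnegative for α = 1, β = 0, a parameter pair covered by the theorem.

```python
from fractions import Fraction as F

def poch(a, k):
    # Pochhammer symbol (a)_k
    r = F(1)
    for i in range(k):
        r *= a + i
    return r

def jacobi(n, a, b):
    # P_n^{(a,b)} as a polynomial in t = (1 - x)/2, low degree first,
    # from the series (a+1)_n/n! * 2F1(-n, n+a+b+1; a+1; t)
    return [poch(F(a + 1), n) / poch(F(1), n)
            * poch(F(-n), k) * poch(F(n + a + b + 1), k)
            / (poch(F(a + 1), k) * poch(F(1), k))
            for k in range(n + 1)]

def mul(p, q):
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def linearize(m, n, a, b):
    # coefficients a(k, m, n) in P_m P_n = sum_k a(k, m, n) P_k
    basis = [jacobi(k, a, b) for k in range(m + n + 1)]
    q = mul(basis[m], basis[n])
    coeffs = [F(0)] * (m + n + 1)
    for k in range(m + n, -1, -1):
        coeffs[k] = q[k] / basis[k][k]   # P_k has exact degree k
        for i, ci in enumerate(basis[k]):
            q[i] -= coeffs[k] * ci
    assert all(c == 0 for c in q)        # the expansion is exact
    return coeffs

# alpha = 1, beta = 0 satisfies alpha >= beta > -1, alpha + beta >= -1
for m in range(4):
    for n in range(4):
        assert all(c >= 0 for c in linearize(m, n, 1, 0))
```

Since the change of variable t = (1 − x)/2 is affine, the expansion coefficients in the Jacobi basis are the same whether one works in x or in t, which is why the computation can stay in t throughout.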

 

For orthogonal polynomials another way of stating the problem of the signs of the coefficients in (5.2) is to consider the sign behavior of

(5.9)    ∫ p_k(x)p_m(x)p_n(x) dα(x).

Dougall's formula (5.7) can be written as

 

(5.10)

 

when k + m + n = 2s is even and a triangle with sides k, m, n exists, and the integral is zero in all other cases. The numbers a_n are defined by

 

 

In addition to the fact that (5.10) will be used in the next lecture, it is sometimes useful to consider (5.9) instead of (5.2). This arises when polynomials on a finite set of points are considered. For example, consider the Krawtchouk polynomials

 

 

These polynomials are orthogonal on {0, 1, …, N} with respect to the binomial distribution \binom{N}{x} p^x (1 − p)^{N−x}. The right problem to consider for them is

 

(5.11)

 

Eagleson [1] proved (5.11) for 1/2 ≤ p < 1 by using the generating function

(5.12)

For 0 < p < 1/2 he obtained a similar result by considering

 

 

The inequality (5.11) can also be stated as

(5.13)

with

 

 

 

Observe that when m + n > N, (5.13) does not give the equality of two polynomials for all x, but only for x = 0, 1, …, N. This is why (5.11) is a more natural result than (5.13). A similar generating function argument can be used to show that

 

And since all of these discrete polynomials are self-dual, i.e.,

 

 

 

each of the results (5.11), (5.18), (5.19) gives a linearization result of the type considered in the last lecture. Eagleson [1] gave an application of his result to find the values of r for which the kernel (2.51) for Krawtchouk polynomials is nonnegative. When 1/2 ≤ p < 1 it is nonnegative for 1 − 1/p ≤ r ≤ 1. Recall that we stated a similar theorem for the Poisson kernel for Jacobi series in Lecture 2.
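The display defining the Krawtchouk polynomials did not survive above; assuming the standard hypergeometric normalization K_n(x) = ₂F₁(−n, −x; −N; 1/p) (an assumption, since the text's definition is missing), their orthogonality with respect to the binomial weight can be verified with exact rational arithmetic:

```python
from fractions import Fraction as F
from math import comb

def poch(a, k):
    # Pochhammer symbol (a)_k
    r = F(1)
    for i in range(k):
        r *= a + i
    return r

def krawtchouk(n, x, p, N):
    # K_n(x) = 2F1(-n, -x; -N; 1/p), a terminating sum since (-n)_k = 0 for k > n
    return sum(poch(F(-n), k) * poch(F(-x), k)
               / (poch(F(-N), k) * poch(F(1), k)) / p**k
               for k in range(n + 1))

def inner(m, n, p, N):
    # sum against the binomial distribution binom(N, x) p^x (1-p)^(N-x)
    return sum(comb(N, x) * p**x * (1 - p)**(N - x)
               * krawtchouk(m, x, p, N) * krawtchouk(n, x, p, N)
               for x in range(N + 1))

N, p = 6, F(1, 3)
for m in range(5):
    for n in range(5):
        if m != n:
            assert inner(m, n, p, N) == 0   # orthogonality
        else:
            assert inner(n, n, p, N) > 0    # positive norm
```

The exact zeros (not just small floating-point residues) are what make the finite, rational setting of these polynomials pleasant to experiment with.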

The Hermite result can be stated as

 

 

This formula was used by Ginibre [1]. He only needed the nonnegativity of the coefficients.

 

Consider boxes A, B, and C which hold k, m, and n objects respectively. The objects in A are labeled a_1, …, a_k; in B, b_1, …, b_m; and in C, c_1, …, c_n. How many ways can these objects be moved so that no element is in the box it started in and each box has as many elements after the moves as before? We distinguish between the objects a_1, …, a_k but not the places they hold in the box they move to. For example, if k = 2, m = 2, n = 1, the two cases b_1, c_1; a_1, a_2; b_2 and b_1, c_1; a_2, a_1; b_2 are not distinguished; but the two cases b_1, c_1; a_1, a_2; b_2 and b_2, c_1; a_1, a_2; b_1 are treated as different.
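The question can be answered by exhaustive search for small boxes. The sketch below (a brute-force enumeration, so feasible only for a handful of objects) counts assignments of labeled objects to new boxes, with order inside a box ignored, exactly as in the convention above:

```python
from itertools import product

def box_moves(sizes):
    """Count moves in which every labeled object leaves its starting box and
    each box ends up with as many objects as it started with; places inside
    a box are not distinguished, so a move is just a map object -> new box."""
    home = [i for i, s in enumerate(sizes) for _ in range(s)]  # starting box of each object
    total = 0
    for target in product(range(len(sizes)), repeat=len(home)):
        if all(t != h for t, h in zip(target, home)) and \
           all(target.count(i) == s for i, s in enumerate(sizes)):
            total += 1
    return total

# k = m = n = 1 is the classical derangement problem for three objects:
assert box_moves([1, 1, 1]) == 2
# the k = 2, m = 2, n = 1 example discussed above:
assert box_moves([2, 2, 1]) == 4
```

When every box holds a single object this reduces to the classical derangement numbers; for instance box_moves([1, 1, 1, 1]) gives 9, the number of derangements of four objects.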

 

 

This was pointed out by J. Gillis after these lectures had been given. G. Andrews showed me how to derive this using MacMahon's master theorem. Much more is true. The integrals

 

give the number of derangements (or displacements) of n_1 + … + n_N objects when there are n_1 of the first type, n_2 of the second, …, n_N of the Nth type. For the generating function of the integrals (5.20) and of the derangements is

 

J. Gillis and S. Even independently found a proof of this formula using generating functions and recurrence relations. The classical problem of derangements of n different objects is easily solved by this formula, for it is

 

See Gillis-Weiss [1] for recurrence relations connecting adjacent integrals of the form (5.17), and MacMahon [1, § III, Chaps. II, III and IV] for other combinatorial results. One of them is equivalent to

 

 

a result which I find far from obvious.

 

where the p_i are the elementary symmetric functions defined by

 

(see MacMahon [1, § 68] for a derivation of the combinatorial generating function). The orthogonal polynomial generating function can easily be derived by use of

 

 

and the integral can easily be evaluated by integration by parts. For

 

It seems likely that other interesting combinatorial connections can be developed for the corresponding sum (5.19) of Meixner polynomials.

So far we have treated the classical orthogonal polynomials as hypergeometric functions. Nowhere have we really used orthogonality except in the most elementary way. Now we shall treat orthogonal polynomials directly, and thus the results in the rest of this lecture are not restricted to the classical polynomials.

Polynomials orthogonal on an infinite set satisfy

(5.21)    x p_n(x) = p_{n+1}(x) + a_n p_n(x) + b_n p_{n−1}(x),

p_0(x) = 1, p_1(x) = x + c, c and a_n real and b_n > 0, n = 1, 2, …. Favard [1] (see also Atkinson [1], Freud [1]) proved the converse, that a set of polynomials which satisfies (5.21) is orthogonal with respect to a positive measure. This measure is not necessarily unique because a moment problem does not necessarily have a unique solution. The measure has bounded support if and only if a_n and b_n are bounded, and in this case the measure is uniquely determined. When the polynomials are orthogonal on a bounded set, if the linearization coefficients are nonnegative, and if a slight boundedness condition is satisfied, then there is a natural Banach algebra associated with the polynomials which is analogous to l^1 of the circle (see Askey [4], Askey-Wainger [2], Askey-Gasper [1], Gasper [3] and Igari-Uno [1] for examples).
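Favard's theorem can be illustrated numerically. Assuming (5.21) in the monic form x p_n(x) = p_{n+1}(x) + a_n p_n(x) + b_n p_{n−1}(x), the functional L with L(p_0) = 1, L(p_n) = 0 for n ≥ 1 has moments μ_k = L(x^k), and positivity of the Hankel determinants det(μ_{i+j}) is the positivity that b_n > 0 guarantees and that produces the measure. A sketch with the choice a_n = 0, b_n = n/2 (these happen to be the monic Hermite coefficients, used purely as an illustration):

```python
from fractions import Fraction as F

def monic_polys(M, a, b):
    # p_0 = 1, p_1 = x - a(0); x p_n = p_{n+1} + a(n) p_n + b(n) p_{n-1}
    ps = [[F(1)], [F(-a(0)), F(1)]]
    for n in range(1, M):
        nxt = [F(0)] + ps[n]                     # multiply p_n by x
        for i, ci in enumerate(ps[n]):
            nxt[i] -= a(n) * ci
        for i, ci in enumerate(ps[n - 1]):
            nxt[i] -= b(n) * ci
        ps.append(nxt)
    return ps

def moments(M, a, b):
    # mu_k = L(x^k) where L(p_0) = 1, L(p_n) = 0: expand x^k in the p-basis
    ps = monic_polys(M, a, b)
    mus = []
    for k in range(M + 1):
        q = [F(0)] * k + [F(1)]                  # the polynomial x^k
        for j in range(k, 0, -1):                # strip off multiples of p_j
            c = q[j]                             # p_j is monic
            for i, ci in enumerate(ps[j]):
                q[i] -= c * ci
        mus.append(q[0])                         # what is left on p_0 is L(x^k)
    return mus

def det(mat):
    # fraction-exact determinant by Gaussian elimination
    m = [row[:] for row in mat]
    n, d = len(m), F(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if m[r][i] != 0), None)
        if piv is None:
            return F(0)
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return d

mus = moments(8, lambda n: F(0), lambda n: F(n, 2))
assert mus[2] == F(1, 2) and mus[4] == F(3, 4)   # Gaussian-type moments appear
for size in (1, 2, 3, 4):
    assert det([[mus[i + j] for j in range(size)] for i in range(size)]) > 0
```

The same computation with some b_n < 0 would produce a nonpositive Hankel determinant, which is exactly the obstruction to the existence of a positive measure.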

 

To connect the recurrence relation (5.21) with the problem we have treated, add cp_n(x) to both sides and rewrite the recurrence relation as

(5.22)    p_1(x)p_n(x) = p_{n+1}(x) + (a_n + c)p_n(x) + b_n p_{n−1}(x).

This is just the case m = 1 of (5.2). We shall prove the following theorem (see Askey [4]).

Theorem 5.2. Let p_n(x) satisfy (5.22), n = 1, 2, …, and define the linearization coefficients by (5.2). If a_n ≥ 0, b_n ≥ 0, a_{n+1} ≥ a_n and b_{n+1} ≥ b_n, n = 1, 2, …, then a(k, m, n) ≥ 0.

Proof. By symmetry assume m ≤ n and that a(k, j, l) ≥ 0, j = 0, 1, …, m, j ≤ l. Then

 

The first three terms can be written as sums of p_k(x) times nonnegative coefficients by the induction assumption and the monotonicity of a_k and b_k. Also b_n ≥ 0, so it is sufficient to prove that

 

with c(j) ≥ 0. This follows from another induction. For

 

and all these coefficients are nonnegative by induction and assumptions.

It is possible to state the theorem in a different way so that it is seen to be a discrete analogue of Weinberger's maximum theorem for hyperbolic equations (see Askey [3]).

Surprisingly this simple argument gives a theorem which is strong enough to include most of the examples given above. The two results which are not contained in Theorem 5.2 are Eagleson's result for Krawtchouk polynomials and Gasper's result for Jacobi polynomials when α and β are small. When α ≥ β and α + β ≥ 1 the assumptions of Theorem 5.2 are satisfied. It is likely that there is a stronger theorem than this one which will give the same conclusion for α ≥ β ≥ −½. Recall that Koornwinder was able to modify Weinberger's argument to obtain the dual result under these conditions. However, I have been unable to find such a theorem. Part of the problem comes from the argument, which proves too much. It shows that certain coefficients are positive because they have increased from something which was nonnegative. And this is too strong a conclusion.
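Theorem 5.2 can be tested directly. With a_n = 0 and b_n = n/2 — nonnegative and nondecreasing, so the hypotheses hold; this is the monic Hermite case, chosen here only as an example, and assuming the monic form x p_n = p_{n+1} + a_n p_n + b_n p_{n−1} of the recurrence — the linearization coefficients computed by exact arithmetic are indeed nonnegative:

```python
from fractions import Fraction as F

def monic_polys(M, a, b):
    # x p_n = p_{n+1} + a(n) p_n + b(n) p_{n-1}, monic, as coefficient lists
    ps = [[F(1)], [F(-a(0)), F(1)]]
    for n in range(1, M):
        nxt = [F(0)] + ps[n]                     # multiply p_n by x
        for i, ci in enumerate(ps[n]):
            nxt[i] -= a(n) * ci
        for i, ci in enumerate(ps[n - 1]):
            nxt[i] -= b(n) * ci
        ps.append(nxt)
    return ps

def linearize(m, n, ps):
    # coefficients a(k, m, n) in p_m p_n = sum_k a(k, m, n) p_k
    q = [F(0)] * (m + n + 1)
    for i, ci in enumerate(ps[m]):
        for j, cj in enumerate(ps[n]):
            q[i + j] += ci * cj
    coeffs = [F(0)] * (m + n + 1)
    for k in range(m + n, -1, -1):
        coeffs[k] = q[k]                         # p_k is monic of degree k
        for i, ci in enumerate(ps[k]):
            q[i] -= coeffs[k] * ci
    return coeffs

# a_n = 0, b_n = n/2: nonnegative and nondecreasing, as Theorem 5.2 requires
ps = monic_polys(9, lambda n: F(0), lambda n: F(n, 2))
for m in range(5):
    for n in range(5):
        assert all(c >= 0 for c in linearize(m, n, ps))
```

Swapping in a decreasing sequence b_n falls outside the hypotheses, and the conclusion can then fail or hold for other reasons, which is consistent with the remark above that the argument proves slightly too much.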

There are other interesting sets of polynomials orthogonal on a bounded set to which Theorem 5.2 applies. The most interesting is the set of Pollaczek polynomials (see Szegő [9, Appendix] and references given there). These polynomials satisfy a recurrence relation of the form (5.21) with a_n and b_n rational functions. If the polynomials defined by (5.21) are orthogonal on [−1, 1] and the weight function is positive there, then lim_{n→∞} a_n = 0 and lim_{n→∞} b_n = 1/4 (see Szegő [9, Chap. 12]). In the case of Jacobi polynomials a_n = O(1/n²) and b_n = 1/4 + O(1/n²), while in the case of the Pollaczek polynomials which are not Jacobi polynomials, a_n = c/n + O(1/n²) and b_n = 1/4 + d/n + O(1/n²) with either c ≠ 0 or d ≠ 0. We can ask if this is always true in the following sense. In the case of Jacobi polynomials,

 

while for Pollaczek polynomials which are not Jacobi polynomials, the weight function vanishes so rapidly at x = 1 that

 

One additional reason for thinking this is true is that polynomials grow rapidly at points where the weight function vanishes. And this growth is reflected in the recurrence relation. The difference may come from Σ n⁻² < ∞ and Σ n⁻¹ = +∞.

Interesting integral representations have been found for the Pollaczek polynomials which are close to Legendre polynomials but not for the other polynomials. These should be found and the asymptotics worked out. Also it is likely that Pollaczek polynomials provide a solution to a problem of Turán. This is to find a weight function w(x) > 0, −1 < x < 1, w(x) = 0, |x| > 1 for which the Lagrange interpolation polynomials at the zeros of the corresponding orthogonal polynomials diverge in L^p for all p > 2 for some continuous function. For each fixed p > 2 there is a value of α for which convergence fails when w(x) = (1 − x)^α, and as p → 2, α must increase to infinity (see Askey [8]). This strongly suggests that any of the Pollaczek weight functions will give an example. There are many other problems which these polynomials suggest. Novikoff has never published his thesis [1], but a summary exists in the appendix to Szegő [9], and a copy can be obtained from University Microfilms. It is a good place to start