2.4. Metric entropy

In 1958 Kolmogorov introduced a new metric invariant called entropy which enabled isomorphisms between dissimilar Bernoulli shifts to be established. The Kolmogorov (K-) or metric entropy is a means of quantifying the type of chaotic or sensitive behaviour displayed in dynamical systems like (1.1) or (2.3).

If a mapping T possesses a smooth invariant measure μ then the pair (T, μ) constitutes a measurable dynamical system. The metric entropy describes the behaviour of the iterates of T by an invariant which combines aspects of the mapping's instability with its probabilistic aspects. Suppose we have a discrete one-dimensional system T: [0, 1] → [0, 1],

$$x_{t+1} = T(x_t), \qquad (2.25)$$

which could be the Poincaré return mapping of some two-dimensional flow.

† A weaker property, weak mixing, requires only that

$$\lim_{n\to\infty}\frac{1}{n}\sum_{j=0}^{n-1}\bigl|\mu(T^{-j}A\cap B)-\mu(A)\mu(B)\bigr| = 0.$$

If there is sensitive dependence on initial conditions, neighbouring trajectories (solutions) will diverge exponentially under iteration of T. This process wipes out the memory of the initial conditions, as in (1.1). If neighbouring trajectories diverge like exp(ht) on the average (taken over the phase space) the K-entropy is just h. If h > 0 then chaotic behaviour occurs.
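To make this concrete, here is a minimal numerical sketch (ours, not part of the original text): two trajectories of the logistic map x_{t+1} = 4x_t(1 − x_t), used here simply as a convenient stand-in for T, start a distance 10⁻¹² apart and separate on the average like exp(ht) with h = ln 2.

```python
import math

# Two nearby trajectories of T(x) = 4x(1-x); their separation grows on
# average like dx0 * exp(h*t) with h = ln 2, until it reaches the size of
# the phase space (order 1) and the exponential description breaks down.
def T(x):
    return 4.0 * x * (1.0 - x)

h = math.log(2.0)
dx0 = 1e-12
x, y = 0.3, 0.3 + dx0

for t in range(1, 41):
    x, y = T(x), T(y)
    print(f"t={t:2d}  |dx_t| = {abs(y - x):.3e}   dx0*exp(ht) = {dx0 * math.exp(h * t):.3e}")
```

The measured separation fluctuates about the exponential estimate and saturates once it becomes comparable with the length of the interval.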

If two solution trajectories are initially separated by δx_0 then their subsequent separation can be represented formally at time t by

$$\delta x_t = \delta x_0\,\exp(ht). \qquad (2.24)$$

Suppose that the trajectory position at time t is given by x_t and is linked to the initial state by an evolution operator U^t,

$$x_t = U^t x_0, \qquad (2.26)$$

where U^t is defined by its action on any function ψ(x),

$$U^t\,\psi(x_0) = \psi(x_t) = \psi(T^t x_0). \qquad (2.27)$$
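As an aside (our illustration, not from the text), the action of U^t on observables is just composition with the iterated map, which can be written directly in code; the function names below are ours and the logistic map is again used as an example of T.

```python
import math

# (U^t psi)(x0) = psi(x_t) = psi(T^t(x0)): the evolution operator acts on
# observables by composing them with the iterated map.
def T(x):
    return 4.0 * x * (1.0 - x)   # illustrative choice of a map on [0, 1]

def iterate(x0, t):
    """Return x_t = T^t(x0)."""
    x = x0
    for _ in range(t):
        x = T(x)
    return x

def U(psi, t):
    """Return the observable U^t psi."""
    return lambda x0: psi(iterate(x0, t))

psi = math.sin
print(U(psi, 5)(0.3), math.sin(iterate(0.3, 5)))   # equal by construction
```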

Now define the average rate of divergence of neighbouring solution trajectories as

$$h = \lim_{t\to\infty}\ \lim_{\delta x_0\to 0}\ \frac{1}{t}\,\Bigl\langle\ln\Bigl|\frac{\delta x_t}{\delta x_0}\Bigr|\Bigr\rangle \qquad (2.29)$$

where ⟨· · ·⟩ is the expectation over the initial ensemble of possibilities for x_0. That is, (2.29) indicates that 'on the average' near-by trajectories in phase space diverge as exp(ht). For the mapping (2.25) the separation of trajectories is governed by the derivative of the iterated map,

$$\delta x_t = \frac{dT^t(x_0)}{dx_0}\,\delta x_0 = \Bigl[\prod_{i=0}^{t-1} T'(x_i)\Bigr]\,\delta x_0, \qquad (2.30)$$

so from (2.25)-(2.30) we have, using the chain rule, that

$$h = \lim_{t\to\infty}\frac{1}{t}\sum_{i=0}^{t-1}\bigl\langle\ln|T'(x_i)|\bigr\rangle \qquad (2.31)$$

where we have averaged over the invariant measure preserved by T (solution of (2.10)); then

$$h(T, \mu) = \int_0^1 \ln\Bigl|\frac{dT(x)}{dx}\Bigr|\,d\mu(x). \qquad (2.32)$$

This is the metric or K-entropy of the one-dimensional mapping (2.25). Analogous results hold for higher dimensional mappings.†

† Note that (2.24) only makes sense for trajectories separated by distances less than the extent of the phase space, |δx_t| < 1 here. This separation will result after an evolution time ~ h⁻¹.

The simple example (1.1) preserves Lebesgue measure because it is linear, and h(μ, T) = ln 2.
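For a quick numerical check of (2.31)-(2.32) (our sketch, not part of the original text), the time average of ln|T′(x_i)| along a typical orbit of the logistic map x_{t+1} = 4x_t(1 − x_t) converges to its K-entropy, which is known to equal ln 2:

```python
import math

# Estimate h for T(x) = 4x(1-x) from the time average in (2.31):
#   h ~ (1/N) sum_i ln|T'(x_i)|,   with T'(x) = 4 - 8x.
def T(x):
    return 4.0 * x * (1.0 - x)

def dT(x):
    return 4.0 - 8.0 * x

x = 0.123456                     # typical (non-periodic) initial condition
for _ in range(1000):            # discard the transient
    x = T(x)

N, total = 200_000, 0.0
for _ in range(N):
    total += math.log(abs(dT(x)))
    x = T(x)

print("estimated h =", total / N)    # ~ 0.693
print("ln 2        =", math.log(2.0))
```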

Formal chaos is said to exist in a system with h > 0 and it will be isomorphic [17] to a Bernoulli process having the same entropy. Such systems, although deterministic, are not predictive because of their sensitivity to initial data. If (T, μ) and (T̃, μ̃) are two dynamical systems with T: I → I and T̃: Ĩ → Ĩ they will be called isomorphic if there exists a one-to-one map g: I → Ĩ such that

$$g\circ T = \tilde T\circ g \qquad\text{and}\qquad \tilde\mu(A) = \mu\bigl(g^{-1}A\bigr)\ \ \text{for every measurable set } A.$$

In this case it can be shown from (2.32) that h(T, μ) = h(T̃, μ̃).
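A standard worked example of such an isomorphism (added here for illustration; it is not taken from the text) is the doubling map on [0, 1) with Lebesgue measure and the Bernoulli (1/2, 1/2) shift, which share the entropy ln 2:

```latex
% T(x) = 2x (mod 1) on [0,1) with Lebesgue measure; g sends x to its binary digits.
\[
  x = \sum_{i\ge 1} a_i 2^{-i}, \qquad g(x) = (a_1, a_2, a_3, \dots), \qquad a_i \in \{0, 1\},
\]
\[
  g \circ T = \sigma \circ g \quad (\sigma = \text{the left shift}), \qquad
  h(T, \mu_{\mathrm{Leb}}) = h(\sigma, \tilde\mu_{1/2,1/2}) = \ln 2 .
\]
```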


Another interesting way of viewing the metric entropy is from the perspective of information theory [18]. During the evolution of systems with non-zero entropy, information is lost at each time step as the initial conditions wear off. If π(y, x) is the joint probability that T^{n−1}(x_0) = x and T^n(x_0) = y then the information loss during the nth iteration is given by the Shannon [19] formula

$$\Delta I = -\sum_{x,\,y}\pi(y, x)\,\ln\!\Bigl[\frac{\pi(y, x)}{\pi(x)}\Bigr].$$

Taking the expectation of this information loss over the invariant measure, μ(x), preserved by T we obtain just the metric entropy h of (2.32).

Memory of the initial conditions (n = 0) is completely lost after roughly h⁻¹ iterations. Any process described by (T, μ) can reasonably be called deterministic if a knowledge of all its past states {x_t} would allow almost all the future outputs to be predicted with probability one. The dynamical system (T, μ) would not be predictive in this sense unless h = 0.
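To illustrate the information-theoretic reading (a sketch of ours, not from the original text), one can estimate the mean information loss per iteration of the logistic map from the Shannon formula on a coarse partition of [0, 1]; as the partition is refined the estimate approaches h = ln 2.

```python
import math
from collections import Counter

# Estimate the information lost per iteration of T(x) = 4x(1-x) using the
# Shannon formula on a partition of [0,1] into M equal bins:
#   dI = -sum_{i,j} p(i,j) ln[ p(i,j) / p(i) ],
# where p(i,j) is the empirical joint probability that x_n is in bin i and
# x_{n+1} is in bin j, and p(i) is the marginal probability of the current bin.
def T(x):
    return 4.0 * x * (1.0 - x)

M, N = 4096, 1_000_000
x = 0.123456
for _ in range(1000):                      # discard the transient
    x = T(x)

joint, marginal = Counter(), Counter()
for _ in range(N):
    y = T(x)
    i, j = min(int(x * M), M - 1), min(int(y * M), M - 1)
    joint[(i, j)] += 1
    marginal[i] += 1
    x = y

dI = 0.0
for (i, j), c in joint.items():
    p_ij = c / N
    dI -= p_ij * math.log(p_ij / (marginal[i] / N))

print("information loss per iteration ~", dI)   # close to ln 2 ~ 0.693
```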