Chaos and Fractals in Financial Markets

Part 5

by J. Orlin Grabbe

Louis Bachelier Visits the New York Stock Exchange

Louis Bachelier, resurrected for the moment, visited the New York Stock Exchange at the end of May 1999. He was somewhat puzzled by all the hideous concrete barriers around the building at the corner of Broad and Wall Streets. For a moment he thought he was in Washington, D.C., on Pennsylvania Avenue.

Bachelier was accompanied by an angelic guide named Pete. "The concrete blocks are there because of Osama bin Ladin," Pete explained. "He’s a terrorist." Pete didn’t bother to mention the blocks had been there for years. He knew Bachelier wouldn’t know the difference.

"Terrorist?"

"You know, a ruffian, a scoundrel."

"Oh," Bachelier mused. "Bin Ladin. The son of Ladin."

"Yes, and before that, there was Abu Nidal."

"Abu Nidal. The father of Nidal. Hey! Ladin is just Nidal spelled backwards. So we’ve gone from the father of Nidal to the son of backwards-Nidal?"

"Yes," Pete said cryptically. "The spooks are never too creative when they are manufacturing the boogeyman of the moment. If you want to understand all this, read about ‘Goldstein’ and the daily scheduled ‘Two Minutes Hate’ in George Orwell’s book 1984."

"1984? Let’s see, that was fifteen years ago," Bachelier said. "A historical work?"

"Actually, it’s futuristic. But he who controls the present controls the past, and he who controls the past controls the future."

Bachelier was mystified by the entire conversation, but once they got inside and he saw the trading floor, he felt right at home. Buying, selling, changing prices. The chalk boards were now electric, he saw, and that made the air much fresher.

"Look," Bachelier said, "the Dow Jones average is still around!"

"Yes," nodded Pete, "but there are a lot of others also. Like the S&P500 and the New York Stock Exchange Composite Index."

"I want some numbers!" Bachelier exclaimed enthusiastically. Before they left, they managed to con someone into giving them the closing prices for the NYSE index for the past 11 days.

"You can write a book," Pete said. "Call it Eleven Days in May. Apocalyptic titles are all the rage these days—except in the stock market."

Bachelier didn’t pay him any mind. He had taken out a pencil and paper and was attempting to calculate logarithms through a series expansion. Pete watched in silence for a while, before he took pity and pulled out a pocket calculator.

"Let me show you a really neat invention," the angel said.

Bachelier’s Scale for Stock Prices

Here is Bachelier’s data for eleven days in May. We have the calendar date in the first column of the table; the NYSE Composite Average, S(t), in the second column; the log of S(t) in the third column; the change in log prices, x(t) = log S(t) – log S(t-1), in the fourth column; and x(t)^2 in the last column. The sum of the values in the last column is given at the bottom of the table.

Date      S(t)      log S(t)    x(t)        x(t)^2
May 14    638.45    6.459043
May 17    636.92    6.456644    -.002399    .000005755
May 18    634.19    6.452348    -.004296    .000018456
May 19    639.54    6.460749     .008401    .000070577
May 20    639.42    6.460561    -.000188    .000000035
May 21    636.87    6.456565    -.003996    .000015968
May 24    626.05    6.439430    -.017135    .000293608
May 25    617.34    6.425420    -.014010    .000196280
May 26    624.84    6.437495     .012075    .000145806
May 27    614.02    6.420027    -.017468    .000305131
May 28    622.26    6.433358     .013331    .000177716

sum of all x(t)^2 = .001229332

What is the meaning of all this?

The variables x(t), which are the one-trading-day changes in log prices, are the variables in which Bachelier is interested for his theory of Brownian motion as applied to the stock market:

x(t) = log S(t) – log S(t-1).

Bachelier thinks these should have a normal distribution. Recall from Part 4 that a normal distribution has a location parameter m and a scale parameter c. So what Bachelier is trying to do is to figure out what m and c are, assuming that each day’s m and c are the same as any other day’s.

The location parameter m is easy. It is zero, or pretty close to zero.

In fact, it is not quite zero. Essentially there is a drift in the movement of the stock index S(t), given by the difference between the interest rate (such as the broker-dealer loan rate) and the dividend yield on stocks in the average.[1] But this is tiny over our eleven trading days (which gives us ten values for x(t)). So Bachelier just assumes m is zero.

So what Bachelier is doing with the data is trying to estimate c.

Recall from Part 2 that if today’s price is P, Bachelier modeled the probability interval around the log of the price change by

(log P – a T^0.5, log P + a T^0.5), for some constant a.

But now, we are writing our stock index price as S, not P; and the constant a is just our scale parameter c. So, changing notation, Bachelier is interested in the probability interval

(log S – c T^0.5, log S + c T^0.5), for a given scale parameter c.

One way of estimating the scale c (c is also called the "standard deviation" in the context of the normal distribution) is to add up all the squared values of x(t), and take the average (by dividing by the number of observations). This gives us an estimate of the variance, or c2. Then we simply take the square root to get the scale c itself. (This is called a maximum likelihood estimator for the standard deviation.)

Adding up the terms in the right-hand column in the table gives us a value of .001229332. And there are 10 observations. So we have

variance = c^2 = .001229332/10 = .0001229332.

Taking the square root of this, we have

standard deviation = c = (.0001229332)^0.5 = .0110875.
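For readers who want to reproduce the arithmetic, here is a minimal Python sketch of the same estimate, using the closing prices from the table above (the variable names are illustrative, not part of Bachelier’s story):

    import math

    # NYSE Composite closes for the eleven days in May, from the table above
    closes = [638.45, 636.92, 634.19, 639.54, 639.42, 636.87,
              626.05, 617.34, 624.84, 614.02, 622.26]

    # x(t) = log S(t) - log S(t-1): the one-trading-day changes in log prices
    x = [math.log(b) - math.log(a) for a, b in zip(closes, closes[1:])]

    # Maximum likelihood estimate of the variance (mean squared change, taking m = 0)
    variance = sum(xi ** 2 for xi in x) / len(x)
    c = math.sqrt(variance)          # the scale, roughly .0110875

    print(round(variance, 10), round(c, 7))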

So Bachelier’s changing probability interval for log S becomes:

(log S – .0110875 T^0.5, log S + .0110875 T^0.5).

To get the probability interval for the price S itself, we just take exponentials (raise e = 2.718281… to the power of each endpoint), and get

( S exp(– .0110875 T^0.5), S exp(.0110875 T^0.5) ).

Since the current price on May 28, from the table, is 622.26, this interval becomes:

(622.26 exp(– .0110875 T^0.5), 622.26 exp(.0110875 T^0.5) ).

"This expression for the probability interval tells us the probability distribution over the next T days," Bachelier explained to Pete. "Now I understand what you meant. He who controls the present controls the past, because he can obtain past data. While he who masters this past data controls the future, because he can calculate future probabilities!"

"Umm. That wasn’t what I meant," the angel replied. "But never mind."

Over the next 10 trading days, we have T^0.5 = 10^0.5 = 3.162277. So substituting that into the probability interval for price, we get

(622.26 (.965545), 622.26 (1.035683)) = (600.82, 644.46).

This probability interval gives a price range of plus or minus one scale parameter c (in logs). For the normal distribution, that corresponds to 68 percent probability. With 68 percent probability, the price will lie between 600.82 and 644.46 at the end of 10 more trading days, according to this calculation.

To get a 95 percent probability interval, we use plus or minus 2c,

(622.26 exp(– (2) .0110875 T^0.5), 622.26 exp( (2) .0110875 T^0.5) ),

which gives us a price interval over 10 trading days of

(580.12, 667.46).
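The same interval arithmetic can be sketched in a few lines of Python, reusing the scale c = .0110875 estimated above (a rough illustration, not a trading tool):

    import math

    S = 622.26       # closing price on May 28
    c = 0.0110875    # daily scale estimated from the ten observations
    T = 10           # trading days ahead

    for k, prob in [(1, "68 percent"), (2, "95 percent")]:
        half_width = k * c * math.sqrt(T)
        low, high = S * math.exp(-half_width), S * math.exp(half_width)
        print(f"{prob} interval over {T} days: ({low:.2f}, {high:.2f})")
    # prints roughly (600.82, 644.46) and (580.12, 667.46)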

Volatility

In the financial markets, the scale parameter c is often called "volatility". Since a normal distribution is usually assumed, "volatility" refers to the standard deviation.

Here we have measured the scale c, or volatility, on a basis of one trading day. The value of c we calculated, c = .0110875, was calculated over 10 trading days, so it would be called in the markets "a 10-day historical volatility." If calculated over 30 past trading days, it would be "a 30-day historical volatility."

However, market custom would dictate two criteria by which volatility is quoted:

  1. quote volatility at an annual (not daily) rate;

  2. quote volatility in percentage (not decimal) terms.

To change our daily volatility c = .0110875 into annual terms, we note that there are about 256 trading days in the year. The square root of 256 is 16, so to change daily volatility into annual volatility, we simply multiply it by 16:

annual c = 16 (daily c) = 16 (.0110875) = .1774.

Then we convert this to percent (by multiplying by 100 and calling the result "percent"):

annual c = 17.74 percent.

The New York Stock Exchange Composite Index had a historical volatility of 17.74 percent over the sample period during May.

Note that an annual volatility of 16 percent corresponds to a daily volatility of 1 percent. This is a useful relationship to remember, because we can look at a price or index, mentally divide by 100, and say the price change will fall in the range of plus or minus that amount with 2/3 probability (approximately). For example, if the current gold volatility is 16 percent, and the price is $260, we can say the coming day’s price change will fall in the range of plus or minus $2.60 with about 2/3 probability.
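Here is a small Python sketch of the annualization and of the rule of thumb, using the same illustrative gold figures as above:

    import math

    TRADING_DAYS = 256                             # approximate trading days per year
    daily_c = 0.0110875
    annual_c = daily_c * math.sqrt(TRADING_DAYS)   # i.e., multiply by 16
    print(f"annual volatility: {100 * annual_c:.2f} percent")   # about 17.74 percent

    # Rule of thumb: 16 percent annual volatility = 1 percent daily volatility
    gold_price, gold_annual_vol = 260.0, 0.16
    daily_range = gold_price * gold_annual_vol / 16
    print(f"one-day range: plus or minus ${daily_range:.2f}")   # about $2.60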

Notice that 256 trading days give us a probability interval that is only 16 times as large as the probability interval for 1 day. This translates into a Hausdorff dimension for time (in the probability calculation) as D = log(16)/log(256) = ½ or 0.5, which is just the Bachelier-Einstein square-root-of-T (T^0.5) law.

The way we calculated the scale c is called "historical volatility," because we used actual historical data to estimate c. In the options markets, there is another measure of volatility, called "implied volatility." Implied volatility is found by back-solving an option valuation formula for the volatility c that reproduces the current option price. Hence this volatility, which pertains to the future (specifically, to the remaining life of the option), is implied by the price at which the option is trading.

Fractal Sums of Random Variables

Now for the fun part. We have been looking at random variables x(t) (representing changes in the log of price).

Under the assumption that these random variables are normal, we estimated a scale parameter c, which allows us to do probability calculations.

In order to estimate c, we took the sums of random variables (or, in this instance, the sums of squares of x(t)).

Were our calculations proper and valid? Do they make any sense? The answer to these questions depends on the issue of the probability distribution of a sum of random variables. How does the distribution of the sum relate to the distributions of the individual random variables that are added together?

In answering this question we want to focus on ways we can come up with a location parameter m and a scale parameter c. For the normal distribution, m is the mean, but for the Cauchy distribution the mean doesn’t exist ("is infinite"). For the normal distribution, the scale parameter c is the standard deviation, but for the Cauchy distribution the standard deviation doesn’t exist. Nevertheless, a location m and a scale c exist for the Cauchy distribution. The maximum likelihood estimator for c will not be the same in the case of the Cauchy distribution as it was for the normal. We can’t use a sum-of-squares estimator if the x(t) have a Cauchy distribution.

Suppose we have n random variables Xi, all with the same distribution, and we calculate their sum X:

X = X_1 + X_2 + … + X_(n-1) + X_n.

Does the distribution of the sum X have a simple form? In particular, can we relate the distribution of X to the common distribution of the X_i? Let’s be even more specific. We have looked at the normal (Gaussian) and Cauchy distributions, both of which were parameterized with a location m and scale c. If each of the X_i has a location m and scale c, whether normal or Cauchy, can that information be translated into a location and a scale for the sum X?

The answer to all these questions is yes, for a class of distributions called stable distributions. (They are also sometimes called "Levy stable", "Pareto-Levy", or "stable Paretian" distributions.) Both the normal and the Cauchy are stable distributions. But there are many more.

We will use the notation "~" as shorthand for "has the same distribution as." For example,

X_1 ~ X_2

means X_1 and X_2 have the same distribution. We now use "~" in the following definition of stable distributions:

Definition: A random variable X is said to have a stable distribution if for any n >= 2 (greater than or equal to 2), there is a positive number C_n and a real number D_n such that

X_1 + X_2 + … + X_(n-1) + X_n ~ C_n X + D_n

where X_1, X_2, …, X_n are all independent copies of X.

Think of what this definition means. If their distribution is stable, then the sum of n identically distributed random variables has the same distribution as any one of them, apart from multiplication by a scale factor C_n and a further shift by a location D_n.

Does this remind you of fractals? Fractals are geometrical objects that look the same at different scales. Here we have random variables whose probability distributions look the same at different scales (except for the additive factor D_n).

Let’s define two more terms.[2]

Definition: A stable random variable X is strictly stable if D_n = 0.

So strictly stable distributions are clearly fractal in nature, because the sum of n independent copies of the underlying distribution looks exactly the same as the underlying distribution itself, once adjusted by the scale factor C_n. One type of strictly stable distribution is the symmetric stable distribution.

Definition: A stable random variable X is symmetric stable if its distribution is symmetric—that is, if X and -X have the same distribution.

The scale parameter C_n necessarily has the form [3]:

C_n = n^(1/α), where 0 < α <= 2.

So if we have n independent copies of a symmetric stable distribution, their sum has the same distribution with a scale that is n^(1/α) times as large.

For the normal or Gaussian distribution, α = 2. So for n independent copies of a normal distribution, their sum has a scale that is n^(1/α) = n^(1/2) times as large.

For the Cauchy distribution, α = 1. So for n independent copies of a Cauchy distribution, their sum has a scale that is n^(1/α) = n^(1/1) = n times as large.

Thus if, for example, Brownian particles had a Cauchy distribution, they would scale not according to a T^0.5 law, but rather according to a T law!

Notice that we can also calculate a Hausdorff dimension for symmetric stable distributions. If we divide a symmetric stable random variable X by a scale factor of c = n^(1/α), we get the probability equivalent [4] of N = n copies of X/n^(1/α). So the Hausdorff dimension is

D = log N / log c = log n / log(n^(1/α)) = α.

This gives us a simple interpretation of α. The parameter α is simply the Hausdorff dimension of a symmetric stable distribution. For the normal, the Hausdorff dimension is equal to 2, equivalent to that of a plane. For the Cauchy, the Hausdorff dimension is equal to 1, equivalent to that of a line. In between is a full range of values, including symmetric stable distributions with Hausdorff dimensions equivalent to the Koch Curve (log 4/log 3) and the Sierpinski Carpet (log 8/log 3).
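This scaling behavior is easy to check numerically. The rough Monte Carlo sketch below (it assumes the numpy library is available) measures scale by the interquartile range, since the Cauchy has no standard deviation. The sum of 100 normal variables should come out about 10 times as spread out as a single one, while the sum of 100 Cauchy variables should come out about 100 times as spread out:

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 100, 50_000

    def iqr(a):
        """Interquartile range: a scale measure that exists even for the Cauchy."""
        q75, q25 = np.percentile(a, [75, 25])
        return q75 - q25

    for name, sampler in [("normal", rng.standard_normal), ("cauchy", rng.standard_cauchy)]:
        single = sampler(trials)                    # one copy of X
        summed = sampler((trials, n)).sum(axis=1)   # sum of n independent copies
        ratio = iqr(summed) / iqr(single)           # empirical C_n
        print(f"{name}: scale grows by about {ratio:.1f} (n^(1/2) = {n ** 0.5:.1f}, n = {n})")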

Some Fun with Logistic Art

Now that we’ve worked our way to the heart of the matter, let’s take a break from probability theory and turn our attention again to dynamical systems. In particular, let’s look at our old friend the logistic equation:

x(n+1) = k x(n) [1 – x(n)],

where x(n) is the input variable, x(n+1) is the output variable, and k is a constant.

In Part 1, we looked at a particular version of this equation where k = 4. In general, k takes values 0 < k <= 4.

The dynamic behavior of this equation depends on the value k, and also on the particular starting value or starting point, x(0). Later in this series we will examine how the behavior of this equation changes as we change k. But not now.

Instead, we are going to look at this equation when we substitute a complex variable z for the real variable x:

z(n+1) = k z(n) [1 – z(n)].

Complex numbers z have the form

z = x + i y,

where i is the square root of minus one. Complex numbers are normally graphed in a plane, with x on the horizontal ("real") axis, while y is on the vertical ("imaginary") axis.

That means when we iterate z, we actually iterate two values: x in the horizontal direction, and y in the vertical direction. The complex logistic equation is:

x + i y = k (x + i y) [ 1 – (x + i y)].

(Note that I have dropped the notation x(n) and y(n) and just used x and y, to make the equations easier to read. But keep in mind that x and y on the left-hand side of the equation represent output, while the x and y on the right-hand side of the equation represent input.)

The output x, the real part of z, is composed of all the terms that do not multiply i, while the output y, the imaginary part of z, is made up of all the terms that multiply i.

To complete the transformation of the logistic equation, we let k be complex also, and write

k = A + B i,

giving as our final form:

x + i y = (A + B i) (x + i y) [ 1 – (x + i y)].

Now we multiply this all out and collect terms. The result is two equations in x and y:

x = A (x - x^2 + y^2) + B (2xy - y)
y = B (x - x^2 + y^2) - A (2xy - y).
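As a quick check on this algebra, the short Python fragment below compares the expanded x and y equations with a direct complex multiplication; the particular values of A, B, x, and y are arbitrary test inputs:

    # One step of z(n+1) = k z(n) [1 - z(n)], done two ways
    A, B = 1.678, 0.95
    x, y = 0.3, -0.2

    k, z = complex(A, B), complex(x, y)
    z_next = k * z * (1 - z)                          # direct complex arithmetic

    x_next = A * (x - x**2 + y**2) + B * (2*x*y - y)  # expanded real part
    y_next = B * (x - x**2 + y**2) - A * (2*x*y - y)  # expanded imaginary part

    print(z_next, complex(x_next, y_next))            # the two results should agree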

As in the real version of the logistic equation, the behavior of the equation depends on the multiplier k = A + B i (that is, on A and B), as well as the initial starting value of z = x + i y (that is, on x(0) and y(0) ).

Julia Sets

Depending on k, some beginning values z(0) = x(0) + i y(0) blow off to infinity after a certain number of iterations. That is, the output values of z keep getting larger and larger, diverging to infinity. As z is composed of both an x term and a y term, we use as the criterion for "getting large" the value of

x^2 + y^2.

The square root of this number is called the modulus of z, and represents the length of a vector from the origin (0,0) to the point z = (x,y). In the iterations we are about to see, the criterion to determine if the equation is diverging to infinity is

x^2 + y^2 > 4,

which implies the modulus of z is greater than 2.

When the equation is iterated, some starting values diverge to infinity and some don’t. The Julia set is the set of starting values for z that remain finite under iteration. That is, the Julia set is the set of all starting values (x(0), y(0)) such that the equation output does not blow off to infinity as the equation is iterated.

Each value for k will produce a different Julia set (i.e., a different set of (x(0) ,y(0) ) values that do not diverge under iteration).

Let’s do an example. Let k = 1.678 + .95 i. That is, A = 1.678 and B = .95. We let starting values for x(0) range from –.5 to 1.5, while letting starting values for y(0) range from -.7 to +.7.

We keep k constant always, so we are graphing the Julia set associated with k = 1.678 + .95 i.

We iterate the equation 256 times. If, at the end of 256 iterations, the modulus of z is not greater than 2, we paint the starting point (x(0), y(0)) black. So the entire Julia set in this example is colored black. If the modulus of z exceeds 2 during the iterations, the starting point (x(0), y(0)) is assigned a color depending on the rate the equation is blowing off to infinity.
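The Java demonstrations referred to below are not reproduced here, but the escape-time logic behind this first color scheme can be sketched in a few lines of Python (the grid size is an arbitrary choice, and this is only an illustration of the idea, not the original applet). It assumes the numpy library is available:

    import numpy as np

    A, B = 1.678, 0.95                      # k = 1.678 + .95 i
    k = complex(A, B)
    width, height, max_iter = 200, 140, 256

    xs = np.linspace(-0.5, 1.5, width)      # starting values x(0)
    ys = np.linspace(-0.7, 0.7, height)     # starting values y(0)

    # 0 marks points whose iterates never exceed modulus 2 (the Julia set, painted black);
    # a positive entry records how quickly the starting point escaped.
    escape = np.zeros((height, width), dtype=int)
    for row, y0 in enumerate(ys):
        for col, x0 in enumerate(xs):
            z = complex(x0, y0)
            for i in range(max_iter):
                z = k * z * (1 - z)
                if z.real**2 + z.imag**2 > 4:   # modulus of z greater than 2: diverging
                    escape[row, col] = i + 1
                    break

    print("starting points in the Julia set:", int((escape == 0).sum()))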

To see the demonstration, be sure Java is enabled on your web browser and click here.

We can create a plot that looks entirely different by making a different color assignment. For the next demonstration, we again iterate the dynamical system 256 times for different starting values z(0). If, during the iterations, the modulus of z exceeds 2, then we know the iterations are diverging, so we plot the starting value z(0) = (x(0), y(0)) black. Hence the black region of the plot is made up of all the points not in the Julia set. For the Julia set itself, we assign bright colors. The color assigned depends on the value of z after 256 iterations. For example, if the square of the modulus of z is greater than .6, but less than .7, the point z(0) is assigned a light red color. Hence the colors in the Julia set indicate the value of the modulus of z at the end of 256 iterations.

To see the second demonstration of the same equation, but with this alternative color mapping, be sure Java is enabled on your web browser and click here.

So, from the complex logistic equation, a dynamical system, we have created a fractal. The border of the Julia set is determined by k in the equation; this border lives in a Euclidean space of 2 dimensions, has a topological dimension of 1, but has a Hausdorff dimension that lies between these two numbers.

Meanwhile, we have passed from mathematics to art. Or maybe the art was there all along. We just had to learn how to appreciate it.

Notes

[1] This is the stock market equivalent of the Interest Parity Theorem that relates the forward price F(t+T) of a currency, T-days in the future, to the current spot price S(t). In the foreign exchange market, the relationship can be written as:

F(t+T) = S(t) [1 + r (T/360)] / [1 + r* (T/360)]

where r is the domestic interest rate (say the dollar interest rate), and r* is the foreign interest rate (say the interest rate on the euro). S is then the spot dollar price of the euro, and F is the forward dollar price of the euro.

We can also use this equation to give us the forward value F of a stock index in relation to its current value S, in which case r* must be the dividend yield on the stock index.

(A more precise calculation would disaggregate the "dividend yield" into the actual days and amounts of dividend payments.)
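As an illustration only, the relation can be written as a small Python function; the rates and dividend yield in the example below are made-up numbers:

    def forward_value(spot, r_domestic, r_yield, days):
        """Forward value F(t+T) from spot S(t), using the 360-day convention of note [1]."""
        return spot * (1 + r_domestic * days / 360) / (1 + r_yield * days / 360)

    # e.g., an index at 622.26, a 5% broker loan rate, a 1.5% dividend yield, 90 days out
    print(round(forward_value(622.26, 0.05, 0.015, 90), 2))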

This relationship is explored at length in Chapter 5, "Forwards, Swaps, and Interest Parity," in J. Orlin Grabbe, International Financial Markets, 3rd edition, Prentice-Hall, 1996.

[2] The definitions here follow those in Gennady Samorodnitsky and Murad S. Taqqu, Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance, Chapman & Hall, New York, 1994.

[3] This is Theorem VI.1.1 in William Feller, An Introduction to Probability Theory and Its Applications, Vol 2, 2nd ed., Wiley, New York, 1971.

[4] If Y = X/n^(1/α), then for n independent copies of Y,

Y_1 + Y_2 + … + Y_(n-1) + Y_n ~ n^(1/α) Y = n^(1/α) (X/n^(1/α)) = X.


J. Orlin Grabbe is the author of International Financial Markets, and is an internationally recognized derivatives expert. He has recently branched out into cryptology, banking security, and digital cash. His home page is located at http://www.aci.net/kalliste/homepage.html .

-30-

from The Laissez Faire City Times, Vol 3, No 29, July 19, 1999