Dorsey Wright says:

“Chart from The Leuthold Group:

Correlation From the Archives: Inherently Unstable Correlations

If you are trying to use this data, would you conclude that higher bond yields are good for the stock market or bad? The answer is that the correlations are all over the map. In 2006, William J. Coaker II published The Volatility of Correlations in the FPA Journal. That paper details the changes in correlations between 15 different asset classes and the S&P 500 over a 34-year time horizon. To give you a flavor of his conclusions, he pointed out that Real Estate’s rolling 5-year correlations to the S&P 500 ranged from 0.17 to 0.75, and for Natural Resources the range was -0.34 to 0.49. History is conclusive: correlations are unstable.

This becomes a big problem for strategic asset allocation models that use historical data to calculate an average correlation between securities or asset classes over time. Those models use that stationary correlation as one of the key inputs in determining how the model should currently be allocated. That may well be of no help to you over the next five to ten years. Unstable correlations are also a major problem for “financial engineers” who use their impressive physics and computer programming abilities to identify historical relationships between securities. They may find patterns in the historical data that lead them to seek to exploit those same patterns in the future (e.g., LTCM in the 1990s). The problem is that the future is under no obligation to behave like the past.

Many of the quants are smart enough to recognize that unstable correlations are a major problem. The solution, which I have heard from several well-known quants, is to be willing to constantly reexamine your assumptions and to change the model on an ongoing basis. That logic may sound intelligent, but the reality is that many, if not most, of these quants will end up chasing their tails. Ultimately, they end up in the forecasting game. These quants are rightly worried about when their current model is going to blow up.”
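Rolling correlations of the kind Coaker reports take only a few lines to compute. Here is a minimal sketch on made-up annual return series; the series names and numbers are illustrative only, not his data:

```python
import numpy as np

# Made-up annual returns for 34 years (roughly the horizon Coaker studied).
rng = np.random.default_rng(0)
sp500 = rng.normal(0.08, 0.15, 34)                    # illustrative S&P 500 returns
real_estate = 0.5 * sp500 + rng.normal(0.06, 0.12, 34)  # partly related second series

# Rolling 5-year correlation: one Pearson correlation per 5-year window.
window = 5
rolling = [np.corrcoef(sp500[i:i + window], real_estate[i:i + window])[0, 1]
           for i in range(len(sp500) - window + 1)]

print(f"rolling 5-year correlation ranges from {min(rolling):.2f} to {max(rolling):.2f}")
```

Even with a fixed underlying relationship, the windowed estimates wander over a wide range, which is the instability the quote describes.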

Actually, there is even more to it than that. If you run correlations based on daily, weekly, and monthly data, you can get numbers that differ even in sign, using the same two data series over the same time period.
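A toy construction makes this concrete. The two daily return series below are entirely made up (not the ETF data mentioned later): each combines a shared monthly drift with offsetting daily noise, so the daily correlation is strongly negative while the monthly correlation is perfectly positive:

```python
import numpy as np

DAYS, MONTHS = 20, 12  # 20 trading days per month, one year

# Shared monthly drift: +1% per day in odd months, -1% in even months.
m = np.repeat([0.01 if k % 2 == 0 else -0.01 for k in range(MONTHS)], DAYS)
# Offsetting daily noise: +2%, -2%, +2%, ... (sums to zero within each month).
e = np.tile([0.02, -0.02], DAYS * MONTHS // 2)

a, b = m + e, m - e  # series A and B: same drift, opposite noise

daily_corr = np.corrcoef(a, b)[0, 1]

# Aggregate to monthly returns by summing each month's 20 daily values.
monthly_a = a.reshape(MONTHS, DAYS).sum(axis=1)
monthly_b = b.reshape(MONTHS, DAYS).sum(axis=1)
monthly_corr = np.corrcoef(monthly_a, monthly_b)[0, 1]

print(daily_corr, monthly_corr)  # daily is negative, monthly is positive
```

The daily noise dominates day to day (pushing the correlation below zero), but it cancels out within each month, leaving only the shared drift at the monthly frequency.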

Most correlation data that you find is based on monthly returns. That is because correlation requires a standard deviation calculation, and standard deviation is also the number widely used to calculate “volatility,” where the standard is monthly data transformed to an annual rate (the Morningstar convention). Therefore, correlation is also typically computed monthly. Interestingly, in the small sample I used (data for just two ETFs over one year, 2008), the volatility calculation was very consistent whichever data frequency I used.
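For reference, the monthly-to-annual transformation works by scaling the monthly standard deviation by the square root of twelve. A quick sketch, with made-up monthly returns (not the post's ETF data):

```python
import numpy as np

# Illustrative monthly returns for one year (made up for this example).
monthly_returns = np.array([0.021, -0.034, 0.012, 0.008, -0.051, 0.027,
                            0.015, -0.009, 0.033, -0.018, 0.004, 0.022])

monthly_vol = monthly_returns.std(ddof=1)  # sample standard deviation of monthly returns
annual_vol = monthly_vol * np.sqrt(12)     # annualized by the square-root-of-time rule

print(f"monthly vol {monthly_vol:.4f}, annualized {annual_vol:.4f}")
```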

So correlation based on monthly data appears to be mostly a convention.

My wonderful intern, Aaron Filbeck, searched all summer for more information on this for me and did turn up a few research articles, but it seems there is not a lot out there.

My point is that not only is correlation unstable, but even the measurement of it is unstable.

Unfortunately, we can see the results of this instability very clearly.




