There's an amusing article making the rounds of the internet today, about the successful investment strategy of a cat named Orlando.
A group of people at the Observer put together a fun experiment.
They asked three groups to pretend that they had 5000 pounds, and asked each of them to invest it, however they wanted, in stocks listed on the FTSE. They could only change their investments at the end of a calendar quarter. At the end of the year, they compared the results of the three groups.
Who were the three groups?
- The first was a group of professional investors - people who are, at least in theory, experts at analyzing the stock market and using that analysis to make profitable investments.
- The second was a classroom of students, who are bright, but who have no experience at investment.
- The third was an orange tabby cat named Orlando. Orlando chose stocks by throwing his toy mouse at a target board randomly marked with investment choices.
As you can probably guess by the fact that we're talking about this, Orlando the tabby won, by a very respectable margin. (Let's be honest: if the professional investors came in first, and the students came in second, no one would care.) At the end of the year, the students had lost 160 pounds on their investments. The professional investors ended with a profit of 176 pounds. And the cat ended with a profit of 542 pounds - more than triple the profit of the professionals.
Most people, when they saw this, had an immediate reaction: "see, those investors are a bunch of idiots. They don't know anything! They were beaten by a cat!"
And on one level, they're absolutely right. Investors and bankers like to present themselves as the best of the best. They deserve their multi-million dollar earnings, because, so they tell us, they're more intelligent, more hard-working, more insightful than the people who earn less. And yet, despite their self-alleged brilliance, professional investors can't beat a cat throwing a toy mouse!
It gets worse, because this isn't a one-time phenomenon: there've been similar experiments that selected stocks by throwing darts at a news-sheet, or by rolling dice, or by picking slips of paper from a hat. Many times, when people have done these kinds of experiments, the experts don't win. There's a strong implication that "expert investors" are not actually experts.
Does that really hold up? Partly yes, partly no. But mostly no.
Before getting to that, there's one thing in the article that bugged the heck out of me: the author went out of their way to defend the humans, presenting positive outcomes as the product of human intelligence and negative ones as bad luck. In fact, I think that in this experiment, it was all luck.
For example, the author discusses how the professionals were making more money than the cat up until the last quarter of the year, and presents that as human intelligence out-performing the random cat. But there's no reason to believe that. There's no evidence that there's anything qualitatively different about the last quarter that made it less predictable than the first three.
The headmaster at the students' school actually said "The mistakes we made earlier in the year were based on selecting companies in risky areas. But while our final position was disappointing, we are happy with our progress in terms of the ground we gained at the end and how our stock-picking skills have improved." Again, there's absolutely no reason to believe that the students' stock-picking skills miraculously improved in the final quarter; it's much more likely that they just got lucky.
The real question that underlies this is: is the performance of individual stocks in a stock market actually predictable, or is it predominantly random? Most of the evidence that I've seen suggests that it's a combination: on a short timescale, it's predominantly random, but on longer timescales it becomes much more predictable.
But people absolutely do not want to believe that. We humans are natural pattern-seekers. It doesn't matter whether we're talking about financial markets, pixel-patterns in a bitmap, or answers on a multiple choice test: our brains look for patterns. If you generate random data, and you look at it long enough, with enough possible strategies, you'll find a pattern that fits. But it's an imposed pattern, and it has no predictive value. It's like the images of Jesus on toast: we see patterns in noise. So people see patterns in the market, and they want to believe that it's predictable.
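That "enough possible strategies" effect is easy to demonstrate. Here's a minimal Python sketch: the "momentum rule" and all of its parameters are pure inventions for illustration, and the price data is literally a coin flip.

```python
import random

random.seed(1)

# A purely random "price" series: daily up/down moves with no real signal.
moves = [random.choice([-1, 1]) for _ in range(400)]
train, test = moves[:200], moves[200:]

def strategy_return(data, lookback, threshold):
    """A made-up momentum rule: bet on an up day whenever the sum of the
    previous `lookback` moves exceeds `threshold`."""
    total = 0
    for i in range(lookback, len(data)):
        if sum(data[i - lookback:i]) > threshold:
            total += data[i]
    return total

# Try sixty parameter combinations on the first half of the data and keep
# the one that "performs" best -- classic overfitting to noise.
candidates = [(lb, th) for lb in range(2, 12) for th in range(0, 6)]
best = max(candidates, key=lambda p: strategy_return(train, *p))

print("best rule:", best)
print("in-sample return:", strategy_return(train, *best))
print("out-of-sample return:", strategy_return(test, *best))
```

On coin-flip data, the best of sixty rules usually looks nicely profitable on the data it was fitted to, and then does nothing special on fresh data. That's the imposed pattern: it describes the noise it was found in, and predicts nothing.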
Second, people want to take responsibility for good outcomes, and excuse bad ones. If you make a million dollars betting on a horse, you're going to want to say that it was your superior judgement of the horses that led to your victory. When an investor makes a million dollars on a stock, of course he wants to say that he made that money because he made a smart choice, not because he made a lucky choice. But when that same investor loses a million dollars, he doesn't want to say that he lost a million dollars because he's stupid; he wants to say that he lost money because of bad luck, of random factors beyond his control that he couldn't predict.
The professional investors were doing well during part of the year: therefore, during that part of the year, they claim that their good performance was because they did a good job judging which stocks to buy. But when they lost money during the last quarter? Bad luck. But overall, their knowledge and skills paid off! What evidence do we have to support that? Nothing: but we want to assert that we have control, that experts understand what's going on, and are able to make intelligent predictions.
The students' performance was lousy, and if they had invested real money, they would have lost a tidy chunk of it. But their teacher believes that their performance in the last quarter wasn't luck - it was that their skills had improved. Nonsense! They were lucky.
On the general question: Are "experts" useless for managing investments?
It's hard to say for sure. In general, experts do perform better than random, but not by a huge margin, certainly not by as much as they'd like us to believe. The Wall Street Journal used to do an experiment where they compared dartboard stock selection against human experts, and against passive investment in the Dow Jones Index stocks over a one-year period. The pros won 60% of the time. That's better than chance: the experts' knowledge and skills were clearly benefiting them. But blindly throwing darts at a wall still beat the experts two times out of five!
When you actually do the math and look at the data, it appears that human judgement does have value. Taken over time, human experts do outperform random choices, by a small but significant margin.
What's most interesting is a time-window phenomenon. In most studies, human performance relative to random choice is directly related to the length of time that the investment strategy is followed: the longer the timeframe, the better the humans do. In daily investments, like day-trading, most people don't do any better than random; the performance of day-traders is pretty much in line with what you'd expect from random choice. Monthly, it's still mostly a wash. But if you look at yearly performance, you start to see a significant difference: humans typically outperform random choice by a small but definite margin. And if you look at longer timeframes, like five or ten years, you start to see really sizeable differences. The data makes it look like the daily fluctuations of the market are chaotic and unpredictable, but that there are long-term trends that we can identify and exploit.
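That timescale effect is exactly what you'd expect if each day's move were a tiny trend buried in much larger noise. Here's a toy model of that idea; the drift and noise sizes are invented, not fitted to any real market.

```python
import random

random.seed(0)

# Toy model: each day's return is a tiny upward drift buried in noise.
# Both numbers are invented for illustration.
DRIFT = 0.0003   # small, persistent daily trend
NOISE = 0.01     # day-to-day randomness, ~30x larger than the drift

def simulate(days):
    """Compound `days` of random daily returns into a final price."""
    price = 1.0
    for _ in range(days):
        price *= 1 + DRIFT + random.gauss(0, NOISE)
    return price

# On a single day, noise swamps the drift: up-days are barely more likely
# than down-days, so a day-trader can't beat a coin flip by much.
up = sum(simulate(1) > 1.0 for _ in range(10_000))
print("fraction of up days:", up / 10_000)

# Over ~10 trading years (2520 days), the tiny drift compounds into a
# trend that a long-horizon strategy could actually exploit.
print("price after 10 years:", simulate(2520))
```

In this model a day's direction is essentially a coin flip, but the ten-year outcome is dominated by the compounded drift - chaotic in the small, predictable in the large, which is just what the studies suggest.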