Wednesday, January 12, 2011

That guy who called the big one? Don’t listen to him. Inside the paradox of forecasting

In 2006, a somewhat obscure economist stood before a room full of peers at the International Monetary Fund and let loose with some good old-fashioned doomsaying. The United States was about to get hit with a ghastly housing bust, he said. The price of oil was about to skyrocket, and a particularly nasty recession was on its way, bringing with it untold ruin and misery for citizens, bankers, and businesspeople all over the world. The prophecy was dismissed initially as the mutterings of a pessimistic crank. A year later, he was proved right beyond all doubt. “He sounded like a madman in 2006,” an economist who had attended the talk later told The New York Times. “He was a prophet when he returned in 2007.”

That economist was New York University’s Nouriel Roubini. And since he called the Great Recession, he has become about as close to a household name as an economist can be without writing “Freakonomics” or being Paul Krugman. He’s been called a seer, been brought in to counsel heads of state and titans of industry — the one guy who connected the dots while the rest of us were blithely taking out third mortgages and buying investment properties in Phoenix. He’s a sought-after source for journalists, a guest on talk shows, and has even acquired a nickname, Dr. Doom. With the effects of the Great Recession still being keenly felt, Roubini is everywhere.

But here’s another thing about him: For a prophet, he’s wrong an awful lot of the time. In October 2008, he predicted that hundreds of hedge funds were on the verge of failure and that the government would have to close the markets for a week or two in the coming days to cope with the shock. That didn’t happen. In January 2009, he predicted that oil prices would stay below $40 for all of 2009, arguing that car companies should rev up production of gas-guzzling SUVs. By the end of the year, oil was a hair under $80, Hummer was on its way out, and automakers were tripping over themselves to develop electric cars. In March 2009, he predicted the S&P 500 would fall below 600 that year. It closed at over 1,115, up 23.5 percent year over year, the biggest single-year gain since 2003.

How can this be? How can someone with the insight to be so right about a major event be so wrong about so many other ones? According to a recent study, it’s simple: The people who successfully predict extreme events, and are duly garlanded with accolades, big book sales, and lucrative speaking engagements, don’t do so because their judgment is so sharp. They do it because it’s so bad.

Predicting the future is essential to modern life. When we buy a house, we’re essentially predicting that the surrounding neighborhood isn’t about to go to seed; when we start a business, we’re predicting that what we’re selling will find a buyer; when we marry, we’re predicting our mate won’t turn into an appalling, intolerable bore. Every decision, from going to a party, to voting, to professing belief in a higher power, is tightly bound to our confidence about what will happen next.

We reserve a special place in society for those who promise genuine insights into the future — who can predict what will happen in business, in sports, in politics, technology, and so on. The media landscape is rich with these experts; Wall Street pays millions of dollars every year to analysts to put a precise dollar figure on next year’s company earnings. Those who manage to get a few big calls right are rewarded handsomely, either in terms of lucrative gigs or the adoration of a species that so needs to believe that the future is in fact predictable.

But are such people really better at predicting the future than anyone else? In October of last year, Oxford economist Jerker Denrell cut directly to the heart of this question. Working with Christina Fang of New York University, Denrell dug through the data from The Wall Street Journal’s Survey of Economic Forecasts, an effort conducted every six months, in which roughly 50 economists are asked to make macroeconomic predictions about gross national product, unemployment, inflation, and so on. They wanted to see if the economists who successfully called the most unexpected events, like our Dr. Doom, had better records over the long term than those who didn’t.

To find the answer, Denrell and Fang took predictions from July 2002 to July 2005, and calculated which economists had the best record of correctly predicting “extreme” outcomes, defined for the study as either 20 percent higher or 20 percent lower than the average prediction. They compared those to figures on the economists’ overall accuracy. What they found was striking. Economists who had a better record at calling extreme events had a worse record in general. “The analyst with the largest number as well as the highest proportion of accurate and extreme forecasts,” they wrote, “had, by far, the worst forecasting record.”

By way of illustration, the authors cite the case of one Sung Won Sohn. Sung, a successful businessman who was then the CEO of Hanmi Financial Group, had made headlines with his forecasting prowess. After visiting a company that claimed it couldn’t meet the demand for $250 jeans, he had a hunch that “there must be money out there” and hiked his predictions on growth and inflation for 2005, even as other economists were predicting a drop in inflation and weaker growth. He was right, and his predictions won him the top spot among economists in the Journal’s survey for the year. This would have been a testament to some impressive intuitive faculties, had he not placed 43rd and 49th out of 55 in the previous two years. It wasn’t just Sung who came in for a beating. Across the board, Denrell and Fang found that poor forecasters are more likely to make bold predictions, and therefore, like the proverbial broken clock that is right twice a day, “they are also more likely to make extreme forecasts that turn out to be accurate.”

Their work is the latest in a long line of research dismantling the notion that predictions are really worth anything. The most notable work in the field is “Expert Political Judgment” by Philip Tetlock of the University of Pennsylvania. Tetlock analyzed more than 80,000 political predictions ventured by supposed experts over two decades to see how well they fared as a group. The answer: badly. The experts did about as well as chance. And the more in-demand the expert, the bolder, and thus the less accurate, the predictions. Research by a handful of others, Denrell included, suggests the same goes for economic forecasters. An accurate prediction — of an extreme event or even a series of nonextreme ones — can beget overconfidence, which can lead to making bolder and bolder bets, and thus, more and more errors.

So it has gone with Roubini. That one big call about the Great Recession gave him an unrivaled platform from which to issue ever more predictions, and a grand job title to match his prominence, but his subsequent predictions suggest that his foresight may be no better than your average man on the street. The curious nature of his fame calls to mind two of economist Edgar Fiedler’s wry rules for economic forecasters: “If you must forecast, forecast often,” he wrote. And: “If you’re ever right, never let ’em forget it.”

There’s no great, complex explanation for why people who get one big thing right get most everything else wrong, argues Denrell. It’s simple: Those who correctly predict extreme events tend to have a greater tendency to make extreme predictions; and those who make extreme predictions tend to spend most of the time being wrong — on account of most of their predictions being, well, pretty extreme. There are few occurrences so out of the ordinary that someone, somewhere won’t have seen them coming, even if that person has seldom been right about anything else.
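The broken-clock logic Denrell describes can be demonstrated with a toy simulation. Everything below — the two forecaster types, the noise levels, the threshold for an “extreme” call — is an invented sketch for illustration, not the study’s actual data or method:

```python
import random

def simulate(n_rounds=2000, seed=7):
    """Compare a cautious forecaster (stays near consensus) with a bold
    one (swings for the fences) on a quantity that occasionally jumps."""
    rng = random.Random(seed)
    consensus = 2.0
    stats = {"cautious": {"extreme_hits": 0, "abs_err": 0.0},
             "bold": {"extreme_hits": 0, "abs_err": 0.0}}
    for _ in range(n_rounds):
        truth = rng.gauss(consensus, 0.5)
        if rng.random() < 0.05:                      # a rare shock
            truth += rng.choice([-3.0, 3.0])
        forecasts = {"cautious": consensus + rng.gauss(0, 0.3),
                     "bold": consensus + rng.gauss(0, 2.5)}
        for name, f in forecasts.items():
            stats[name]["abs_err"] += abs(f - truth)
            # A correct "extreme" call: forecast and outcome both far
            # from consensus, and on the same side of it.
            if (abs(f - consensus) > 1.0 and abs(truth - consensus) > 1.0
                    and (f - consensus) * (truth - consensus) > 0):
                stats[name]["extreme_hits"] += 1
    for s in stats.values():
        s["mae"] = s["abs_err"] / n_rounds
    return stats

results = simulate()
```

Run it and the bold forecaster racks up far more correct extreme calls than the cautious one — and a far worse average error, the exact pattern Denrell and Fang found in the real survey data.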

But that leads to a more disconcerting question: If this is true, why do we put so much stock in expert forecasters? In a saner world than ours, those who listen to forecasters would take into account all their incorrect predictions before making a judgment. But real life doesn’t work that way. The reason is known in lab parlance as “base rate neglect.” And what it means, essentially, is that when we try to predict what’s next, or determine whether to believe a prediction, we often rely too heavily on information close at hand (a recent correct prediction, a new piece of data, a hunch) and ignore the “base rate” (the overall percentage of blown calls and failures).
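Base rate neglect has a clean Bayesian illustration. Suppose — and these probabilities are made-up assumptions, not figures from any study — that 1 forecaster in 20 is genuinely skilled and calls an extreme event correctly 60 percent of the time, while the rest are bold guessers who get lucky 20 percent of the time:

```python
# Illustrative, invented probabilities -- not estimates from the research.
p_skilled = 0.05        # base rate: 1 in 20 forecasters is genuinely skilled
hit_if_skilled = 0.60   # chance a skilled forecaster calls the extreme event
hit_if_lucky = 0.20     # chance a bold guesser gets lucky anyway

# Bayes' rule: P(skilled | one correct extreme call)
evidence = p_skilled * hit_if_skilled + (1 - p_skilled) * hit_if_lucky
posterior = p_skilled * hit_if_skilled / evidence   # ~0.136
```

Under these assumptions, even after one dead-on big call, the probability that the caller is a genuine seer rather than a lucky bold guesser is only about 14 percent — yet base rate neglect nudges us to treat the single hit as proof of skill.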

And success, as Denrell revealed in an earlier study, is an especially bad teacher. In 2003 he published a paper arguing that when people study success stories exclusively — as many avid devourers of business self-help books do — they come away with a vastly oversimplified idea of what it takes to succeed. This is because success is what economists refer to as a “noisy signal.” It’s chancy, fickle, and composed of so many moving parts that any one is basically meaningless in the context of the real world. By studying what successful ventures have in common (persistence, for instance), people miss the invaluable lessons contained in the far more common experience of failure. They ignore the high likelihood that a company will flop — the base rate — and wind up wildly overestimating the chances of success.

To look at Denrell’s work is to realize the extent to which our judgment can be warped by our bias toward success, even when failure is statistically the default setting for human endeavor. We want to believe success is more probable than it is, that it’s the result of a process we can wrap our heads around. That’s why we’re drawn to prophets, especially the ones who get one big thing right. We want to believe that someone, somewhere can foresee surprising and disruptive change. It means that there is a method to the madness of not just business, but human existence, and that it’s perceptible if you look at it from the right angle. It’s why we take lucky rabbits’ feet into casinos instead of putting our money in a CD, why we quit steady jobs to start risky small businesses. On paper, these too may indeed resemble sucker bets placed by people with bad judgment. But cast in a certain light, they begin to look a lot like hope.

By Joe Keohane

January 9, 2011

Saturday, January 1, 2011

P. Arthur Huprich's list of rules

• Commandment #1: “Thou Shall Not Trade Against the Trend.”

• Portfolios heavy with underperforming stocks rarely outperform the stock market!

• There is nothing new on Wall Street. There can’t be because speculation is as old as the hills. Whatever happens in the stock market today has happened before and will happen again, mostly due to human nature.

• Sell when you can, not when you have to.

• Bulls make money, bears make money, and “pigs” get slaughtered.

• We can’t control the stock market. The very best we can do is to try to understand what the stock market is trying to tell us.

• Understanding mass psychology is just as important as understanding fundamentals and economics.

• Learn to take losses quickly, don’t expect to be right all the time, and learn from your mistakes.

• Don’t expect to consistently buy at the bottom or sell at the top; it can rarely be done.

• When trading, remain objective. Don’t have a preconceived idea or prejudice. Said another way, “the great names in Trading all have the same trait: An ability to shift on a dime when the shifting time comes.”

• Any dead fish can go with the flow. Yet, it takes a strong fish to swim against the flow. In other words, what seems “hard” at the time is usually, over time, right.

• Even the best looking chart can fall apart for no apparent reason. Thus, never fall in love with a position but instead remain vigilant in managing risk and expectations. Use volume as a confirming guidepost.

• When trading, if a stock doesn’t perform as expected within a short time period, either close it out or tighten your stop-loss point.

• As long as a stock is acting right and the market is “in-gear,” don’t be in a hurry to take a profit on the whole position. Scale out instead.

• Never let a profitable trade turn into a loss, and never let an initial trading position turn into a long-term one because it is at a loss.

• Don’t buy a stock simply because it has had a big decline from its high and is now a “better value”; wait for the market to recognize “value” first.

• Don’t average trading losses, meaning don’t put “good” money after “bad.” Adding to a losing position will lead to ruin. Ask the Nobel Laureates of Long-Term Capital Management.

• Human emotion is a big enemy of the average investor and trader. Be patient and unemotional. There are periods where traders don’t need to trade.

• Wishful thinking can be detrimental to your financial wealth.

• Don’t make investment or trading decisions based on tips. Tips are something you leave for good service.

• Where there is smoke, there is fire, or there is never just one cockroach: In other words, bad news is rarely a one-time event; more usually follows.

• Realize that a loss in the stock market is part of the investment process. The key is not letting it turn into a big one as this could devastate a portfolio.

• Said another way, “It’s not the ones that you sell that keep going up that matter. It’s the one that you don’t sell that keeps going down that does.”

The percentage gain necessary to get back to even is always larger than the loss that preceded it: a fractional loss L requires a gain of L / (1 − L) to recover, so a 25 percent loss takes a 33 percent gain, and a 50 percent loss takes a 100 percent gain.

• Your odds of success improve when you buy stocks when the technical pattern confirms the fundamental opinion.

• As many participants came to realize from 1999 to 2010, a stretch in which the S&P 500 made no net upside progress, you can lose money even in the “best companies” if your timing is wrong. Yet, if the technical pattern dictates, you can make money on a short-term basis even in stocks that have a “mixed” fundamental opinion.

• To the best of your ability, try to keep your priorities in line. Don’t let the “greed factor” that Wall Street can generate outweigh other just as important areas of your life. Balance the physical, mental, spiritual, relational, and financial needs of life.

• Technical analysis is a windsock, not a crystal ball. It is a skill that improves with experience and study. Always be a student; there is always someone smarter than you!
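One of the rules above points to the asymmetry of losses: the gain needed to get back to even always exceeds the loss itself. A minimal sketch of the arithmetic (the function name and the loss levels shown are just illustrative choices):

```python
def required_gain(loss):
    """Fractional gain needed to fully recover a fractional loss.
    E.g. required_gain(0.25) -> 0.333...: a 25% loss needs a 33.3% gain."""
    return loss / (1.0 - loss)

# Build a small break-even table for a few loss levels.
table = {f"{loss:.0%}": f"{required_gain(loss):.1%}"
         for loss in (0.10, 0.20, 0.30, 0.40, 0.50)}
```

Here `table` maps each loss to the recovery gain it demands — a 50% loss requires a 100% gain — which is why the rules keep hammering on cutting losses quickly.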