Simple Models Outperform Experts

Throughout history, and across an ever-growing list of fields, behavioral science research has repeatedly pointed to the superiority of model-driven, systematic decision making over expert-driven, ‘gut-instinct’ decision making. By now you are probably aware of just how flawed human judgment can be. To gauge the extent of those flaws, Grove and Meehl (1996) reviewed more than 130 studies comparing the predictions of a mechanical method with those of a clinical method. The former uses a formal, algorithmic procedure, such as an equation, to reach decisions, whereas the latter relies on human judgment based on informal contemplation and discussion with others. Whether it was psychologists predicting bipolar disorder among patients, parole boards predicting criminal recidivism among parolees, or loan officers predicting bankruptcy among borrowers, Grove and Meehl found the mechanical, model-driven method superior. In short, they conclude that the predictions of simple models are almost invariably equal or superior to those of human judgment.
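
To make the distinction concrete, a mechanical method can be as simple as a fixed equation plus a cutoff, applied identically to every case. The sketch below is purely illustrative and not taken from Grove and Meehl: the inputs, weights, and cutoff are hypothetical, and it simply shows what a rules-based prediction looks like when intuition has no room to intervene.

```python
# Hypothetical example of a "mechanical" prediction rule: a fixed
# linear equation and a fixed cutoff, applied the same way every time.
# The variables and weights are illustrative only.

def recidivism_risk_score(prior_offenses: int, age_at_release: int,
                          completed_program: bool) -> float:
    """Combine a few observable facts into a single risk score."""
    return (0.6 * prior_offenses
            - 0.03 * age_at_release
            - 1.5 * (1 if completed_program else 0))

def predict_reoffend(prior_offenses: int, age_at_release: int,
                     completed_program: bool, cutoff: float = 1.0) -> bool:
    """The mechanical decision: same equation, same cutoff, no override."""
    score = recidivism_risk_score(prior_offenses, age_at_release, completed_program)
    return score > cutoff

# The same facts always produce the same prediction.
print(predict_reoffend(prior_offenses=4, age_at_release=23, completed_program=False))
```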

What happens when the experts are given the model’s forecast before making their own prediction, you may ask, do the results change? Blending expert judgment or intuition with the output of a mechanical model sounds intuitively like a good idea. Grove and Meehl investigated this point as well, and the evidence is somewhat counter-intuitive: although access to the model’s output improves human predictions, the result is still inferior to the purely mechanical method. In summary, they conclude that humans destroy the benefits of systematic, rules-based models the minute they try to ‘improve’ on their predictions.

Despite a trove of academic evidence pointing to the predictive superiority of models over human judgment, financial market participants have been slow to adapt. Perhaps investors, with their fancy statistics and complex math, are simply more sophisticated than doctors and judges (hint: they’re not!). Intrigued by this assertion, Joel Greenblatt set out to test empirically whether the superiority of a systematic, rules-based approach over human judgment holds true in financial markets as well. Specifically, his study compared the performance of a portfolio consisting strictly of stocks identified by Greenblatt’s Magic Formula with the average performance of portfolios run by a group of experts who, using their own judgment and intuition, could choose between following the model’s predictions or overriding them. To the despair of many investors, Greenblatt’s study supports the notion that trying to improve on a mechanical method’s predictions backfires: the simple model portfolios significantly outperformed the average expert’s portfolio over the testing period.
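
For readers unfamiliar with it, the Magic Formula is exactly the kind of simple, mechanical rule discussed above: rank stocks on a measure of cheapness (earnings yield) and a measure of quality (return on capital), add the two ranks, and buy the best combined ranks. The sketch below is a simplified illustration of that ranking logic, not a reproduction of Greenblatt’s exact screen; the tickers and figures are made up.

```python
# Simplified sketch of a Magic Formula style ranking: rank on earnings
# yield (cheapness) and return on capital (quality), then combine the
# ranks. The tickers and numbers below are hypothetical.

stocks = {
    # ticker: (earnings_yield, return_on_capital)
    "AAA": (0.12, 0.35),
    "BBB": (0.08, 0.50),
    "CCC": (0.15, 0.20),
    "DDD": (0.05, 0.10),
}

def combined_rank(data: dict) -> list:
    """Sum each stock's rank on both measures; a lower total is better."""
    by_yield = sorted(data, key=lambda t: data[t][0], reverse=True)
    by_roc = sorted(data, key=lambda t: data[t][1], reverse=True)
    totals = {t: by_yield.index(t) + by_roc.index(t) for t in data}
    return sorted(totals, key=totals.get)

# The model portfolio simply buys the top-ranked names, with no overrides.
print(combined_rank(stocks)[:2])
```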
