Banking on the Facts - what is expert advice worth?


When moving on to my next assignment a few years ago I received the visit of three of my new colleagues. I was quite surprised to hear the first confess that “the first thing you have to learn here is that everyone lies about everything”. The second visitor wasn’t any more inspiring in vouching that she alone told the truth. The third visitor — as they say in French “jamais deux sans trois” — tried to “reassure” me that previous two would be struck by lightning if they ever spoke honestly. In a Smullyanian world in which people are either eternally truthful or disingenuous, which of my three visitors was to be believed?[i]

This story still stands out in my mind in a post-truth world increasingly marked by fake news and alternative facts. Although cases of colleagues blatantly distorting the data are thankfully rare, separating fact from fiction is far from easy. The complex problems facing management today defy the notion of a single version of the truth. These challenges make choosing the best course of action all the more valuable, whether it be in qualifying the “intangibles” of business, improving management effectiveness, or measuring customer satisfaction. Enlisting the experts’ opinions doesn’t offer much of a fix, for they often appear more at ease reminiscing about the past than plotting a course for the future. If a manager’s job is to take decisions based on the facts, how can he or she bank on expert advice?

Why do so many experts, even in good faith, offer poor counsel? The handicaps of risk, uncertainty and ambiguity often hinder the expert’s ability to evaluate a new challenge objectively. The reliance on “machine learning” doesn’t solve the problem, for many of the critical variables in each algorithm rely on the subjective estimates of these very same experts. Their advised opinions are rarely adjusted for their faith in these models: they calculate objective probabilities without taking into account their own biases, doubts and understanding of the problems at hand. Julia Galef refers to this tendency as “motivated reasoning”: experts (like most everyone else) tend to privilege data that corresponds to their own state of mind.[ii] To paraphrase the humorist Josh Billings, “It’s not what the experts know that’s the problem, it’s what they know for sure that just isn’t so”.[iii]

Without resorting to costly and time-consuming uses of “big data” and “artificial neural networks”, several relatively simple methods can be used to corroborate and eventually calibrate an expert’s powers of observation. As Doug Hubbard suggests, applying the appropriate methodologies to very small samples can substantially reduce the sources of uncertainty.[iv] He advocates the “rule of five”, in which taking five random samples from a population provides 93.75 percent confidence that the population’s median lies between the lowest and highest of the five values.[v] In a similar vein, capture/recapture methods efficiently estimate the size of large populations based on hypergeometric distributions.[vi] Spot sampling provides another alternative by taking random snapshots of phenomena over time rather than tracking them constantly. Finally, clustered sampling produces surprisingly good results by simply identifying a random group of observations, and then sampling within the group.[vii]
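The arithmetic behind the rule of five is easy to verify by simulation, and the Lincoln–Petersen estimator is the simplest form of capture/recapture. Here is a minimal sketch in Python, using an invented population of customer handling times as a stand-in for any business metric:

```python
import random
import statistics

def rule_of_five_hits(population, trials=10_000, seed=1):
    """Share of trials in which the population median lands between the
    smallest and largest of five random samples (Hubbard's "rule of five").
    Theory: 1 - 2 * (1/2)**5 = 0.9375."""
    rng = random.Random(seed)
    median = statistics.median(population)
    hits = 0
    for _ in range(trials):
        sample = rng.sample(population, 5)
        if min(sample) <= median <= max(sample):
            hits += 1
    return hits / trials

def lincoln_petersen(marked, second_sample, marked_in_second):
    """Simplest capture/recapture estimate of a population's size:
    N is approximately marked * second_sample / marked_in_second."""
    return marked * second_sample / marked_in_second

# Hypothetical data: 10,000 customer handling times, in minutes.
rng = random.Random(0)
population = [rng.gauss(35, 10) for _ in range(10_000)]

print(rule_of_five_hits(population))   # close to 0.9375
print(lincoln_petersen(50, 40, 10))    # 200.0
```

If we mark 50 customers in a first sample, then find 10 of them among a second sample of 40, the estimated population is roughly 200 — a surprisingly cheap measurement for something that sounds unmeasurable.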

The riddle presented at the start of this contribution is an application of the “knights and knaves” puzzle: we assume that knights can never lie and that knaves will never tell the truth. Since the statements are contradictory, testing each hypothesis in turn suggests that only the second visitor is true at heart. In real life, both the context and the solution are more complicated, for the lines between fact and fiction are often blurred. Learning to use the data at hand to corroborate and calibrate each expert’s claim can help us reduce the effects of the experts’ own cognitive biases. Improving decision-making is at the heart of the Business Analytics Institute, our Summer School and Master Classes. Improving your own decision-making is only a click away.
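For readers who like to check their logic mechanically, the riddle can be encoded in a few lines of Python. This sketch simply verifies that the world in which only the second visitor tells the truth is self-consistent, with each statement translated as a claim about who is truthful:

```python
def statements(truthful):
    """Evaluate the three visitors' statements in a world where
    truthful[i] says whether visitor i always tells the truth."""
    s1 = not any(truthful)                    # "everyone lies about everything"
    s2 = truthful == (False, True, False)     # "she alone tells the truth"
    s3 = not truthful[0] and not truthful[1]  # "the first two never speak honestly"
    return (s1, s2, s3)

def consistent(truthful):
    # A world is consistent when each visitor's statement
    # matches their nature: knights say true things, knaves false ones.
    return all(said == is_knight
               for said, is_knight in zip(statements(truthful), truthful))

# The solution given above: only the second visitor tells the truth.
print(consistent((False, True, False)))   # True
```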

Lee Schlenker is a Professor at ESC Pau and a Principal in the Business Analytics Institute. His LinkedIn profile can be viewed at You can follow us on Twitter at


[i] Adapted from Smullyan, R. (1978). What is the Name of this Book? Prentice-Hall

[ii] Galef, J. (2016), Humans Are Great at Arguing but Bad at Reasoning. Heleo Conversations

[iii] Billings, J. (1874). Encyclopedia and Proverbial Philosophy of Wit and Humor

[iv] Bayes’ theorem will be developed in a separate post.

[v] Hubbard, D. (2014), How to Measure Anything: Finding the Value of “Intangibles” in Business

[vi] Ma, D., (2010), The capture/recapture method, A blog on probability and statistics

[vii] Stat Trek. (2017), What is cluster sampling?