At Rose & Associates’ Risk Coordinators’ workshop at Tullow Oil’s offices in Chiswick, west London, at the end of January, Graeme presented his work on how to identify and quantify systematic bias in probabilistic predictions, and on how much it costs to be rubbish at risking (spoiler alert: a lot). The full presentation can be viewed here. The highlights are sketched out below.

  • We cannot conclude anything about a probabilistic assessment from a single outcome, because the conclusion must be just as uncertain as the outcome.
  • To reduce the uncertainty, we aggregate sequences of outcomes: for example, predicting the number of successes in a sequence of success/failure events (like drilling exploration wells), or predicting the total volume discovered in a sequence of oil discoveries.
  • In performing this aggregation, much of the detail of the individual probability distributions is lost. Only the mean and variance of the constituent distributions survive the convolution (the first sketch after this list verifies this numerically).
  • (Aside) This is OK, because the mean and variance are the most important parameters for portfolio optimization (with some reservations, notably covariance).
  • The consequence of aggregation is that we can only assess the aggregated mean and variance of our constituent distributions. We cannot hope to find random errors in the evaluation of single events, or to audit more detail of the constituent distributions than their mean and variance. We can only discover systematic biases that affect the mean and variance of the distributions.
  • There are four kinds of bias:
    • Mean optimism (over-predicting probability of success, over-predicting means of volume distributions)
    • Mean pessimism (under-predicting probability of success, under-predicting means of volume distributions)
    • Variance confidence (under-predicting variance – essentially predicting less variation between means and outcomes than is observed). In probabilities, this corresponds to polarizing probabilities towards certain failure and certain success, away from the high-variance mid-range values; in volumes, it corresponds to ranges on volume distributions that are too narrow.
    • Variance vagueness (over-predicting variance – essentially predicting more variation between means and outcomes than is observed). In probabilities, this corresponds to pulling probabilities back towards the high-variance mid-range base rates; in volumes, it corresponds to ranges on volume distributions that are too wide.
  • By constructing a model of these biases, using Bayes’ theorem to provide spurious information corresponding to each bias, we can mimic the effects of these biases on predictions (the second sketch below shows one way to do this).
  • By comparing biased predictions with outcomes sampled from unbiased probabilities (the third sketch below illustrates the detection step), we can
    • Learn how to detect, identify and quantify biases
    • Predict the effect of biases on the decisions by which we create and destroy value
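
A minimal sketch of the aggregation point in Python (the portfolio size and probabilities are illustrative assumptions, and the wells are treated as independent): the aggregate mean is the sum of the constituent means, the aggregate variance is the sum of the constituent variances, and a Monte Carlo simulation recovers both regardless of the details of the individual distributions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical chance-of-success estimates for a 20-well exploration portfolio.
p = rng.uniform(0.05, 0.45, size=20)

# Only the mean and variance survive the convolution: for independent
# success/failure events, the aggregate mean is sum(p_i) and the
# aggregate variance is sum(p_i * (1 - p_i)).
predicted_mean = p.sum()
predicted_var = (p * (1 - p)).sum()

# Monte Carlo check: simulate many drilling campaigns and compare.
outcomes = rng.random((100_000, p.size)) < p  # True = discovery
n_success = outcomes.sum(axis=1)

print(f"predicted mean {predicted_mean:.2f}, simulated {n_success.mean():.2f}")
print(f"predicted var  {predicted_var:.2f}, simulated {n_success.var():.2f}")
```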
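One plausible way to build the bias model (a sketch, not necessarily the exact construction in the presentation, and the helper name is hypothetical) is to work in odds space and apply a spurious Bayesian update to each true probability: a constant spurious likelihood ratio shifts the mean, mimicking optimism or pessimism, while raising the odds to a power mimics variance confidence or vagueness.

```python
import numpy as np

def bias(p, log_lr=0.0, power=1.0):
    """Distort a true probability p with a spurious Bayesian update
    (hypothetical helper for illustration).

    log_lr > 0 mimics mean optimism (spurious evidence for success);
    log_lr < 0 mimics mean pessimism. power > 1 mimics variance
    confidence (probabilities polarized towards certain failure and
    certain success); power < 1 mimics variance vagueness
    (probabilities pulled back towards mid-range values).
    """
    odds = p / (1 - p)
    biased_odds = np.exp(log_lr) * odds**power
    return biased_odds / (1 + biased_odds)

p = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
print(bias(p, log_lr=0.7))  # optimist: every probability nudged upwards
print(bias(p, power=2.0))   # confident: pushed towards 0 and 1
print(bias(p, power=0.5))   # vague: pulled back towards 0.5
```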
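And a simple detection sketch (again with illustrative numbers and thresholds, not the presentation's method): sample outcomes from unbiased probabilities, score them against a deliberately over-confident set of predictions, and test whether the aggregate mean and the squared deviations match what the predictions promise.

```python
import numpy as np

rng = np.random.default_rng(7)

# True (unbiased) chances of success for a long sequence of wells.
p_true = rng.uniform(0.1, 0.9, size=500)

# A hypothetical over-confident assessor: probabilities polarized
# towards 0 and 1 by squaring the odds.
odds = p_true / (1 - p_true)
p_pred = odds**2 / (1 + odds**2)

# Outcomes are sampled from the unbiased probabilities.
x = (rng.random(p_true.size) < p_true).astype(float)

# Mean test: do observed successes match the predicted aggregate mean?
z_mean = (x.sum() - p_pred.sum()) / np.sqrt((p_pred * (1 - p_pred)).sum())

# Variance test: are squared deviations from the predictions larger or
# smaller than the predicted variance? Larger suggests variance
# confidence; smaller suggests variance vagueness.
var_ratio = ((x - p_pred) ** 2).sum() / (p_pred * (1 - p_pred)).sum()

print(f"mean z-score {z_mean:+.2f} (|z| > 2 suggests mean bias)")
print(f"variance ratio {var_ratio:.2f} (> 1 suggests variance confidence)")
```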
