R. Clemen and R. Winkler (1999), "Combining Probability Distributions From Experts in Risk Analysis", Risk Analysis, 19.
R. Cooke and L. Goossens (2000), "Procedures Guide for Structural Expert Judgement in Accident Consequence Modelling", Radiation Protection Dosimetry, 90.
R. Clemen (2008), "Comment on Cooke's classical method", Reliability Engineering & System Safety, 93.
R. Cooke and L. Goossens (2008), "TU Delft expert judgment data base", Reliability Engineering & System Safety, 93.
R. Cooke and E. Jager (1998), "A Probabilistic Model for the Failure Frequency of Underground Gas Pipelines", Risk Analysis, 18.
M. Stone (1961), "The Opinion Pool", Annals of Mathematical Statistics, 32.
V. Bier (2004), "Implications of the research on expert overconfidence and dependence", Reliability Engineering & System Safety, 85.
R. Cooke (1991), Experts in Uncertainty: Opinion and Subjective Probability in Science.
Murray Hochberg and P. Hoel (1964), "Introduction to Mathematical Statistics", American Mathematical Monthly, 55.
R. Cooke (2008), "Response to discussants", Reliability Engineering & System Safety, 93.
Purpose – The purpose of this paper is to compare various linear opinion pooling models for aggregating probability judgments and to determine whether Cooke's performance weighting model can sift out better calibrated experts and produce a better aggregated distribution.
Design/methodology/approach – The leave‐one‐out cross‐validation technique is adopted to perform an out‐of‐sample comparison of Cooke's classical model, the equal weight linear pooling method, and the best expert approach.
Findings – Both aggregation models significantly outperform the best expert approach, indicating the need for inputs from multiple experts. The performance score for Cooke's classical model drops considerably in out‐of‐sample analysis, indicating that Cooke's performance weight approach might have been slightly overrated before, and the performance weight aggregation method no longer dominantly outperforms the equal weight linear opinion pool.
Research limitations/implications – The results show that using seed questions to sift out better calibrated experts may still be a feasible approach. However, because the superiority of Cooke's model as discussed in previous studies can no longer be claimed, it remains an open question whether the cost of the extra effort spent generating and evaluating seed questions is justifiable.
Originality/value – Understanding the performance of various models for aggregating experts' probability judgments is critical for decision and risk analysis. Furthermore, the leave‐one‐out cross‐validation technique used in this study achieves more objective evaluations than previous studies.
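The design described above can be illustrated with a minimal sketch: experts state discrete probability distributions over seed questions with known answers; performance weights for each expert are computed from the remaining seed items while one item is held out, and the held-out item is then aggregated with a linear opinion pool. This is an illustrative toy, not the paper's actual implementation: the scoring rule here (probability assigned to the realised bin) is a hypothetical stand-in for Cooke's calibration and information scores, and all function names are assumptions.

```python
def linear_pool(dists, weights):
    """Linear opinion pool: a weighted average of expert distributions.

    dists   -- one discrete distribution (list of bin probabilities) per expert
    weights -- one non-negative weight per expert (normalised internally)
    """
    total = sum(weights)
    n_bins = len(dists[0])
    return [sum(w * d[i] for w, d in zip(weights, dists)) / total
            for i in range(n_bins)]

def performance_weights(expert_dists, truths, exclude=None):
    """Toy performance weight: each expert's mean probability on the
    realised bin of the seed questions, leaving out item `exclude`.
    (A stand-in for Cooke's calibration/information scoring.)

    expert_dists -- per expert, a list of distributions over the seed items
    truths       -- index of the realised bin for each seed item
    """
    weights = []
    for e_dists in expert_dists:
        scores = [d[t] for j, (d, t) in enumerate(zip(e_dists, truths))
                  if j != exclude]
        weights.append(sum(scores) / len(scores))
    return weights

def loo_pooled_scores(expert_dists, truths):
    """Leave-one-out: for each seed item, weight experts on the other items,
    pool their distributions on the held-out item, and record the probability
    the pool assigned to the realised bin."""
    scores = []
    for i, t in enumerate(truths):
        w = performance_weights(expert_dists, truths, exclude=i)
        pooled = linear_pool([e[i] for e in expert_dists], w)
        scores.append(pooled[t])
    return scores
```

With a well-calibrated expert and a poorly calibrated one, the performance-weighted pool tilts toward the former, while the equal weight pool simply averages the two; the out-of-sample comparison in the paper asks whether that tilt survives on held-out questions.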
Journal of Modelling in Management – Emerald Publishing
Published: Jan 1, 2009
Keywords: Modelling; Probability theory; Uncertainty management; Decision making