
dc.contributor.advisor: Dražen Prelec and Joshua B. Tenenbaum
dc.contributor.author: McCoy, John Patrick
dc.contributor.other: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.date.accessioned: 2019-03-01T19:52:48Z
dc.date.available: 2019-03-01T19:52:48Z
dc.date.copyright: 2018
dc.date.issued: 2018
dc.identifier.uri: http://hdl.handle.net/1721.1/120624
dc.description: Thesis: Ph. D., Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, 2018.
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages 129-140).
dc.description.abstract: In many situations, from economists predicting unemployment rates to chemists estimating fuel safety, individuals have differing opinions or predictions. We consider the wisdom-of-the-crowd problem of aggregating the judgments of multiple individuals on a single question, when no outside information about their competence is available. Many standard methods select the most popular answer, after correcting for variations in confidence. Using a formal model, we prove that any such method can fail even if based on perfect Bayesian estimates of individual confidence, or, more generally, on Bayesian posterior probabilities. Our model suggests a new method for aggregating opinions: select the answer that is more popular than people predict. We derive theoretical conditions under which this new method is guaranteed to work, and generalize it to questions with more than two possible answers. We conduct empirical tests in which respondents are asked for both their own answer to some question and their prediction about the distribution of answers given by other people, and show that our new method outperforms majority and confidence-weighted voting in a range of domains including geography and trivia questions, laypeople and professionals judging art prices, and dermatologists evaluating skin lesions. We develop and evaluate a probabilistic generative model for crowd wisdom, including applying it across questions to determine individual respondent expertise and comparing it to various Bayesian hierarchical models. We extend our new crowd wisdom method to operate on domains where the answer space is unknown in advance, by having respondents predict the most common answers given by others, and discuss performance on a cognitive reflection test as a case study of this extension.
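The aggregation rule the abstract describes, selecting the answer that is more popular than people predict, can be sketched roughly as follows. This is an illustrative reconstruction from the abstract alone, not the thesis's own code; the function name `surprisingly_popular` and the data layout (one answer plus one prediction dict per respondent) are assumptions:

```python
from collections import Counter

def surprisingly_popular(answers, predictions):
    """Pick the answer that is more popular than respondents predicted.

    answers: each respondent's own answer (hashable labels).
    predictions: one dict per respondent mapping each answer to the
        fraction of other people that respondent predicts will give it.
    Sketch of the rule in the abstract; details here are illustrative.
    """
    n = len(answers)
    # Actual frequency of each answer in the crowd.
    actual = {a: c / n for a, c in Counter(answers).items()}
    options = set(actual) | {a for p in predictions for a in p}
    # Mean predicted frequency of each answer across respondents.
    predicted = {a: sum(p.get(a, 0.0) for p in predictions) / n
                 for a in options}
    # Choose the answer whose actual frequency most exceeds its
    # predicted frequency: the "surprisingly popular" answer.
    return max(options, key=lambda a: actual.get(a, 0.0) - predicted[a])
```

For example, if 60% of respondents answer "yes" but respondents on average predict that 80% will say "yes", then "no" is more popular than predicted (40% actual vs. 20% predicted) and is selected, even though "yes" wins a majority vote.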
dc.description.statementofresponsibility: by John Patrick McCoy
dc.format.extent: 140 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Brain and Cognitive Sciences.
dc.title: Extracting more wisdom from the crowd
dc.type: Thesis
dc.description.degree: Ph. D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
dc.identifier.oclc: 1086610400

