BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//jEvents 2.0 for Joomla//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
UID:064f3220b0c355310ba1ab39c9025ee8
CATEGORIES:Seminars
CREATED:20250108T112632Z
SUMMARY:Stefano della Vigna - University of California, Berkeley
DESCRIPTION:<p><em><strong>Forecasting Social Science: Evidence from 10
 0 Projects</strong></em></p><p>Abstract:</p><p style="text-align: justi
 fy;">Increasingly, researchers gather forecasts ex ante about the resul
 ts of their research studies. Ex post, this allows for a comparison o
 f these forecasts with the results to capture the direction and exten
 t of learning and to counter the “we knew it already” audience respons
 e. But what do we know about the accuracy of these forecasts? We us
 e a unique data set from the Social Science Prediction Platform for al
 l 100 of the projects posted in the 2020-24 period, including result
 s for 60% of the projects. This data set contains detailed informatio
 n on the projects and the forecasters, including tracking forecasts a
 cross projects. Using this data set, we examine 10 questions about th
 ese forecasts: (1) the average predicted effect size; (2) the compari
 son to realized effect sizes; (3) the predictability of effect size
 s with average forecasts. Turning to the accuracy of individual forec
 asts, we compare (4) academics to non-academics, (5) experts in a fie
 ld to non-experts, and (6) panelists to other forecasters, and we est
 imate (7) the extent of learning over time and (8) the impact of conf
 idence on accuracy. Finally, we consider (9) whether there are superf
 orecasters with consistently higher accuracy, and (10) we estimate th
 e extent of wisdom of crowds. We draw the implications of the finding
 s for forecasting and updating in science.</p>
DTSTAMP:20260517T104857Z
DTSTART:20250505T163000Z
DTEND:20250505T180000Z
SEQUENCE:0
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR