UConn Survey Research Alumni Explore Pre-Election Polling Methods & Accuracy
72nd Annual Conference of the American Association for Public Opinion Research

May 17-22, 2017, New Orleans, LA

In the wake of the 2016 presidential election, survey practitioners and scholars from across the country searched for an explanation of how so many public opinion polls could have mischaracterized the race; many considered the challenges of evaluating, comparing, and aggregating data drawn from an increasingly wide variety of methodologies. Polling accuracy has been an important research topic since the late 1940s, when the industry looked for ways to avoid another incident like the missed prediction of the 1948 election. The topic became prominent again in 1997, after some commentators charged that the 1996 U.S. presidential election had been a disaster for pollsters exceeding the magnitude of 1948, and it remained salient as many 2012 pre-election polls underestimated the Democratic vote share and much of the 2014 midterm election polling overestimated it.

Panel members Chase Harrison, Ph.D., Associate Director, Program on Survey Research at the Institute for Quantitative Social Science at Harvard University; Rich Clark, Ph.D., Professor at Castleton University and Director of the Polling Institute; Stephanie Marken, MA, Methodologist at Gallup; and Lydia Saad, MA, Senior Editor at Gallup, joined by Jennifer Dineen, Ph.D., Program Director, UConn Graduate Program in Survey Research, participated in a discussion on 2016 Pre-Election Polling: Methods and Accuracy in Context. The panel extended the discussion of pre-election polls by exploring the relationships among survey mode, method of data collection, sampling frame, and the accuracy of the 2016 presidential election poll forecasts. Their diverse set of papers examined the relationship between pre-election polling methods and accuracy across a variety of years, races, and electoral contexts.

UConn Alumni Contributions:

  • Simply Unpredictable: The Relationship between Methodology and Bias in Pre-Election Vote Share Estimates
  • Comparing 2016 Election Results from Traditional Phone Studies with Web-based Methodologies
  • Pre-Election Polling and Sampling Frame Decisions: A Case Study in Vermont

For additional information, visit http://www.aapor.org.