Leveraging Large Language Models for Automated In-Depth Interviewing
Alexander Wuttke, Matthias Aßenmacher, Quirin Würschinger
Standardized surveys are the workhorse of public opinion research. While tremendously valuable in many regards, asking researcher-defined questions and letting respondents choose from a researcher-defined set of response options has significant drawbacks. In particular, this approach struggles to map belief systems. It is the social science analogue of Schrödinger’s cat: the measurement itself creates what is to be measured. By asking respondents about a topic they never previously considered and suggesting suitable answers to them, standardized surveys risk shaping the very attitudes they set out to measure, particularly on topics where attitudes are weak or non-existent. Unstructured or semi-structured in-depth interviews that simply let respondents talk mitigate these problems, but their costs prohibit scaling. This project seeks to combine the large scale of standardized surveys with the depth of semi-structured interviews. We use large language models to act as interviewers with real-life respondents, within a modular framework where, for each respondent input, multiple API calls are chained.
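As a rough illustration of what such a chained, modular interviewing loop could look like, the sketch below uses the OpenAI Python client; the model name, prompts, and helper functions are illustrative assumptions, not the project's actual implementation:

```python
# Illustrative sketch of a modular LLM interviewing loop (not the authors' code).
# Each respondent input triggers a chain of API calls: one to evaluate the answer,
# one to generate the next question. Model name and prompts are assumptions.
from openai import OpenAI

client = OpenAI()        # expects OPENAI_API_KEY in the environment
MODEL = "gpt-4o-mini"    # placeholder model name


def call_llm(system_prompt: str, user_content: str) -> str:
    """One API call with a task-specific system prompt."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
    )
    return response.choices[0].message.content


def interview_turn(transcript: list[str], respondent_answer: str) -> str:
    """Chain two calls per respondent input: evaluate the answer, then ask on."""
    transcript.append(f"Respondent: {respondent_answer}")
    history = "\n".join(transcript)

    # Call 1: judge whether the answer is substantive enough to move on.
    evaluation = call_llm(
        "You evaluate interview answers. Reply 'PROBE' if the answer is vague "
        "or superficial, otherwise reply 'NEXT_TOPIC'.",
        history,
    )

    # Call 2: generate the next question, conditioned on the evaluation.
    instruction = (
        "Ask a neutral, non-leading follow-up probe on the same point."
        if "PROBE" in evaluation
        else "Move on to the next topic with an open-ended question."
    )
    question = call_llm(
        f"You are a semi-structured interviewer. {instruction}",
        history,
    )
    transcript.append(f"Interviewer: {question}")
    return question
```

Keeping evaluation and question generation in separate calls is one way to make each module of such a framework independently replaceable and auditable.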
When Must We Limit Free Speech? Determinants of Canceling in Academia
Claudia Diehl, Matthias Revers, Richard Traunmüller, Nils B. Weidmann, Alexander Wuttke
We assess student support for restricting free speech on controversial topics on university campuses in Germany. Using a vignette design developed in an adversarial collaboration, we analyze which aspects of a controversial statement lead to demands for its cancellation. We show that conservative statements are rejected more often than progressive ones. Moreover, only conservative statements, not progressive ones, generate more support for cancellation when they are framed as opinions rather than scientific findings, and when they are accompanied by political claims. Our study reveals a tendency to silence objectionable views on ideological grounds rather than to challenge them.
A Field Experiment on Democratic Persuasion
Alexander Wuttke, Florian Foos
Ordinary citizens are considered bulwarks against democratic backsliding. Yet, beneath the surface, citizens’ commitment to democracy is sometimes fragile, and crises exacerbate existing anxieties and discontent. We propose “democratic persuasion” as a theory-driven, actionable intervention to foster the resilience of citizens’ commitment to liberal democracy. “Democratic persuasion” requires that political elites actively make the case for democracy and discuss democracy’s inherent trade-offs while engaging existing doubts and misperceptions. During the Covid-19 pandemic, which brought these trade-offs to the fore, we invited citizens on Facebook to attend one of sixteen Zoom town halls to discuss pandemic politics with a German member of parliament. Each MP conducted two town halls, and we randomly assigned in which of the two they employed “democratic persuasion”. The field experiment demonstrates substantial effects on some, but not all, indicators of democratic commitment, showcasing the academic and practical value of this emerging line of research on strengthening the societal foundations of liberal democracies.
How Many Replicators Does It Take…? Measuring Researcher Variability Using a Crowdsourced Replication Experiment
Nate Breznau, Eike Rinke, Alexander Wuttke et al.
We expect that a population of researchers aiming to test the same hypothesis using the same statistical models and data will show variability in their results. This researcher variability is a potential threat to the reliability of any single study. Careful review or curation of researcher choices, by both the researchers themselves and external observers, should eliminate much of this variability. But what if it does not eliminate all of it? In other words, what if, despite their best efforts, researchers’ results are not reliable across researchers? To investigate this phenomenon, we consider two types of variability: non-routine researcher variability, such as mistakes or misunderstandings, which can be eliminated from the research process through careful review and curation, and routine researcher variability, which likely passes through the research process undetected. The latter consists of undeliberate actions, often taken within epistemological, idiosyncratic, or institutional constraints, that cause variability. We offer a theoretical discussion and basic formal models of the uncertainty resulting from researcher variability. We then report results of an experiment testing this variability by crowdsourcing researchers to conduct a replication with the simple goal of verifying an original study. Giving researchers as few decisions to make as possible gave us the greatest chance to observe and distinguish routine researcher variability – the form that potentially threatens the (meta-)reliability of replications, if not of research in general. This experiment also allows us to say something about how many replicators are necessary to achieve reliability. Moreover, we identify the importance of transparency and the features of the research process that are most likely to lead to variation in results.
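As a stylized illustration of the logic behind the “how many replicators” question (an assumption-laden sketch, not the paper’s actual formal model): if each replicator’s estimate deviates from the true quantity only by idiosyncratic researcher noise, that noise averages out as replicators are added.

```latex
% Stylized sketch, not the paper's model: i.i.d. researcher noise averages out over k replicators.
\[
  \hat{\theta}_i = \theta + u_i, \qquad \mathbb{E}[u_i] = 0, \quad \operatorname{Var}(u_i) = \sigma_u^2,
\]
\[
  \operatorname{Var}\!\left(\tfrac{1}{k}\sum_{i=1}^{k} \hat{\theta}_i \,\Big|\, \theta\right) = \frac{\sigma_u^2}{k},
  \qquad \text{so } k \ge \sigma_u^2 / \tau^2 \text{ keeps researcher-induced variance below a tolerance } \tau^2.
\]
```

On this reading, routine researcher variability corresponds to a nonzero \(\sigma_u^2\) that careful review cannot remove, which a crowdsourced replication can help estimate empirically.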
How Do Citizens React to AI in Political Campaigns?
Andreas Jungherr, Adrian Rauchfleisch, Alexander Wuttke