Marli Dunietz

Research

 

Voting in the Dark

electoral behavior
strategic voting
asymmetric information
media trust

How do beliefs about media partisanship influence how uninformed voters vote? I present theory and experimental evidence showing that beliefs about the political alignment of news media and its audience shape whether uninformed voters abstain or guess in down-ballot elections. I find that uninformed voters strategically guess rather than abstain when they believe informed elites have misaligned preferences. Across three studies, this strategic behavior is more consistent among Trump voters than among Harris voters, resulting in a down-ballot advantage.

📄 View Working Paper (PDF)

 

Causal Models of Partisan Minds

laboratory experiment
political cognition
methodology

Do Americans see partisanship as the source of other citizens’ policy preferences, or do they believe policy preferences determine which party others choose? These two worldviews induce divergent responses to information about others’ issue concerns: if party is a cause, then different issue concerns are correlated outcomes, but if it is a consequence, then different issues are substitute explanations. Using a laboratory experiment, I find that most individuals consistently interpret others’ actions according to one of these narratives. However, as they learn about others’ lack of ideological constraint, they increasingly attribute others’ actions to issue motivations.

📄 View Working Paper (PDF)

September 28, 2024
 

Deliberative Discourse and Information Transmission Under Social Pressure

political discussion
communication
partisan norms
self-censorship
social desirability bias

Social rewards and sanctions influence which political opinions people express or withhold, and may therefore distort perceptions of public opinion if observers fail to account for these incentives. In two online discussion experiments, we assess how well observers (Decoders) can infer the distribution of private opinions among potential speakers (Encoders). Contrary to expectations, we find that Decoders are most accurate when our social incentives are strongest. Decoders are relatively capable of recognizing and adjusting for social desirability bias and self-censorship, compared with other sources of distortion, such as reluctance to share unoriginal opinions, which arise when we minimize peer incentives.

September 15, 2024
with Valeria Burdea
 

Consensus is a Cue for Quality

survey experiment
perceived polarization
bipartisanship

Negative partisanship theories suggest that when parties in Congress frequently disagree on policy, the proposing party’s base will view out-party support as evidence of compromised values. Models of learning from biased information, by contrast, suggest that unanticipated out-party support conveys high policy quality. We test the predictions of the two models using experiments embedded in representative sample surveys and find support for consensus as a cue for quality.

📄 View Working Paper (PDF)

August 18, 2024
with Matthew Tarpey
 

The slider task: an example of restricted inference on incentive effects

experimental economics
methodology

Real-effort experiments are frequently used to examine responses to incentives. For a real-effort task to be well suited to such an exercise, its measurable output must be sufficiently elastic over the incentives considered. The popular slider task of Gill and Prowse (Am Econ Rev 102(1):469–503, 2012) has been characterized as satisfying this requirement, and the task is increasingly used to investigate responses to incentives. However, a between-subject examination of the slider task’s response to incentives has not been conducted. We provide such an examination with three different piece-rate incentives: half a cent, two cents, and eight cents per slider completed. We find only a small increase in performance: despite a 1500% increase in incentives, output increases by only 5%. Given this inelastic response, we caution that at typical experimental sample sizes and incentive levels, the slider task is unlikely to demonstrate a meaningful and statistically significant performance response.

🔗 Journal of the Economic Science Association (2016)

May 14, 2016
with Felipe Araujo and others