
AI and Close Reading
Does AI Help Us Derive Pleasure from Interpreting Culture
— or Just Perform Better?
A Randomized Experiment on Close Reading Poems with Exposure to AI Interpretation
AI demonstrates unprecedented reasoning capabilities, but its increasing integration into human reasoning via automated reading and summarization has provoked debate about its use for cultural interpretation. Close reading — the practice of understanding, analyzing, and critiquing cultural texts for pleasure — is a skill at the core of such interpretation, and has traditionally been seen as exclusive to humans. To test AI's impact on close reading, in terms of both interpretive performance and pleasure, we conducted a preregistered randomized experiment (n=400) comparing no AI assistance with assistance in the form of a single AI-generated interpretation or multiple AI-generated interpretations. We found that a single AI interpretation boosted both performance and pleasure, while multiple AI interpretations improved only performance. Notably, these effects varied with the experience readers brought to the task. Further exploration revealed a trade-off: participants who relied heavily on AI performed better on the task but reported lower pleasure. Our results contribute to the discussion of whether and how to calibrate AI assistance for cultural interpretation and close reading, in ways that recognize different kinds of readers.
Can AI Help Us Interpret Culture?
Reading poems, watching movies, listening to songs — we interpret culture as a form of everyday entertainment, and these activities demand focused and complex forms of interpretive attention. The skill at the core of such interpretation is close reading: the ability to understand, interpret, and critique cultural works (Abrams et al.; Long and So) in textual form like poems and novels, or in other media like songs and films. Close reading is widely taught in humanities education, and people also practice it informally every day as a social skill — when a lyric hits differently, when a film scene lingers, when you argue with a friend about what a show was really about. It is a marker of social engagement and cultural awareness (Carbaugh; Geertz).
Now AI can do this too. LLMs are capable of interpretive reasoning, generating literary interpretations that are detailed and sophisticated. The possibility that AI might automate close reading has provoked fierce debate among writers, creators, journalists, humanities professors, and everyday consumers of culture (Naquin; Watkins). Many fear that AI will corrupt or diminish what has until now been an exclusively human skill. While few doubt that AI is useful for instrumental tasks like coding, there is much more hesitation and anxiety around cultural interpretation, which is seen as a quintessentially human activity. If the point of reading a poem is the pleasure it brings — the personal meaning-making, the feeling of discovery, the satisfaction of figuring it out — then what is to be gained by having AI do it for you, or even just alongside you?
AI is already being integrated into how people consume cultural texts online. Poetry platforms like All Poetry, for instance, have started showing AI-generated interpretations alongside poems. The question is not just whether AI will be part of cultural interpretation, but whether AI assistance can support this activity in a beneficial way. To find out, we ran a randomized experiment examining how differing amounts of AI assistance influence both interpretive performance and the pleasure derived from close reading.
What We Tested
We ran a randomized experiment with 400 crowdworkers as lay readers, each randomly assigned to one of three conditions. In the Control condition (left in the figure below), no AI assistance was provided. In the AI-Single condition (center in the figure below), one AI-generated interpretation was shown alongside the poem. In the AI-Multiple condition (right in the figure below), three AI-generated interpretations were provided, stacked on top of each other, with one visible by default and the other two accessible via buttons.

Each participant read three poems in random order: “Love Poem” by Linda Pastan, “Dusting” by Marilyn Nelson, and “Theme for English B” by Langston Hughes. For each poem, they completed interpretation tasks adapted from the Critical Reader’s Interpretive Toolkit, which involved identifying stylistic features and explaining their effects, a necessary first step and foundation for effective close reading.
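The assignment procedure described above can be sketched as follows. This is a minimal illustration of the design, not the authors' actual implementation, and it uses simple (unbalanced) randomization; the study may have balanced condition sizes.

```python
import random

# Condition and poem names as described in the text
CONDITIONS = ["Control", "AI-Single", "AI-Multiple"]
POEMS = ["Love Poem", "Dusting", "Theme for English B"]

def assign_participant(rng: random.Random) -> dict:
    """Assign one participant a between-subjects condition and a poem order."""
    condition = rng.choice(CONDITIONS)        # one of three conditions
    order = rng.sample(POEMS, k=len(POEMS))   # each poem shown once, random order
    return {"condition": condition, "poem_order": order}

rng = random.Random(0)  # seeded for reproducibility of the sketch
participants = [assign_participant(rng) for _ in range(400)]
```

Each participant thus sees all three poems exactly once, while the form of AI assistance varies only between participants.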
We measured two sets of outcomes:
Interpretive Performance — Feature Identification, Interpretation Quality, and Writing Quality.
Subjective Experience — building on close reading scholarship (Abrams et al.; Bialostosky; Guillory) and intrinsic rewards theory (Csikszentmihalyi), we conceptualize the pleasure of close reading as arising from discovering personally resonant meanings, enjoying the interpretive puzzle-solving process, and feeling empowered to make sense of complex texts. We capture these three sources of pleasure as three interrelated subjective experience constructs: Appreciation, Enjoyment, and Self-Efficacy.
See the paper for more details.
Summary of What We Found
Here we summarize our key findings. See the paper for more details.
Interpretive Performance
AI assistance consistently improved participants’ close reading performance across all Interpretive Performance measures: Feature Identification, Interpretation Quality, and Writing Quality. A single AI interpretation showed larger effect sizes than multiple interpretations.
Subjective Experience
The picture is more nuanced for the pleasure derived from close reading. A single AI interpretation improved participants' appreciation, enjoyment, and self-efficacy, while multiple AI interpretations showed no benefits.
Furthermore, when we considered relative expertise (operationalized as experience with college-level humanities coursework), a single AI interpretation enhanced pleasure only for inexperienced readers, while experienced readers showed no such benefit. Multiple AI interpretations even reduced experienced readers’ self-efficacy.
How Participants Engaged with AI
We further explored how participants made use of their assigned AI assistance through behavioral logs. A considerable proportion of those exposed to multiple AI interpretations did not view all of them (42.1% viewed one and 12.3% viewed two of the three). In an exploratory analysis adjusting for the number of interpretations viewed, we found that the mere presence of multiple AI interpretations may itself reduce the pleasure participants derive from close reading. We also examined copying behavior and textual overlap between participants' responses and the AI interpretations they were exposed to, revealing a performance-pleasure trade-off: those who heavily incorporated AI into their responses achieved scores closer to the AI benchmark but consistently reported lower pleasure.
These observations suggest that readers may naturally resist letting AI fully take over their interpretive work. Modest exposure to AI assistance can benefit both performance and pleasure, while too much AI assistance can diminish these pleasurable benefits.
BibTeX
© 2026 by Jiayin Zhi.