Humanists have relied on computer science since the 1990s, when we started using search algorithms to sort sources by relevance to our hypotheses. If we don’t often reflect on the implications of this practice, it’s perhaps because we frame algorithms as “tools” — a metaphor that minimizes their intellectual content. To reflect on the interpretive significance of algorithms, we may need to converse with disciplines that understand them as meaningful models or principled learning strategies. We may need computer science, in other words, not as a bag of tricks but as a theoretical interlocutor. What might that conversation look like? I’ll flesh out some possibilities, briefly describing collaborative research with David Bamman (CS, Carnegie Mellon) on characters in nineteenth-century fiction. I’ll also acknowledge some of the barriers that make this conversation risky.
Ted Underwood is Associate Professor of English at the University of Illinois, Urbana-Champaign, and the author of two books on eighteenth- and nineteenth-century literary history, including Why Literary Periods Mattered (Stanford, 2013). He is currently developing models of genre in eighteenth- and nineteenth-century books, supported by a Digital Humanities Start-Up Grant from the NEH and an ACLS Digital Innovation Fellowship. A collaborative essay with Andrew Goldstone, topic-modeling the history of literary scholarship, is forthcoming in New Literary History.