On Literary AI

The Literary History of Artificial Intelligence is a collaboration between the Columbia English Department, the Columbia University Rare Book & Manuscript Library, and Columbia University’s Digital Scholarship department.

This exhibition explores the long, shared history of literature and computation through the Columbia Library’s holdings. We present texts that participate in early debates over whether writing was a gift to be appreciated or a skill to be honed, and whether that skill could be taught and even reduced to repeatable algorithms. Is writing a craft like any other, one that can be learned and taught? Following a timeline from circa 1890–1970, the exhibition explores professional manuals, devices, and techniques that promised to make writing easier, and even to automate it. The Literary History of AI showcases examples of algorithmic composition, such as prose and poetry written by machines, alongside literature written with the aid of algorithmic and combinatorial devices. The exhibition tracks two broad stories in the literary history of AI: production and analysis.

AI & Literary Production: Romance, Mystery, and Defense Aeronautics

The first story follows experiments in the genesis of literature as a professional and technical craft, facilitated by writing manuals, guides for churning out mystery or romance plots, such as Wycliffe Hill’s Plot Genie series, and plot generators such as William Wallace Cook’s Plotto. Automated text generation catalyzed a wave of early 20th-century patents, from Arthur Blanchard’s “movie-writer” machine to Henrietta Rose Montague’s “name selector.” Some how-to manuals incorporated simple plot generators, with spinning devices or automatic number generators meant to guide would-be novelists past writer’s block. We follow these professional how-to manuals into mid-twentieth-century computer experimentation in algorithmic and automated writing, from Christopher Strachey, who wrote a love-letter program in the early 1950s, to Margaret Masterman, a linguist who pioneered the use of computers to aid automatic translation. The mid-twentieth century also saw a foundational question arise: should programs be written based on frames or on grammar? The divide drew much of its vocabulary from comparisons between literary genres and linguistic rules. Were situational specifics or universal linguistic structures the better way to teach text-generating computers to write? This debate would influence early experiments with chat bots, baseball analytics, defense training programs, and matchmaking algorithms.

The Literary History of AI showcases how early 20th-century popular texts for aspiring screenwriters, romance novelists, and pulp fictioneers shaped the contemporary, machine-learning-based tools used to automate aviation accident reports and defense logistics.

AI & Literary Analysis: Character Types, Story Pieces, Chat Bots, and Medical Care

The second story tracks the use of algorithms to analyze literature, highlighting works ranging from Columbia alumna Harriot Fansler’s Types of Prose Narratives, which she called a “textbook for the story writer,” to Georges Polti’s Les trente-six situations dramatiques (1895), which catalogued what he theorized were the 36 timeless situations that structure human experience, from vengeance and retribution to love and sacrifice. This section of the exhibition includes analyses and compendia of folk tales; diagrams of plot structures, including Harry Stephen Keeler’s “web-work” designs; and Carolyn Wells’s distillation of the mystery novel.

The interest in narrative analysis and plot visualization also catalyzed the development of new techniques for organizing story elements. Databases, scrapbooks, and notecards prefigured the zettelkasten, Evernote, and Google Docs. Information storage was increasingly understood to be central both to analysis and to composition. These materials illustrate how early twentieth-century techniques for generating prose anticipated the advances in natural language processing that today animate narratively “intelligent” bots such as Apple’s Siri and Amazon’s Alexa and influence interactive and automated forms of medical diagnostics.