Transformative Reading and Writing Synthetic Archives with Language Models
This paper reflects on Electronic Literature projects I created between 2017 and 2020 by interrogating how each project collaborates with an increasingly complex non-human component. Riffing on Donna Haraway's concepts of "significant otherness" and making kin, I speculate on how the significance of the otherness being engaged differs across projects using methods based on combinatorics and chance, statistical models, and vector semantics (contemporary neural-network-based language models such as GPT-2). While recognizing that each approach involves a reduction in human agency, this reflective paper focuses on the increasing complexity of the systems to which that agency is relinquished and on how to present this relationship between human and non-human actors. The reflection culminates in a series of projects using OpenAI's GPT-2, through which I introduce the need for a self-reflexive "transformative reading interface" as a concrete instantiation of Katherine Hayles's concept of a "technotext." A transformative reading interface links a corpus of text to the text generated by a language model based on that corpus. Such an interface provides a source of noisy creativity for writing and a way to explore the materiality of contemporary language models through reading, while interrogating and respecting the posthuman nature of these artifacts.
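To make the core mechanism concrete, the following is a minimal sketch of what a transformative reading interface might do: generate text with a language model and link each generated sentence back to its most similar sentence in the source corpus. It is illustrative only, not the paper's implementation; the Hugging Face `transformers` pipeline, the base `gpt2` checkpoint (rather than a model fine-tuned on the corpus), the toy corpus, and the `difflib` similarity measure are all assumptions made for the sake of the sketch.

```python
# Hypothetical sketch of a "transformative reading interface": generate text
# with GPT-2, then surface the link between each generated sentence and the
# corpus sentence it most resembles. All names and parameters here are
# illustrative assumptions, not the paper's actual system.
from difflib import SequenceMatcher

from transformers import pipeline

# A toy stand-in for the corpus the model would be fine-tuned on.
corpus_sentences = [
    "The archive remembers what the reader forgets.",
    "Every text is a machine for generating other texts.",
    "Reading is rewriting under another name.",
]

# Seed generation with a fragment of the corpus; in a full project the
# model itself would be fine-tuned on that corpus.
generator = pipeline("text-generation", model="gpt2")
prompt = corpus_sentences[0]
result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)
generated = result[0]["generated_text"]

def closest_source(sentence: str) -> tuple[str, float]:
    """Return the corpus sentence most similar to a generated sentence."""
    scored = [
        (src, SequenceMatcher(None, sentence.lower(), src.lower()).ratio())
        for src in corpus_sentences
    ]
    return max(scored, key=lambda pair: pair[1])

# Expose the corpus-to-generation link that the interface would visualize,
# letting the reader see the generated text against its nearest sources.
for sent in generated.split("."):
    sent = sent.strip()
    if sent:
        source, score = closest_source(sent)
        print(f"generated: {sent!r}\n  nearest source ({score:.2f}): {source!r}")
```

A real interface would replace the string-overlap measure with something richer (for instance, embedding similarity) and present the pairing interactively, but even this reduced form shows the two-way movement the abstract describes: the corpus shapes the generation, and the generation sends the reader back into the corpus.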