Critical Code Studies and the electronic book review: An Introduction

Mark C. Marino

Mark C. Marino explains the rationale for the Critical Code Studies Working Group, a six-week experiment in using social media for collaborative academic production. Marino also analyzes the first week’s discussion, which focused on debates about what it means to read “code as text.”

Ryan Brooks:

This essay is a general introduction to a series on Critical Code Studies distilled from a six-week online discussion. As each week is published on ebr, it will be indexed here.

Ryan Brooks:

In “Interferences: [Net.Writing] and the Practice of Codework,” Rita Raley analyzes the poetics of Mez’s “neologistic net.wurked language… m[ez]ang.elle,” which incorporates “made-up code language as a mode of artistic composition and everyday communication.”


“Critical Code Studies starts here.”

That was the tagline of the Critical Code Studies Working Group (CCSWG), a gathering of over 100 scholars from countries across the globe for an applied experiment in field formation. The Working Group met over the course of six weeks, beginning in February 2010, to engage the work of Critical Code Studies.

As we defined it in the early days of the CCS blog, Critical Code Studies is the application of hermeneutics to the interpretation of the extra-functional significance of computer source code. It is a study that follows the developments of Software Studies and Platform Studies into the layer of the code. In their oft-taught text, Structure and Interpretation of Computer Programs, Harold Abelson, Gerald Jay Sussman, and Julie Sussman declare, “Underlying our approach to this subject is our conviction that ‘computer science’ is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think” (xvi). Computer science in this context is, as they put it, the study of procedural epistemology. While books like Noah Wardrip-Fruin’s Expressive Processing and Ian Bogost’s Unit Operations have begun to outline ways for discussing those processes, Critical Code Studies looks to examine more closely the particular symbols used to express those procedural ideas.

The electronic book review published the initial essay on CCS four years before the 2010 CCSWG. After convening the blog, I began collaborating on methodologies and approaches and participated in discussions of these at the Society for Literature, Science, and the Arts (SLSA), Digital Humanities, the Modern Language Association, the Electronic Literature Organization, and elsewhere. These productive conversations made clear that it was not enough simply to assert that code can be read. It was time to establish and demonstrate how to interpret code productively. CCS needed a talented group of scholars working together in intense collaborative sessions to develop approaches based on their interests and built on their experience. We had found the ocean and now had to fashion our craft, or perhaps even a manner of sailing. What would these new methodologies be? How would they be distinct from or connected to other technocultural interpretive projects? The CCSWG arose out of these questions.

The working group represented the first formalized meeting to develop methodologies specifically for CCS. However, I expect that this working group would have been far less productive without the electronic book review. For in offering to publish the discussion threads of the working group, ebr transformed the heated discussions and creative exchanges from the late night banter of a sequestered research group into a reviewed and edited scholarly publication, bringing the threads (or ripostes) to the eyes of many others. The working group could then proceed in its Ning-island retreat, trusting that the best fruits of its frenzy could be brought back to the mainland. And that has led here.

The CCSWG was a six-week experiment in alternative academic production. It was originally instigated and organized by founding member Max Feinstein. Under the guidance of David Parry, the initial proposal for an invited discussion group evolved into an open call for an online seminar. To add structure and sweetness, each week featured a different invited guest speaker hosting a week-long discussion thread on a distinct theme.

As the organizing chair, I led off with the thread that follows. The second week’s thread was hosted by Jeremy Douglass, who has been involved in CCS from its origins on our collaborative blog Writer Response Theory. In his presentation, Douglass shifted the discussion toward a broader world of critical code studies “in the wild,” calling on the working group for a “code scavenger hunt” to expand our bestiary of code criticism. The third week took an innovative form, featuring Dennis Jerz, whose work on William Crowther’s ADVENTURE serves as a model for CCS projects. To build on his work, we published the complete code of Crowther’s program in a group-editable text document. This project of collaboratively annotating a piece of code for the purpose of explication and exegesis was a first of its kind, and it, too, will serve as a model for the ways in which scholars read code together. In the fourth week, Wendy Hui Kyong Chun presented the first chapter of her forthcoming book, Programmed Visions: Software and Memory, a selection which challenged the fetishization of source code, or “sourcery” as she calls it. In the fifth week, Stephen Ramsay displayed virtuoso video editing in a “live” reading of live coding, one that demonstrated the process of developing a CCS reading, in the spirit of his object of study, on the fly. The sixth and final week brought Mez Breeze, creator of the creole language mezangelle. Her text for the week was a codework she had created for the occasion.

In addition to the weekly discussions, members began individual “Code Critiques,” threads dedicated to the analysis of a single coding object, amassing thirty-one in all. One of those, “Random Mazes,” submitted by Nick Montfort, is now on course to become a collaboratively authored book, written by a twelve-person collective, with MIT Press. The collective’s other members include Patsy Baudoin, John Bell, Ian Bogost, Jeremy Douglass, Mary Flanagan, Michael Mateas, Casey Reas, Warren Sack, Mark Sample, and Noah Vawter. The book takes as its working title its single-line object of study: 10 PRINT CHR$(205.5+RND(1)); : GOTO 10, a line that, when run, produces a pseudo-random maze pattern scrolling down the screen of a Commodore 64. That program was also the subject of my recent essay in Loss Pequeño Glazier’s new journal, Emerging Language Practices. The generativity of that tiny fragment of code points to the vast potential for explication across the archives of source code. No doubt, other code critiques from those sessions will appear in essays and conference presentations, such as those presented at the CCS conference held at the University of Southern California in July and at the upcoming MLA 2011 panel entitled “Close Reading the Digital,” which features several code-based critiques.
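The generative effect of that BASIC one-liner can be approximated in a few lines of modern code. Below is a minimal sketch in Python; the function name ten_print is my own invention, and plain ASCII slashes stand in for PETSCII characters 205 and 206, the two diagonal line glyphs the Commodore 64 actually prints:

```python
import random

# Approximation of the Commodore 64 BASIC one-liner:
#   10 PRINT CHR$(205.5+RND(1)); : GOTO 10
# RND(1) returns a value in [0, 1), so 205.5 + RND(1) selects one of
# the two PETSCII diagonal characters (codes 205 and 206) at random,
# producing an endless maze-like pattern. Plain "\" and "/" stand in here.
def ten_print(width=40, rows=10):
    """Return a maze-like block of randomly chosen diagonals."""
    return "\n".join(
        "".join(random.choice("\\/") for _ in range(width))
        for _ in range(rows)
    )

print(ten_print())
```

Unlike the original, which loops forever via GOTO, this sketch stops after a fixed number of rows so the output can be inspected.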

On the First Week

When I developed the presentation for the first week, I truly believed CCS had already arrived. That is to say, I was under the impression that the community of new media scholars had accepted the application of hermeneutics to the study of code as a given and that, thus, we could call a kind of “open season” on code. The success of conference presentations, the number of people joining the working group, and the positive feedback from critics and editors had led me to believe that the working group could begin its work without debate, or at least that members would debate competing interpretations rather than the risks, threats, and dangers posed by interpretation itself. I imagined the equivalent of English literary seminars or Communication symposia, fueled by coffee and Continental critical theory, conjecturing on the meaning of lines of code as we had once debated the meaning of Ulysses, the Labyrinth of Knossos, Pac-Man, Flight Paths, cellular automata, and the Bonaventure Hotel. A brief examination of the opening remarks of my video should convey as much: the touch on examples is light, the tone a bit heady, the vision through glasses tinted Internet-blue. However, within the first few exchanges it became clear that the essential methodologies had yet to be negotiated and formalized, and that if another round of science wars were to be averted, anything-goes was a goner.


One of the strongest reactions appears in the ambivalence of Gabriel Menotti Gonring, who writes, “I wasn’t expecting to see something so close to a critical reading applied to code, but the results look promising!” But a moment later, he offers to play the “devil’s advocate and wonder if this methodology cannot lead to the old problem of mistaking code with text, and treating it as a system of metaphors to be interpreted / discourses to be unveiled.” Barbara Hui echoes this critique, writing that she is “most ambivalent about [….] the move to read lines of code as text, i.e. computer code as a sort of poetry or metaphor.” José Carlos Silvestre picks up her critique, “borrowing from [Claus] Pias,” to call the practice “hermeneutics forgetful-of-technics” or “technik-vergessene Hermeneutik.” Finally, Federica Frabetti raises the specter of the potential “violence of CCS” in interpreting code as though it were any other kind of semiotic object.

Recurring throughout was what I call the “programmer’s objection,” in which those who have more experience programming, or who even make a living programming or teaching programming, worry about making “too much” of particular lines of code. Is there a violence in interpreting code metaphorically? Hui identifies the source of this anxiety most presciently when she writes, “Perhaps programmers acquire a different/additional kind of literacy that is very difficult to ignore once you become fluent in it.” Lines were quickly drawn in the discussion between those who seemed to be abstracting code out of its functional significance and those who wanted to pay closer attention to the material and formal effects the code would have when processed. Critics were cast, or self-identified, either as interpreters ready to make merry or as programmers forced to defend their ground. Such a division, real or merely staged, reminded me of my experience studying Ancient Greek and Spanish: my early lessons led me to notice cultural difference while I was learning vocabulary, whereas later lessons had me concentrating on getting the declensions and the subjuntivo right. “Getting it right” is a mode in which the communicator focuses on functionality, on syntactic accuracy, rather than reflecting on ramifications and implications. In this state, a language learner wants to be processed accurately by the system. I would hardly argue that the person who wants to be fluent will be less capable of finding meaning, but focusing on achieving legibility within the system involves a different use of attention than conjecturing about significance.

Equally, I am reminded of Peter J. Bentley, who discusses his own programmer’s perspective in “The Meaning of Code.” He writes,

When you’ve been programming for long enough, when you’ve grown up programming computers, you think in a subtly different way…. You become used to breaking down problems into smaller, easier parts….But code can also dehumanize a person. There is no subtlety, no humor, no scope for emotion in code….Code is so literal, so unambiguous, that it takes a while to train a mind to think in the same way. These limitations of code can produce side-effects in people that write it - a joke is lost…an ambiguity the cause of excessive confusion. (33)

Bentley is perhaps too self-effacing when including this last assessment of a programmer’s sense of meaning. Certainly, he is not calling programmers humorless. The co-evolution of the Internet and techy humor sites, such as Penny Arcade, quickly refutes that reading. However, there may be some value in asking whether working so closely with unambiguous code points one away from pursuing potentially ambiguous connotations and meanings. As Alan J. Perlis has written in his foreword to Abelson and Sussman, “The computer is a harsh taskmaster. Its programs must be correct and what we wish to say must be said accurately in every detail” (x). Certainly, CCS must acknowledge the unambiguous denotations of computer source code before it can begin to approach the connotations.

According to the discussants, code readings require an investigation of the material context of the code before moving into more symbolic interpretations. That context includes the historical background of the code, the manner in which the code operates, the style and paradigm in which it was written, the history and culture of the language used, and how it interacts with other software including the operating system. Furthermore, the investigation of the operation of the code should pursue the run-time effects of the particular lines of the code and any distinctions in the processing of that code on different hardware. These technical and historical specifications outline perhaps just the most basic elements necessary to build toward interpretation.

Interpretation is a humanistic activity that is easy to attack and even easier to undervalue in a field of study so fully beholden to progress narratives and the tyranny of Moore’s Law. Interpretation is non-falsifiable, subjective, and fuzzy. It is not an activity with sex appeal to granting bodies or the Department of Defense. Other scholarly endeavors, such as celebrating the history of code, studying the sociology of those who coded, or documenting the process of discovery, are far easier to present as “valuable” across the disciplines. As the conversation unfolded in the CCSWG, the nature of the interventions became clear. Self-identified programmers such as Hui and Silvestre were not arguing that code could not be interpreted symbolically, as they assured us repeatedly. However, in any attempt to distinguish CCS from other areas, such as Software Studies, they wanted to know what could be gained by looking at the code in an analysis more fully contextualized within the functioning of the program, its syntactical and logical conventions, the other programs interacting with it, and the processes produced. In short, within the “programmer’s objection” was a call for a much more rigorous approach to code prior to the subsequent interpretive moves.

Alternative Readings

Like any worthwhile seminar-style discussion, Week 1 evoked/provoked/produced many ways of interpreting the code under discussion:

  • Micha Cárdenas offered a method of reading a line closely by loosely-but-closely translating the source code into a kind of expressive pseudo-code.
  • Jonathan Cohn found the annual “dynabyte” ad to be a hook into a nationalistic reading of the worm.
  • Evan Buswell argued we should be reading MS Outlook and the Microsoft Operating System, since these programs are the source of the vulnerability, pointing to the way the worm takes advantage of the “SpecialFolder” of the OS.
  • Hugh Cayless saw the code as “junk” and proposed reflecting on how such a distinction could be drawn.
  • Marisa wondered if the mouse click should be considered part of the code.
  • Jennifer Lieberman suggested reading the code by contrasting the syntagmatic with the paradigmatic (“the other possibilities or ideas represented by what is not on the page/screen”).


This week demonstrated the desire of scholars to maintain a rigorous attention to the material specifications of the code, to read the code fully informed by how it might resonate within cultures of coding. It also led me to clarify my statements on reading “code as text,” as framed in the ebr article. Intended as a direct response to John Cayley’s claim in “The Code is not the Text (unless it is the Text),” my phrasing, reading “code as text,” actually confused the issue. Code for CCS is not text in the sense of a poem, a collection of signs standing alone. Code is the text in the sense of Cultural Studies: the object of study within its material, historical context. (For a thoughtful reflection on text and context in Cultural Studies, see Urpo Kovala, “Cultural Studies and Cultural Text Analysis,” CLCWeb: Comparative Literature and Culture 4.4 [2002].) While at times such analysis leads to interpretive moves often applied to text, code should not be reduced to its symbolic representation. As the week’s conversation clarified, code is a collection of un-processed signs that are one component of much larger systems that include the operating system, compiler, hardware, et cetera. Wendy Chun’s week pursues this topic further as she interrogates the perils of fetishizing code. As Federica writes, while she shares “Mark’s emphasis on always going back to code” and “his puzzlement at/impatience for ‘general’ talk on code which just uses code as illustrative/decorative,” she feels “so utterly limited by reading ‘just’ code (and by posting de-contextualized pieces of code).” In other words, Critical Code Studies should not be confined merely to the study of the set of symbols in the code. Those symbols are the trail markers of a much more extensive analysis.

Secondly, another word from the original ebr article became a sticking point: extra-functional. On the one hand, Barbara Hui and others expressed concerns about moving away from the functional, of leaving functionality behind. On the other hand, when readings seemed to focus too much on the overall processes and too little on the effects of the code on the machine at run-time, they seemed more like Software Studies. I would like to clarify that extra-functional does not mean detached from function but instead growing out of and beyond the functional effects. As many stated here, the closest attention must be to the effects of the code, to how the symbols are interpreted by the machine; however, that functionality is one part of the larger ways in which code becomes meaningful to its readers.

This tension continued to play out through the following weeks of the working group, and I would argue that focusing on the code is necessary while we are developing specific approaches that prove most productive when discussing it. In any given work of new media criticism, code critiques are likely to be just one facet, but for now CCS needs to attend to the work of formalizing some procedures for analyzing code. This first week establishes several of the key sites on which those processes will be developed.

Works Cited

Abelson, Harold, and Gerald Jay Sussman, with Julie Sussman. Structure and Interpretation of Computer Programs. Cambridge, MA: MIT Press, 1985.

Bentley, Peter J. “The Meaning of Code.” Ars Electronica 2003: Code: The Language of Our Time. Hatje Cantz Publishers, 2003.

Bogost, Ian. Unit Operations: An Approach to Videogame Criticism. The MIT Press, 2008. Print.

Marino, Mark C. “Critical Code Studies.” electronic book review, electropoetics (2006). Web. 25 Aug. 2010.

Perlis, Alan J. Foreword. Structure and Interpretation of Computer Programs. By Harold Abelson and Gerald Jay Sussman with Julie Sussman. Cambridge, MA: MIT Press, 1985.

Wardrip-Fruin, Noah. Expressive Processing: Digital Fictions, Computer Games, and Software Studies. The MIT Press, 2009.