Critical Code Studies Conference- Week Five Discussion

David Shepard

David Shepard leads off the discussion of Stephen Ramsay’s live reading of Andrew Sorensen’s “Strange Places.” His initial contribution is followed by posts from Amanda French, Mark Marino, Max Feinstein, Jeremy Douglass, Daren Chapin, John Bell, Jeff Nyhoff, Jennifer Lieberman, and Stephen Ramsay, as well as Andrew Sorensen himself.

In week five, Stephen Ramsay performed a live reading of a livecoding performance: in a video, he presented spontaneous commentary over a screencast of Andrew Sorensen’s “Strange Places,” a piece Ramsay had never seen before. The screencast showed Sorensen using Impromptu, a LISP-based environment for musical performance that he had himself developed, to improvise a piece of music; Sorensen developed the piece’s musical themes by composing and editing code. The video allowed the audience to watch Sorensen write and edit his code in the Impromptu editor window. This presentation inspired a discussion that broke livecoding down into two overlapping issues: is it “live,” and is it “coding”?

The first half of “livecoding,” liveness, makes coding interesting by making the programmer and his or her process apparently accessible, in contrast to the anonymous distribution of most software via download or disc after thorough design and bug testing. The TOPLAP Manifesto states, “Give us access to the performer’s mind … the whole human instrument.” We never meet the programmers who develop the majority of the software we use, let alone see them at their desks typing; a programmer on a stage seems comparatively accessible, as do (presumably) his or her intent and mistakes.

But what does watching a programmer show? Though the discussion yielded quite a bit of access to the performer’s mind, as Sorensen himself joined in, most of the participants agreed with Sorensen’s own statement that he does not believe livecoding yields such insight, and focused on the performative quality of the coding instead. John Bell described the “miraculous” feeling of watching Sorensen’s act of coding, contrasting it with his own experience of the labor involved in building each component of a program. Livecoding is “live” by virtue of putting before an audience a programmer engaged in improvised composition.

What Bell’s miraculous feeling (and livecoding’s supposed spontaneity) also highlights is how livecoding requires complex layers of abstraction that simplify much of the development process: Sorensen’s custom-built programming environment, the Apple AudioUnits library, and preset routines such as the function “cosr” that Ramsay found opaque (“a macro that provides a wrapper for the standard cos function with some scaling and time stuff built-in,” according to Sorensen). Livecoding, done in minutes on a screen, differs from the analytical, iterative development process of most professional programming, relying heavily on prewritten code libraries to reduce the complexity of the coding process enough to perform before an audience. Livecoding (usually) involves no debugging because it depends on well-defined environments and well-tested libraries of routines, not to mention precise typing and a forgiving environment; attempting to compile Impromptu code with syntax errors produces error messages in a portion of the window hidden from view in Sorensen’s video.

The necessity of these foundations inspired Jeremy Douglass to ask, “At what point do we exit liveness? When we draw on scripted elements or libraries of things that previously ‘worked’?” Daren Chapin responded that “This kind of heuristic simplification is one of the primary goals of programming … Libraries, extensible classes, polymorphism, macros, code generators … all of these mechanisms aid us in transforming what once seemed like complex maps between domains into ones that are easy for us to reason about, so we can in turn build larger and more complex ones. It’s this constant interplay of building-out-and-abstracting-away that makes coding such a lively activity, and perhaps so simultaneously sundry and magical.” All programming employs abstractions that simplify complex problems; they make both livecoding and most serious application development possible; most programmers use at least their operating system’s API to build windowed applications, if not other code libraries.

For similar reasons, Jeff Nyhoff bolstered livecoding’s claim to liveness by arguing against conflating spontaneity with liveness, highlighting yet one more understanding of “live.” Just as not all jazz is improvised, most theater is live while following a preset script. Liveness in traditional drama is the actor’s ability to make a preexisting script seem naturalistic, as if he or she were the character speaking the words and feeling the emotions. Nyhoff consequently stated that “in much theatre and most programming, the script/code is authored through execution/performance. … Sorensen’s work … constitute[s] a kind of temporally compressed re-staging of the process of the composition, the programming.” Nyhoff emphasized that the liveness of livecoding lies in the performative and demonstrative quality of the coding, rather than in whether the text produced was composed entirely on the spot.

While something of a special case, then, livecoding raises broader issues for Critical Code Studies, especially questions related to the definition of programming and the visibility of code. As John Bell pointed out, livecoding further applies pressure to the valuation of “coding” over “scripting”: the latter is not considered “real” programming because of its use of high-level languages for small tasks. The Ruby on Rails framework was promoted using an “amazing and carefully scripted series of demo magic tricks,” in Jeremy Douglass’s words: in a terminal window projected for an audience, a few scripts generated a basic but complete blogging application. This demonstration blurred the lines between live and preplanned coding, and between programming in highly specialized environments like Impromptu and simply using a framework. What makes livecoding interesting, then, is the question of what coding and access to code really allow the reader; livecoding thus gets at the heart of Critical Code Studies’ investigation of what code means and does.

Reply by Amanda French on March 1, 2010 at 11:37am

I did notice that you made one or two comments about Sorensen changing the code along the lines of “Was that note not right?” – comments that sounded a bit odd to me given that the performance was so clearly improvisational. His changes, in other words, always looked to me to be changes for the heck of it, changes that constituted the performance, not changes striving for some ideal. The notion of “sheet music” doesn’t apply here, as it wouldn’t apply to a jazz musician or a bluegrass picker. Even the name of his environment, Impromptu, makes that point. It raises the question for me of whether a livecoding session that consisted simply of typing in an existing program would be as compelling – I think it would definitely have its points of interest, actually. Or what would the livecoding analog be to a non-improvisational live performance of music?

Reply by Mark Marino on March 1, 2010 at 2:49pm

Although I’m tempted to speak just about Sorensen’s code and your play-by-play, I find myself returning to an earlier moment in the video. As you set up live coding, you move from Holden Caulfield’s awe at the tympani player (whose music flies off the stand in your version) to this meditation on live instruments. Thinking of Amanda’s intervention, I can’t help but wonder what the sheet music may be. Could it be the specifications of the coding environment? Could it be some starter loops?

I like, too, this distinction Amanda is trying to flesh out about exactly what kind of “live” performance this is. (Any good articles on “live coding” for the bib?) During one cut, you say that performances seem more real, while the screen shows a reel-to-reel recorder (1:52). This Glenn Gould move establishes a central ironic tension in the video (with all its spliced-together, remediated film) within the “live” of the live coding, and perhaps also affects the executed code we’ve been discussing in Week 4.

In fact, this entire discussion seems to speak to Wendy’s chapter, as we see Sorensen’s magical code that executes as he changes it, when we marvel at his mysterious cosr command. Is this “the erasure of execution”? Is this the fetishization of coding? Is this sourcery? How is the code (un)like the tubes of the pan flute? Are there moments in this code where Sorensen seems more slave than wizard? What does it mean to be a slave of the coding environment you yourself built (I know: ask any code monkey, ask a Microserf)?

Also, I wanted to say how iconic this video is for CCS - the way we watch you build your reading live and unedited, observing aspects of the code, its effects on the output, and then developing your reading on how Sorensen is “playing the comments and parameters” or how he is ending his song by commenting it out. You engage with the programming language and environment, the programmer, the output and processes, as well as the code itself. You contextualize your examination in electronic music and live coding, while gesturing toward larger issues, such as the real and authenticity as well as programmer as performer/composer/musician. Maybe for some, our comments are taking the music out of the code, or hearing music in the windmill. I think, too, about all the on-the-fly interpretations we’ve been working to produce, as in Week 3. Here are the postcards we are sending even if we are not too sure how to address them.

Reply by Stephen Ramsay on March 1, 2010 at 3:20pm

I see what you’re saying, Amanda. Honestly, though, I think those moments in the film are more about my own conditioned response as a programmer. If I saw that first comment in a source file, I’d assume there was something wrong and the programmer was “commenting it out” so that it didn’t interfere – cauterizing the wound, as it were. I’ve done it a thousand times. It was really my first thought.

But of course, as you say, that’s not what Sorensen is doing at all (as became clear to me later on, when the comments become almost like keys on a flute or stops on an organ).

As far as improvisation goes, yes. But can’t you hit a “wrong note” while improvising? That is, a note that is “wrong,” not because it fails to conform to a preset pattern, but because you didn’t like it, or it didn’t work, or you changed your mind? I really had something like that in mind.

Here’s one thing, though: If I’m improvising on the harp, say, and I hit such a note, the moment is gone. But if I set up an oscillator to start generating some sound wave, I can change it and have it start doing something else. This seems to me a difference between computer music as it is usually conceived and what a bluegrass picker does (unless you really are “playing your laptop” like the gentleman demonstrating GarageBand’s “musical typing” feature in the video).


Reply by Amanda French on March 1, 2010 at 6:18pm

Sure, you can hit a wrong note in improvising, as I know you know, though any skilled improviser will leave a listener unsure as to whether that really was a “wrong” note. (I, by the way, am totally unable to improvise.) But I don’t quite get the distinction you’re making in the last paragraph between instrumental improvisation and computer music, if you’d care to elaborate.

Actually what I found myself thinking about was degrees of improvisation in music: at the symphony everyone’s usually reading sheet music, which they’d almost have to, because the music is so complex. But your basic rock band isn’t being any more improvisational than a symphony, usually, because they’ve just memorized what they’re playing. Some bands, of course, do improvise, especially jam bands. But jam bands and jazz bands and blues bands have really very simple structures (logic) that soloists improvise their complexities over: the musical foundation is simpler, and that’s what enables the improv.

But again, it came back to livecoding as an art, for me: I couldn’t, off the top of my head, think of a form of livecoding that would be analogous to a symphony. Probably it would have to be a massive group endeavor, like that of a symphony, where you take all the code for an existing program and get a hundred coders to type it in, live. With live unit tests, which I know you love. :)

Reply by Max Feinstein on March 2, 2010 at 4:50pm

I’ve isolated some Impromptu code and posted it here for a further examination of improvisation via the random function. I have to admit that posting this code is slightly uncomfortable because the snippet is so lifeless in this form. Compared to its traditional context, which involves continuous code execution, sounds, and constant modification from the author, this snippet is just a static chunk that doesn’t do anything. I suppose this critique is much like a photograph in that it captures a brief glimpse of something “alive” and allows viewers to experience the object in a different context. That said, here is an Impromptu example, complete with commentary, from our favorite free encyclopedia:

;; first define an instrument
(define dls (au:make-node "aumu" "dls " "appl"))
;; next connect dls to the default output node
(au:connect-node dls 0 *au:output-node* 0)
;; lastly update the audio graph to reflect the connection
(au:update-graph)

;; play one note by itself
(play-note (now) dls 60 80 (* *second* 1.0))

;; play three notes together
(dotimes (i 3)
   (play-note (now) dls (random 60 80) 80 (* *second* 1.0)))

;; play a looping sequence of random notes
(define loop
   (lambda (time)
      (play-note time dls (random 40 80) 80 1000)
      (callback (+ time 8000) 'loop (+ time 10000))))

;; start the loop
(loop (now))

;; stop the loop by defining loop to be null
(define loop '())

;; define a new loop to play a repeating sequence of notes
;; with a small random twist
(define loop
   (lambda (time pitches)
      (play-note time dls (car pitches) (random 40 80) 8000)
      (callback (+ time 4000) 'loop (+ time 5000)
                (if (null? (cdr pitches))
                    (list 60 63 62 (random 65 68) 68 59)
                    (cdr pitches)))))
(loop (now) '(60 63 62 67 68 59))

;; stop the loop by defining loop to be null
(define loop '())

My interest in this code is the prominent role that “random” plays in the piece. There’s an interesting tension between the rest of the code, which seems precise and systematic, and this intentional randomness generated by the computer. For me, music isn’t ordinarily composed “randomly,” nor are any sounds produced randomly. For example, every time a timpani player strikes his drum, the resulting sound is precisely what was expected. Perhaps the sound wasn’t what the musician intended, say if he missed hitting the desired spot on the drum, or if the head has steadily de-tuned from use, but these variables can all be predicted. On the other hand, when a livecoder calls for a random X or Y or Z, nobody can really predict what the computer will generate.

Of course, improvisational pieces can be random, but probably not in the same sense as random in the above code, for even improv pieces typically follow certain guidelines (chord progressions, rhythms, etc), as Amanda notes above. I’m curious if anyone else is intrigued by the implementation of “random” and what it introduces to livecoding that is absent from all other musical performances. Or if anyone would make the argument that instruments other than the computer (e.g., Impromptu) are also random?

Reply by Andrew Sorensen on March 3, 2010 at 4:02pm

Obviously randomness is, by definition, indeterminate. However, it can also be thought of as an abstraction layer. “Random” allows you to abstract away detail without requiring a complete model of the underlying process. This is hugely important in livecoding where your ability to implement complex processes is hindered by task domain (i.e. musical) temporal constraints. Of course “random” when used in my work is usually highly constrained - to a particular pitch class, rhythm set, etc.
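As a concrete (and entirely hypothetical) illustration of such constraint, outside Impromptu: restricting “random” to a pitch-class set keeps each choice unpredictable in the particular while guaranteeing the kind of result the performer expects. The Python sketch below uses invented names, and the pitch set is mine, not Sorensen’s:

```python
import random

# Constrain random pitch choices to a pitch-class set
# (an arbitrary minor-pentatonic flavour, relative to C).
PITCH_CLASSES = {0, 3, 5, 7, 10}

def random_pitch(low, high):
    # Choose any MIDI pitch in [low, high) whose pitch class
    # falls inside the constraint set.
    candidates = [p for p in range(low, high)
                  if p % 12 in PITCH_CLASSES]
    return random.choice(candidates)

# Each call is indeterminate, but every result is bounded:
for _ in range(20):
    p = random_pitch(60, 80)
    assert 60 <= p < 80 and p % 12 in PITCH_CLASSES
```

The indeterminacy survives, but, as Sorensen says, the probabilistic constraints give a very good understanding of the approximate result.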

Of course I often do build a model of the process under investigation which is stored as library code and called instead of “random”. However, in practice I have found that “random” is often sufficient - and has the added advantage of being generally applicable and easy for an audience to comprehend.

It’s worth pointing out that while nobody can predict precisely what the computer will generate, the probabilistic constraints imposed give me a very good understanding of the approximate result that I will get. When I absolutely need specifics I use a determinate process.

The constrained indeterminacy is part of the fun of working with generative systems, you never know exactly what you’re going to get. Part of the skill is in massaging an indeterminate system towards an aesthetically appealing outcome.

Reply by Mark Marino on March 4, 2010 at 9:18am

I was drawn to a particular line in your answer:

Of course I often do build a model of the process under investigation which is stored as library code and called instead of “random”. However, in practice I have found that “random” is often sufficient - and has the added advantage of being generally applicable and easy for an audience to comprehend.

I’m not entirely certain about the relationship between the library code and the random process. Can you elaborate on that?

More importantly, I’m interested in your attention to the audience’s comprehension, as that is an issue that seems to come up in CCS a lot, from the initial arguments that code is not meant for human readers (we’re past those, I think), to our more recent discussion of the display mechanisms through which we should analyze code (should it be more similar to the highlighting, color-coded interfaces programmers are using – or teletype and punch cards?).

As we saw in Stephen’s reading of your coding performance, audiences (without the benefit of an O’Reilly book at hand) might at times be lost in your code. I can see now that this is a central realm of play in “live coding,” that this is part of the delight, of watching the magician at work (Wendy Chun’s image of sourcery keeps returning!).

To what extent do you see this as a performance that you want to make accessible, and to what extent is the fun of live coding (both watching and performing) a game of catch-me-if-you-can?

Reply by Andrew Sorensen on March 4, 2010 at 10:40pm

Actually I’m not overly concerned with the audience’s ability to read the code. At least, it is always subservient to my ability to express my ideas as fluently as possible. That said, I do try to provide something that is reasonably transparent. It’s also important to keep in mind that audience understanding is multi-dimensional. In particular it’s worth bearing in mind that a programmer well-versed in Lisp but with no music theory knowledge may understand the syntax and semantics (program->process semantics) but fail to comprehend their relationship to the task domain (i.e. the musical outcome).

For me the projection of code is largely about building a trust relationship with the audience by displaying a level of engagement and human agency. Yes I am doing this live, and no I’m not just twiddling my thumbs–here’s the proof. Without the projection it becomes quite difficult to assess the level of human agency in laptop performance. After that initial trust is established, the code becomes less important. In fact, in a lot of my audiovisual performances (where visuals are drawn over the top of the code), it becomes harder to see the code as the performance progresses. Audiences don’t seem to mind because the early part of the performance has established the trust.

I agree with Stephen’s post that displaying the code doesn’t really give access to the performer’s mind - at least not in any deep sense.

Reply by Jeremy Douglass on March 8, 2010 at 11:37am

I very much appreciated the way that the coding environment functioned as “proof” in this video. For example, every time the screen briefly flashed orange the running code was being updated, correct?

For me, those orange flashes were evidence, like seeing a percussionist’s stick rise high – I might not understand how the percussion instruments are played, or even their names, but I had visible events that I could use to tie changes in what I heard to the actions of the performer.

Reply by Stephen Ramsay on March 4, 2010 at 9:54am

;; first define an instrument
(define dls (au:make-node "aumu" "dls " "appl"))
;; next connect dls to the default output node
(au:connect-node dls 0 *au:output-node* 0)
;; lastly update the audio graph to reflect the connection
(au:update-graph)

;; play one note by itself
(play-note (now) dls 60 80 (* *second* 1.0))

This code is really imagining audio units and softsynth components exactly the way they are imagined in Max/MSP and Puredata – as nodes on a network that are connected in various ways. I gather that ChucK works the same way. So at some level, there isn’t that much difference between the way Impromptu/ChucK imagines a synthesizer and the way Max/Pd does. (I don’t mean to elide the differences completely; I’m just noting that the various environments tend to use the same metaphors for thinking about sound synthesis – metaphors that further reflect the way the hardware “looks” or “works” in the world.)

But watching livecoding with the textual interfaces seems to me very different from watching it with the “visual programming” interfaces. For me, the former seems more “miraculous” somehow (even though I’m fully aware that underneath they’re both doing the same thing). I would even go so far as to say that text->sound invokes ancestral memories of spell-craft, as well as the western longing for the “word made flesh” invoked so well by Wendy Chun in Week 4.

Really, I think this is ultimately what I’m trying to get at with all of this. I’m not sure that “show us your code” gives us “access to the performer’s mind.” I am, however, quite sure that we regularly make this connection with code, because text/code holds such an important place in our culture.

Reply by Daren Chapin on March 2, 2010 at 7:13am

It should be noted that most of the interactive advantages in the environment above have to do with merely having a language that supports a REPL (read-eval-print loop) rather than with the specific utilization of Lisp’s (in this case, Scheme’s) rewritable S-expressions.

Having said that, macros are playing an active role in how Impromptu works. The 'cosr and 'sinr functions can be found in the Impromptu wiki. They are functions that oscillate the beat around a central point with a defined amplitude and cycle. For instance,

(cosr 70 10 .5)

will produce an oscillation with center 70, range 10, and cycle 0.5.

But because that oscillation expression is being passed to a macro, it is being rewritten rather than evaluated right away as a function, which allows setp (a macro; see setp in the wiki) to actually perform that oscillation over time.

Hence a construction like this:

(setp zeb1 *smd2:comb1:damp* (cosr (cosr 70 10 .5) (cosr 10 10 .5) 2))

is setting up an oscillation whose center and range themselves also oscillate. The magic of macro rewriting allows Impromptu to rewrite these expressions and delay their evaluation, but note that the composition (in the function sense, not the music one) still has to be explicitly wired together by hand.
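To make the nested oscillation concrete, here is a hypothetical Python stand-in for cosr. Impromptu’s cosr is a macro that reads the current beat implicitly; this function takes the beat as an explicit argument, so it models only the arithmetic, not the macro rewriting or the by-hand wiring Daren describes:

```python
import math

def cosr(center, amplitude, cycle, beat):
    # A cosine oscillation around `center` with the given amplitude,
    # completing `cycle` oscillations per beat (hypothetical stand-in
    # for Impromptu's cosr; names and signature are assumptions).
    return center + amplitude * math.cos(2 * math.pi * cycle * beat)

def damp_value(beat):
    # Mirrors (cosr (cosr 70 10 .5) (cosr 10 10 .5) 2): an oscillation
    # whose center and range themselves oscillate over time.
    return cosr(cosr(70, 10, 0.5, beat),
                cosr(10, 10, 0.5, beat), 2, beat)

# At beat 0 every cosine sits at its peak: center 80, range 20.
print(damp_value(0.0))  # → 100.0
```

Evaluating the function at successive beats plays the role that setp’s delayed, repeated evaluation plays inside Impromptu.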

Back to my original point about just needing a REPL, and to see another way in which this could operate, consider Haskell. Despite being a pure, statically-typed functional language, Haskell ends up being a good choice for tasks such as livecoding. The reasons for this are hard to explain, and I could attempt a whole separate posting on this, but they have to do with the ease with which one can build functions out of other functions by using combinators (a form of higher-order function), in a way somewhat-but-not-really analogous to the way Lisp macros work.

The advantage of programming through a combinator library is that the whole is built out of parts declaratively rather than imperatively. Most people’s experience with most programming languages (including Lisp, although Lisp is good at supporting multi-paradigm programming) is through imperative code, especially where IO is concerned, so this is hard for most people to envision. A good example in a slightly different domain (graphical animation) is something called Functional Reactive Programming, an idea created by Paul Hudak (Yale) and Conal Elliott (Microsoft Research). Their paper is somewhat technical and requires a lot of Haskell depth, but Conal has a tutorial post here that is much more instructive. What’s important here is the natural, declarative composability of the functions, which you can see in how easily each of the successive animations is defined from the prior one.
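The declarative-composability point can be illustrated very loosely without Haskell: treat a “behavior” as a function of time and build new behaviors from old ones with combinators. This Python sketch uses invented names and is not Hudak and Elliott’s actual API; it only shows how each successive behavior is defined from the prior one:

```python
import math

# A "behavior" is just a function from time to a value.
def wiggle(t):
    return math.sin(2 * math.pi * t)

# Combinators: each takes behaviors and returns a new behavior,
# so the whole is composed declaratively out of its parts.
def faster(factor, behavior):
    return lambda t: behavior(factor * t)

def scaled(amount, behavior):
    return lambda t: amount * behavior(t)

def added(a, b):
    return lambda t: a(t) + b(t)

# Each successive behavior is defined from the prior one:
fast_wiggle = faster(3, wiggle)
small_fast_wiggle = scaled(0.5, fast_wiggle)
combined = added(wiggle, small_fast_wiggle)

print(round(combined(0.25), 6))  # → 0.5
```

Nothing here says *when* to sample the behaviors; a driver loop (or, in FRP proper, the runtime) handles that, which is exactly the declarative/imperative split Daren is pointing at.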

Another issue worth mentioning is that Haskell has strong, static typing, so all statements in Haskell are completely type safe. This means that it isn’t possible to write functions/programs with type errors and get them past the interpreter, something that I have to imagine is a serious concern in a livecoding environment. As far as I can see in the Sorensen video, we never get to see an outright runtime error get sent to the Scheme environment. If the livecoder mistypes something and it generates an error at runtime, what happens I wonder? Does the music just stop? Or is there some sort of exception handling and stack unwind-protection to prevent this?

Here is a presentation (including several embedded videos) about livecoding in Haskell. (You can kind of zoom through the kooky BBC segment on slide 5 if you want, although starting around 2:00 one of the livecoders is using Haskell and the other one is using SuperCollider, I think.)

Finally, one of the things I find interesting is that the Impromptu-style livecoders seem to prefer building a large, evolving program and continuously [re-]evaluating its pieces, where the approach with Haskell seems to be to build a domain-specific language (DSL) and then modify it from a command line. There’s no particular reason for this a priori, except maybe that Haskell is a really good environment for building DSLs, at least in comparison to Lisp. You’ll notice how specific and compact the language to change the sound patterns is by the end of the presentation.

Reply by Andrew Sorensen on March 3, 2010 at 5:14pm

Actually, Impromptu isn’t REPL-based, at least not in any standard sense. It is interactive, which as you say is a standard attribute of REPL environments, but the degree to which a REPL makes an environment “live” (as in live coding) is debatable. It is certainly possible to effect change in the runtime system through a REPL, but to do so with any temporal accuracy is a completely different question. In other words, you need a real-time environment with a suitable semantics for time, a determinate concurrency architecture, and real-time managed memory (i.e., incremental/concurrent GC).

I would argue that Impromptu’s primary “liveness” attributes are its “first class” semantics for time and its co-operative concurrency model - “temporal recursion”. If you’re interested you can read more here:

Functional reactive programming is certainly another interesting option for livecoding, although I’m not convinced that a synchronous approach to concurrency (which is basically what FRP is) is the best approach for livecoding. The ChucK language does follow a synchronous approach, though, and it is popular amongst livecoders, so time will tell. Impromptu does include a variety of runtime safety checks, but I agree that static type checking would also be nice for livecoding. I have been making some moves in this direction: since v2.0 Impromptu includes a basic JIT compiler to x86 with some basic type inference support. This is a new project for Impromptu but is coming along quite quickly.
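The “temporal recursion” pattern Sorensen names can be mimicked, very roughly, with an ordinary priority-queue scheduler: a function plays an event and then schedules a callback to itself at a future time. This Python sketch uses invented names and drives a simulated timeline, not a real-time engine, so it illustrates only the pattern, not Impromptu’s actual architecture:

```python
import heapq

# A toy scheduler: events are (time, name, fn, args); a temporally
# recursive function re-schedules itself at a future time.
queue = []
played = []

def callback(time, fn, *args):
    # Hypothetical stand-in for Impromptu's callback: schedule fn
    # to run at the given (simulated) time.
    heapq.heappush(queue, (time, fn.__name__, fn, args))

def loop(time, count):
    if count == 0:
        return                      # "stop the loop"
    played.append(time)             # stands in for play-note
    callback(time + 1000, loop, time + 1000, count - 1)

callback(0, loop, 0, 4)
while queue:                        # drain the simulated timeline
    time, _, fn, args = heapq.heappop(queue)
    fn(*args)

print(played)  # → [0, 1000, 2000, 3000]
```

The shape mirrors the looping code Max posted earlier: a lambda that calls play-note and then callbacks itself is exactly this self-rescheduling move.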

I can’t say that I agree that Haskell’s DSL capabilities are superior to Lisp’s, but isn’t it great that people choose to work in such different ways? Let a thousand paradigms bloom, I say!

FYI: It’s Alex McLean working with Haskell in the video; he’s been livecoding since 2000 (5 years before I started) and originally livecoded in Perl. Incidentally, Alex is also using SuperCollider for scheduling and for all audio processing. Haskell is being used to generate messages to send to the SuperCollider server.

Reply by Daren Chapin on March 6, 2010 at 8:32am

After reading through the implementation document I think I understand much better how this is architected. Indeed, what I was trying to articulate with REPLs was that this would be possible in any language environment that supported interactivity, though what I imagined when I saw the video was that we were seeing an emacs buffer and that there was a REPL-like command line à la ghci attached to the session “off camera.” Because the whole architecture is based on passing serialized messages to the scheduling engine, it really doesn’t matter how you do it as long as you can. In other words, a different language environment could still contribute to a performance as long as it had its own concurrency model and could be made to serialize sexps and send them to the task scheduler, is that right?

I have to think a bit more about the implications of using FRP with concurrency, but one of the other things I was curious about in seeing the video, and which I don’t think I articulated well in the first post, is: what happens if you make a mistake? You said there was some basic syntax/type/soundness checking there, but certainly nothing along the lines of what you would be able to have in a strongly typed language. So, concurrency issues aside (the possibility of creating a race condition by messing up one’s temporal reasoning), how much care does one need to take in practice not to create a syntax error or omit an argument to a macro, and what consequences are possible if you do? Are you likely to just halt a local process (which would stop sending whatever component of the performance that process was responsible for), or is it theoretically possible to send something that would break the task scheduler and – proverbially and literally – stop the music?

The above isn’t just a technical question, though. I think it speaks to other questions here about the performative nature of the exercise. Implicit in any live virtuosic performance is a kind of contract with an audience that what you are doing takes skill and that there is always the possibility of a mistake, which contributes excitement to the performance: a juggler can drop a ball, an actor forget a line, a trapeze artist miss a catch. I think in one of the Haskell videos you do see someone submit something at the prompt that is ill-typed, and you see the interpreter complain. So how much is this a concern for the livecoder, and do you think the possibility of error is conveyed to the audience as part of their engagement, or is it accepted more as something magical?

Reply by John Bell on March 4, 2010 at 3:05pm

I’m a bit handicapped here because I don’t know Lisp, but one point I’m curious about is the heavy use of macros. Really, I feel like it’s the macros that make this work as a performance. If Andrew were forced to write out the expansion of cosr every time he used it in the code, the viewer would be more apt to get lost in complexity and the timing of the performance would bog down a bit. Similarly, it’s also helped along by using Impromptu as an environment and what I assume were a lot of keyboard shortcuts and code-completion tricks that made several lines of code appear on the screen instantly during the performance.

But the question that keeps coming to mind is: is it programming?

Ok, obviously it is programming, in all senses of the term. But does a livecoding performance convey the experience of programming? The vast majority of the programming that goes into producing the music we hear isn’t visible anywhere on the screen. Now, this would be true in any case given the layers of software between Impromptu and the hardware, but things like macros (or functions, objects, or libraries elsewhere) that are part of the language being typed on the screen but do not actually appear hide a lot of complexity from the audience that could plausibly be exposed.
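For readers who don't know Lisp, here is a rough sense of the kind of expansion a name like cosr hides. The arithmetic below is an assumption about what cosr means — a value oscillating around a centre with a given amplitude and rate, with the current beat supplied implicitly by Impromptu — not Sorensen's actual macro, and the sketch is Python rather than Scheme:

```python
import math

def cosr(beat, centre, amplitude, rate):
    # Hypothetical expansion: oscillate around `centre` by `amplitude`,
    # completing `rate` cycles per beat.  In Impromptu the beat argument
    # would be carried invisibly, so (cosr 70 10 .5) stays three tokens.
    return centre + amplitude * math.cos(2 * math.pi * rate * beat)

print(cosr(0.0, 70, 10, .5))  # at beat 0 the oscillator sits at its peak: 80.0
```

Every place Sorensen types (cosr 70 10 .5), something like the body of this function runs; writing that body out by hand at each call site is exactly what would bog the performance down.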

I do want to make clear that this isn’t intended as a value judgment on what’s going on on the screen (the whole “programming vs. scripting” issue where some people seem to think that you’re not hardcore enough if you don’t write your AI in assembly). It’s more a question of perception. I think that a lot of the “miraculous” feeling that Stephen mentions–and I also feel, by the way–comes from all of the prep work that’s done on the language and interface long before the performance begins. Now, when I’m coding my own projects, I never get anything close to that miraculous feeling because I’m actually going in and building out all the little functions and widgets that actually make the code tick as I’m working. So I wonder what the relationship is between programming-as-performance and programming-as-making-things-go, and how finding that relationship can be used to inform other questions of reading code (including, maybe, programming vs. scripting).

Reply by Mark Marino on March 5, 2010 at 2:31pm


I keep returning to this notion in Wendy’s chapter, the flattening out of time and space, or “substituting space/text for time/process” (6). In her chapter, she identifies the ways in which code-as-performative (as language that does) comes to efface the execution of the code. As she “state[s] the obvious, one cannot run source code: it must be compiled or interpreted. This compilation or interpretation―this making executable of code―is not a trivial action; the compilation of code is not the same as translating a decimal number into a binary one” (6). At the same time she asks “where does one make the empirical observation?” Our “live coding” conversation seems to be hovering around this space/text-time/process axis and also questions of the magical temporal relationship between the text and the process.

When I first watched Stephen’s video, I thought: a-ha! Here is the moment where the time between code and execution has been erased. Sorensen’s changes to the code are being executed in real time.

However, after reading through these threads and becoming more familiar with the distinctions that Daren draws out, I see that the live-ness of this coding is a function of the delay between Sorensen’s changes in the code and their execution.

Your discussion of the much less magical experience of grinding out programs seems to speak to this as well.

Also, I can’t help but imagine a spectrum something like this:

analog physical instruments
digital physical instruments
digital software instruments
patch-based environments (Pure Data, Max)
REPL/functional temporal recursion
imperative programming

(Can someone build out this spectrum with a bit more accuracy and detail?)

In other words, the spectrum runs between the person who makes music (or sound) on an instrument through the physics of their body interacting with another physical object to the person making music by writing programs in an imperative language. [No doubt, this is an over-simplification, too.] Within that spectrum are different foci of audience enchantment.

What I keep coming back to is that it is not amazing to see someone merely press a piano key or strum an electric guitar. Usually in that case, we are interested in accuracy, speed, dexterity, creativity of selection, et cetera. Nor is it impressive to see one drag a loop into a Garage Band composition. It does seem magical for someone to write: (cosr 70 10 .5) and then to change a parameter. And certainly there is a certain “And then the Creator made drums and they were good” aspect of:

(define drums
   (lambda (beat)

Here I’m thinking of that balance Sorensen is striking between legibility and uncertainty, between revealing his selections through clearly named functions to creating less recognizable changes by changing multiple sets of parameters or nesting elements. That “lambda” above reassures me of the mathematics of the process, of the computation, of the unknowable magic, not least of all because it is followed by:

(setp zb1 *smd2:comb1:damp* (cosr (cosr 70 10 .5) (cosr 10 10 .5) 2))

which includes those tantalizing nested cosr calls that I would not be able to (easily) process – at least not in the pub atmosphere that shows up in a video that Daren pointed us to.

This is not to say that someone couldn’t know exactly what Sorensen is doing. That actually makes the experience more like listening to Jazz, which satisfies multiple levels of understanding of the complexities of the performance and the improvisation. It is to underscore the role of (the extent of) natural language structures within the performance of this livecoder.

Reply by Mark Marino on March 5, 2010 at 4:10pm

The more I think about this, the more I realize that these qualities (accuracy, speed, dexterity, creativity of selection) are also key to my expectations in live coding - since someone visibly fumbling at the keyboard, scrolling hopelessly through lines of code, adding lines that obviously had no effect, writing lines and then deleting them the instant their effects are experienced, or scripting inefficient or needlessly circular processes would not be half as entertaining.

In other words, Sorensen’s performance has a virtuosity, so it is instructive to imagine what bad live-coding might look like in order to more fully examine this art.

Reply by Jeremy Douglass on March 8, 2010 at 11:52am

Dear John and Amanda,

My first response to John’s question “is it programming?” is strongly in line with Amanda’s comment on “degrees of improvisation in music” – I think of them as in some way the same question, about what constitutes authentic liveness, authentic engagement, and what the degrees of preparation and problem-factoring are that we see, whether in Jazz or jam bands or livecoding.

I think the broader question is really interesting to apply to code. To what extent is any act of programming (taken from a huge range of examples of people writing code) “making-things-go”? There is a huge amount of performative coding out there. For example, the whole Ruby on Rails culture rallied around an amazing and carefully scripted series of demo magic tricks. (“Voila!”)

Of course, you can argue that code generation and frameworks are a useful and productive way of focusing on a problem space – but that makes Rails and Impromptu start to have a lot in common. Are pre-written macros and libraries ‘cheating’? At what point do we exit liveness? When we draw on scripted elements or libraries of things that previously ‘worked’? When we operate exclusively in a space without exploration, without the possibility of discovering something new? When we have trivial variability but are essentially prevented from having anything go wrong? When there is no variability at all? My sense is that different people will draw different lines in the sand – I’m most interested in tracing the continuum.

Reply by Daren Chapin on March 8, 2010 at 4:26pm


This is a great question. As one way of thinking about this, I propose that (borrowing from Clarke) any sufficiently advanced macro library is indistinguishable from demo magic. One abstract way of thinking about programming is as a process of constructing maps between domains, for example:

human idea     <==>     code/algorithm     <==>     results/action/output
(what to do)            (how to do it)              (what is ‘done’)

I think we are accustomed to using the complexity of those maps as a proxy for our sense of the degree to which ‘coding’ vs. ‘cheating’ is going on. For instance, consider the following progression of actions and proposed interpretations:

(person moves volume lever up 20% on sound mixer board)

Not programming. Now change to a virtual mixer with an API (“Recording on Rails”):

mixer.master.volume.change(+20.percent)
Sort of looks like programming, but is there any qualitative difference between that and the physical version before? What if the roles were reversed and it was dragging the GUI lever that called this function?

mixer.mixers.each {|mxr| mxr.volume.change(+20.percent)}

Same result as just changing the master volume, but feels programming-like because there is iteration on the individual levels.

mixer.mixers.each_with_index {|mxr, i|
  mxr.volume.change(+(i * 2).percent)
}
This definitely looks and feels like programming now, although I could maybe accomplish the same thing physically with the careful use of a straight edge. But it’s trivial to modify the code to the point where I couldn’t; the individual mixer levels could be some very complex function I’ve built out of splines called ‘splunge.’ But once that setting proved useful the code would almost certainly land in a library and look like this at the next session:

mixer.splunge
and we are back to something that doesn’t look or feel like programming again - equivalent to my striking a preset button on my electronic mixer board. The power of libraries and macroization brings us uncomfortably close to collapsing the distance between the idea and implementation domains, the Uncanny Valley of coding. When a Rails programmer types:


and the code is literally isomorphic to the way the idea would be described in English, it is not hard to feel as if no coding is going on at all.

And yet this kind of heuristic simplification is one of the primary goals of programming, at least insofar as programming is a social exercise. Libraries, extensible classes, polymorphism, macros, code generators, metaprogramming, domain-specific languages, parser combinators: all of these mechanisms aid us in transforming what once seemed like complex maps between domains into ones that are easy for us to reason about, so we can in turn build larger and more complex ones. It’s this constant interplay of building-out-and-abstracting-away that makes coding such a lively activity, and perhaps so simultaneously sundry and magical.
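Daren's interplay of building-out-and-abstracting-away can be miniaturized. The names below (fade_in, preset) are invented for illustration, not drawn from his post or from any library; the point is only how quickly explicit iteration disappears behind a single word:

```python
def fade_in(levels, step):
    # explicit "coding": iterate and ramp each channel level in turn
    return [min(1.0, level + i * step) for i, level in enumerate(levels)]

# Once the ramp proves useful it lands behind one name, and invoking
# it feels no more like programming than pressing a preset button:
preset = lambda levels: fade_in(levels, 0.1)

print(preset([0.0, 0.0, 0.0]))  # [0.0, 0.1, 0.2]
```

The second form hides the first completely — which is the whole social point of libraries, and the whole perceptual puzzle of livecoding.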

Reply by Jeremy Douglass on March 9, 2010 at 2:27pm


Your Clarke paraphrase and discussion of “demo magic” resonates for me with discussions of magic in Wendy Chun’s Week 4 presentation. It is also really interesting how effective macros or library code can be mystical in an obscurantist way (“what is cosr?”) but alternately can seem like the most straightforward elocution – sincere, or, as Stephen cites Holden Caulfield, “not phony.”

You use the phrase “Uncanny Valley of coding” to describe isomorphism to English – a provocative idea. The “uncanny” part resonates for me with the uncanny experience of reading the not-code of mezangelle in mez’s Week 6 discussion. More generally, the question of whether revulsion or some other aesthetic crisis awaits as code and natural language become isomorphic is an interesting one to explore – it leads us past syntactic sugar and macros, and straight to the strong form of that isomorphism: natural language programming (NLP). But in my experience, what happens to beginners and intermediate users of NLP languages is not revulsion – it is frustration, but also a kind of superstitious thinking that assumes extremely broad isomorphism from extremely narrow examples, and is then constantly frustrated when these expectations are not met. In other words, learning an NLP language is a Loebner Prize Turing Test in slow-motion, with all the joys and disappointments.

Reply by Noah Vawter on March 5, 2010 at 5:48am

> Give us access to the performer’s mind, to the whole human instrument.

His or her body is where? Is the mind the whole human instrument?


Reply by Mark Marino

> How is the code (un)like the tubes of the pan flute?

The flute’s tubes offer a carefully-contrived space where the denser material, bamboo, confines the movement of air in a mysteriously ordered way – it resonates with a mostly sinusoidal waveform. An electrical circuit is a similarly-contrived space where, in place of fiber and air, metal confines the electronic movement. However, the shape and size of the bamboo are sufficient to produce a perceptible event - an audible tone. Electrons move so fast (in this atmosphere…) that their natural oscillation is inaudible to us. So, instead of using electricity as a fluid (as analog synthesizers do), swishing through a space, we construct channels for it, preferring, instead of hearing the oscillation of a single mass, to hear the collaboration of hundreds and thousands of small canals/channels interacting with one another.

Code is a chance to architect the electrical canals, a way of controlling the flow of water from basin to basin, selecting which basins overfill and spill into their neighbors. Some types of code simulate the single channel of the flute, while others combine these basins into still more complex structures.

Reply by Max Feinstein

> On the other hand, when a livecoder calls for a random X or Y or Z, nobody can really predict what the computer will generate.

Musicians, such as friends of mine at Berklee, have learned to hear randomness as a result of their experience composing with random elements. Many random generators have distinct patterns. For example, the LFSR can be interpreted as continuously either a) doubling in value, or b) doubling in value, then subtracting a constant. It’s also difficult to obscure randomness - it must be ‘shaped’. This is the idea behind the various “colors” of noise.
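Max's description of the LFSR can be made concrete. A Galois-style LFSR steps by shifting left (doubling); when a bit overflows the register, it folds a tap constant back in with XOR — the carry-less analogue of subtracting a constant. The 16-bit width and tap mask below are illustrative choices, not from the post:

```python
MASK = (1 << 16) - 1  # 16-bit register (width chosen for illustration)

def lfsr_step(state, taps=0xB400):
    """One step of a toy Galois-style LFSR (taps are illustrative)."""
    state <<= 1                        # "doubling in value"
    if state > MASK:                   # a bit fell off the top:
        state = (state & MASK) ^ taps  # "...doubling, then subtracting a constant"
    return state

# From a low seed the pure-doubling phase is plainly a pattern,
# which is exactly the audible regularity Max describes:
s, seq = 1, []
for _ in range(4):
    s = lfsr_step(s)
    seq.append(s)
print(seq)  # [2, 4, 8, 16]
```

A trained ear picks up on the two-branch structure long before the sequence repeats — which is why raw randomness must be "shaped" into colored noise to hide it.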

> Or if anyone would make the argument that instruments other than the computer (e.g., Impromptu) are also random?

I see randomness used differently in computer and e.g. electric guitar music.

In computer music, randomness tends to be an economic measure, a way to “stretch out” a pattern. It also tends to be employed nearly continuously, unlike on more traditional live instruments. For example, in a guitar solo, there is often a tension behind how much the performer is willing to risk to expand the sound, based on his/her ability to “reel it back in”. Random notes sometimes appear in brief bursts. This is randomness that accompanies brief moments of panic or ecstasy, etc. In comparison, I perceive randomness in computer music more like the random noise around us, which we create when we measure cloud cover, or count many other things which we rarely perceive (it is noise after all), such as the deviations in the number of people riding a bus from day to day.

That might sound like a “con” of computer music and a “pro” of acoustic music, but it’s not that simple. While one is learning to play an instrument, the instrument is largely an unknown and random space. Yes, our intuition can tell us where, for example, the higher notes vs. lower notes are on a guitar, but the difference between a tritone and a perfect fifth is a small step with much greater harmonic implications. They’re located next to each other on the guitar, but cannot be easily substituted for one another. This is particularly evident when learning to play for the first time, or when figuring out melodies at any stage in musicianship. How many of us have not swept through a four-chord progression, having compiled the first three chords, and searched for the “right” fourth chord to complete the sequence? Along the way, we hit many random notes. Even with experience, one may learn a scale, and make a decision between two intervals to resolve a melody, but with a degree of uncertainty about which one to choose.

This also points to an important difference between computer.random() and acoustic randomness: the same critically-confining architecture/shape which enables a guitar to work also has “sweet spots” and “dead zones.” Generally speaking, these are places which are flaneuristically sought after or avoided. I mention this because I’m not only talking about sonic sweet spots, but also locations on the instrument which are physically easier and harder to play than others. Computer code (typically…) has no such inhibitions in its random function. A simple random function, like a dutiful feline, will attempt to delight you with a minor second interval just as peppily as it offers up a unison.
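As a toy contrast — the interval palette and weights below are invented for illustration, not drawn from the post — a bare random choice treats every interval alike, while a player's physical and harmonic "sweet spots" behave like a weighted distribution:

```python
import random

# A small illustrative palette of intervals, in semitones
INTERVALS = {"unison": 0, "minor 2nd": 1, "perfect 5th": 7, "octave": 12}

def uniform_interval(rng):
    # computer.random(): a minor second as peppily as a unison
    return rng.choice(list(INTERVALS))

def weighted_interval(rng):
    # a player's habits: consonances and easy reaches come up more often
    # (the weights are invented, purely for the sketch)
    return rng.choices(list(INTERVALS), weights=[4, 1, 6, 3], k=1)[0]

rng = random.Random(0)
draws = [weighted_interval(rng) for _ in range(1000)]
print(draws.count("perfect 5th") > draws.count("minor 2nd"))  # True
```

Shaping the weights is one simple way a livecoder could put the "sweet spots" back into computer.random().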

Reply by Max Feinstein on March 5, 2010 at 3:19pm

John Cage (1912-1992), an American music composer who greatly influenced electronic music, incorporated some of the very same notions of abstraction as Sorensen employs. The transcript from Cage’s speech to his audience before a 1957 performance reveals some of these fascinating parallels between his style and Sorensen’s approach:

Those involved with the composition of experimental music find ways and means to remove themselves from the activities of the sounds they make. Some employ chance operations, derived from sources as ancient as the Chinese Book of Changes, or as modern as the tables of random numbers used also by physicists in research. […] The total field of possibilities may be roughly divided and the actual sounds within these divisions may be indicated as to number but left to the performer or to the splicer to choose. In this latter case, the composer resembles the maker of a camera who allows someone else to take the picture. (Emphasis mine)

The first highlighted bit, about composers who “remove themselves from the activities of the sounds they make,” reads to me just like the OED definition of abstract (v.):

To draw off or apart; to separate, withdraw, disengage from…

I find that Sorensen achieves exactly this with his music. As he stated above, “[randomness] can also be [used] to [abstra]ct away detail without requiring a complete model of the und[erlying process].” Most striking to me is the irony of a composer abstracting portions of a piece that I hear as written with intent by the composer, each sound a deliberate choice. His remark that “[abstraction] is hugely important in livecoding where your ability to implement complex processe[s matters]” also recalls Daren’s discussion about the advantages of programming declaratively through a combinator library. It’s almost as if the composer who implements randomness withdraws from the livecoding while remaining painstakingly obvious in its operations. I think this idea is neatly illustrated by the last italicized portion of Cage’s remarks above.

Another bit of Cage’s speech that I find interesting:

Whether one uses tape or writes for conventional instruments, the present musical situation has changed from what it was before tape came into being. This also need not arouse alarm, for the coming into being of something new does not by that fact deprive what was of its proper place. Each thing has its own place, never takes the place of something else; and the more things there are, as is said, the merrier.

Cage’s concept of the interplay between conventional instruments and new instruments (which for him consisted of tape, among other things) can be extended to encompass the digital realm as well. In this case, I’m reminded of N. Katherine Hayles’s idea of intermediation – the transformative process that takes place at the intersection of the analog and the digital. When overlapping Haylesian intermediation and Cage’s calmness about the transformative process that occurs through medium changes (e.g., analog to digital), I imagine the act of producing electronic music to be colored with a sort of serene and peaceful emotion. Just a side note, really, but the thought has made livecoding performances all the more enjoyable for me to watch.

Reply by Jeff Nyhoff on March 7, 2010 at 6:06pm

The theatrical side of my double-background is creating some “interference” for me as I read through these wonderful posts. Some of the notions of “live performance” that seem to be operating, here, run counter to the way they tend to operate within theatrical discourses. The “liveness” of “performance” turns out to be a delightfully slippery concept when one sets out to try to pin it down …

First, “improvisational” theatrical performances are seldom as loosely structured as they seem. They are usually very, very well rehearsed. “Random” elements may be introduced – e.g., by suggestions from the audience – but these elements are tightly constrained: by the way the invitation for input is framed, or by the existing circumstances or “pre-texts,” if you will. Within these constraints, skilled, experienced, and well-prepared performers give the impression that the “random” elements are more difficult to accommodate than they really are.

In fact, this illusion of being “unscripted” and “unrehearsed” (what Stanislavsky called the “illusion of the first time”) is part of most conventional and popular theatrical forms. Persons who have never done any serious acting often ask “live” theatre actors how they manage to remember all those lines; part of the answer is making the performance, night after night, give the impression that it is “un-scripted.” Might well-prepared instrumentalists do this as well – pretend to be doing much more composing-on-the-spot than they actually are? It is especially challenging to see this as an extensively improvisational work when it is a solo work. There are no inputs being taken from the audience. There are no other musicians’ choices to accommodate, as in the case of a “jam” session.

Similarly, when I teach several sections of the same computing course and, in each, take students through programming examples by coding on the screen in front of them, it doesn’t take long before I can do most of the code without consulting a “script/listing.” In fact, at that point, it would actually be harder to try to slavishly follow an exact pre-scription of code than to proceed from memory. I also take suggestions from the students regarding parameters, function names, etc., which helps make working through the programming example seem more improvised and participatory than it actually is. (And isn’t this the “end user” experience in general? The *illusion* of large degrees of freedom and of meaningful and substantive participation, when, in fact, we are tightly constrained in terms of our understanding, our actions, our expectations – including what we come to accept as a “pleasurable” interactive experience?)

In TV news broadcasts, classroom lectures, and many other performance contexts, audience engagement is predicated in part upon keeping the amount of scripted-ness, planning, and rehearsal, *hidden*. And we are well-acculturated to respond to such theatrical forms in the requisite manner, often because we desire the pleasure that will result from our ability to believe in an illusory degree of “liveness,” unpredictability, and un-scripted-ness in the unfolding sequence of events.

There seems to be some Aristotelian/Stanislavskian illusionistic theatricalities lurking in our notions of “live performance,” here, in that such theatrical forms ask the audience to forget that there’s a script and accept the illusion of un-scripted, unrehearsed-ness. Brecht, of course, called for audiences to refuse this theatrical positioning and, instead, approach such performances in the way one might listen to symphony’s performance while having the score in front of you.

Perhaps there is more in “live coding” that is not coded/scripted in advance than I am inclined to suspect. However, even then, it still seems to me that part of its appeal for us, theatrically, is predicated upon an “Aristotelian” theatrical notion that I’ve mentioned before: that, conventionally, the script/code is something that precedes performance/execution when, in fact, in much theatre and most programming, the script/code is authored through execution/performance. And, in this sense, I would still – and do – find Sorenson’s work fascinating, in that it would still – and does, I believe – constitute a kind of temporally compressed re-staging of the *process* of the composition, the programming.

One other thing that I love about Sorensen’s work is that it does seem to complicate one other Aristotelian theatrical notion that I’ve worried has been lurking here for the past few weeks. I’m thinking of that troublesome point in the Poetics that claims that reading the script/code is sufficient as an experience of its effects, without seeing/hearing it performed/executed. It would be difficult to make this claim for Sorenson’s work, whether we are attempting to read it “live” or in a static form!


Reply by Jennifer Lieberman on March 8, 2010 at 10:57pm

Jeff, you said exactly what I was thinking as I was reading through these posts: “In fact, this illusion of being ‘unscripted’ and ‘unrehearsed’ (what Stanislavsky called the ‘illusion of the first time’) is part of most conventional and popular theatrical forms.”

I came to this performance immediately after finishing Mark Katz’s Capturing Sound: How Technology Has Changed Music. While analyzing what he calls the “phonographic effect” (which I will oversimplify by summarizing as the feedback between the creation and apprehension of music once it can have a trace beyond live performance), Katz points out that recently recovered editions of early jazz recordings adhere to later/different recordings much more closely than listeners anticipated. The aura of improvisation, it seems, was more important than actual improvisation. Which is provocative in itself.

But there is one other thing that my reading of Katz might add here. He examines how dramatically the experience of listening to music changes when listeners cannot also see the expressiveness of the musician’s body, arguing that some results of this include the rise of the vibrato in violin music and the compression of dramatic pauses in recorded piano music. Without the dramatic affect of the body, the sound needs to express drama/tension/emotion in other ways.

The presence and absence of the programmer complicates this in interesting ways, as Noah helpfully discussed above. Although much is occluded from the audience, Andrew can manipulate the affect of his music by letting his cursor pause before manipulating the code again – making us anticipate what he might be thinking, where the music might go, and what tweaks we will see on the screen before us. For me, this was exciting because I could only understand parts of the syntax (I don’t know Lisp, but it has an appearance of readability I will touch on below). It’s interesting to think that a cursor can have that type of affect – but perhaps anyone having a romantically-charged conversation on an IM client has that same feeling of anticipation. As Mark put it, “Nor is it impressive to see one drag a loop into a Garage Band composition. It does seem magical for someone to write: (cosr 70 10 .5) and then to change a parameter.”

The act of typing is so familiar and embodied (a bad mirror-neuron analogy comes to mind) that it feels kinetic; but I wonder how the experience might differ if we were actually in the room with Andrew, looking over his shoulder.

To probe this question a bit further: what if we juxtapose the livecoding with something like the stylophone beatbox: Brett Domino: Hip-Hop Medley

Both the stylophone beatbox and the macro-laden code of Andrew’s composition use recursive functions to make the production of relatively complex musical scores appear (for lack of a better word) succinct. But which one seems more ‘magical’ or more like musical performance? Although the stylophone beatbox is literally black-boxed, it seems ‘easier’ to me because the ‘musicians’ are aloof (and perhaps because they’re reproducing someone else’s music, rather than creating seemingly-original or -improvisational scores).

Stephen, in your video you compare the experience of watching livecoding to the experience of watching a musician. I agree; but I wonder how much of this end-user experience is determined by the implied embodiment of the coder. I personally imagine Andrew typing attentively, thoughtfully reconsidering his code as he writes it. I imagine him sometimes whimsically, sometimes intently making a decisive keystroke that changes the whole nature of the music I’m enjoying. But, if we saw Andrew coding– rather than the livecoding stream – and he were to look as aloof as the stylophone beatboxers, would the music still have the same level of tension? Put another way: is it the incongruity of watching sparse syntactical trees create interesting music – or is it the fact that the play between complexity and simplicity, illegibility and legibility looks like “work” that makes these performances so engaging?

Another question might be: what if Andrew wrote out macros that were completely illegible from the viewer’s perspective? For someone who doesn’t know lisp, like myself, part of what makes the performance so compelling is the tension between readability and unreadability: I can recognize some terms and some conventions of functional scripting programming paradigms, but I don’t know if it would be as interesting if ‘cosr’ really looked like ‘xhsaksjosafhfhsoasd.’ This may seem silly; but I think the issue of the appearance of legibility of a language like Lisp (as I think Stephen suggested in his video) is an important aspect of the experience. I might even go so far as to suggest that there’s something uncanny about it.

Reply by Jeff Nyhoff on March 9, 2010 at 10:27am

Jennifer wrote:

“Another question might be: what if Andrew wrote out macros that were completely illegible from the viewer’s perspective? For someone who doesn’t know Lisp, like myself, part of what makes the performance so compelling is the tension between readability and unreadability: I can recognize some terms and some conventions of functional scripting programming paradigms, but I don’t know if it would be as interesting if ‘cosr’ really looked like ‘xhsaksjosafhfhsoasd.’ This may seem silly; but I think the issue of the appearance of legibility of a language like Lisp (as I think Stephen suggested in his video) is an important aspect of the experience. I might even go so far as to suggest that there’s something uncanny about it.”

An excellent observation!

And, as it seems you are suggesting, this notion of “recognizability” and “legibility” of certain portions of code is also likely to be highly contingent.

For example, the “cosr” and “sinr” calls were actually among the ones that I found most “recognizable,” “legible.” Similarly, when I teach programming to visual artists, even the most math-averse students soon discover how useful these two simple companion geometric formulas are. One assignment I will often have them do is to generate an audio representation of a particular sound wave and use the same methods and parameters to also render an accompanying graphical wave form representation of the wave, such that, when the parameters change, they can see the graphical wave form and hear the audio change. My point here is that, for these sorts of visual art students – new to programming and often math-phobic – Sorenson’s cosr and sinr functions would probably have been the ones they actually would recognize first.
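The pairing in that assignment — one set of parameters driving both an audible and a visible wave — can be sketched minimally. The sample count, amplitude scaling, and text "plot" below are invented for illustration (Python, not the environment Jeff's students would use):

```python
import math

def wave(n, freq, amp):
    # n samples of one cycle-per-second sine: the same numbers could be
    # written to an audio buffer or drawn as a waveform
    return [amp * math.sin(2 * math.pi * freq * i / n) for i in range(n)]

def ascii_waveform(values, width=16):
    # render each sample as a bar: a crude "graphical wave form
    # representation" driven by the identical parameters
    return ["#" * round((v + 1) / 2 * width) for v in values]

for row in ascii_waveform(wave(8, 1, 1.0)):
    print(row)
```

Change freq or amp and both representations change together — the seeing-and-hearing coupling the assignment is after.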

It also occurs to me now that, in the case of Sorenson’s cosr and sinr calls, we only hear the audio representation of the resulting wave. But he could also, easily, have created graphical representations of the waves that he’s building. After all, audio representations of his mathematically modeled/coded waves are no more “honest” or “authentic” than any geometric, graphical wave forms generated using the same methods and parameters would be. Perhaps such graphical representations would be unappealing in that they might demystify the coding to a degree that would undermine its mystique and “theatricality.” But if that is so, then perhaps “obscurantism” isn’t as disfavored as the manifesto claims, and the livecoders really don’t want to “give us access to the performer’s mind, to the whole human instrument” as readily as the statement might suggest upon first reading. Although, I suppose this depends largely on which persons are implied in the word “us.” Is this performance intended for a coterie audience?

Stephen also raised the question of purely textual vs. graphical programming of sound (Chuck, Max/Pd), and I think unduly privileging strictly alphanumeric programming is something to guard against, here. If we’re going to call this kind of programming environment a “textual interface,” I think we need to hasten to add that a program created with Chuck or Max/Pd or any other graphical environment is just as much a “text” and constitutes “code” just as much as strictly alphanumeric code.