by Anders Flodin
Abstract. This paper focuses on laboratory work in the art of live coding and on the use of Estuary, a browser-based collaborative projectional editing environment built on top of the TidalCycles language for the live coding of musical patterns. The paper examines the manner in which notation with numerals and symbols is encoded, processed and executed, with the aim of identifying the perceptual and practical boundaries of presenting notation on screen. The proto-compositions used in the article were composed by the author; Barry Wan – PhD student in Visual Communication at Jan Evangelista Purkyně University in Ústí nad Labem, Czech Republic; and Fabrizio Rossi – Diploma Accademico di Secondo Livello in Composizione, Conservatorio Statale di Musica “Alfredo Casella” – L’Aquila, Italy.
Keywords: collaboration; live coding; composition; musical form; Sonology
A number of years ago, I read a speech of thanks given by Karlheinz Stockhausen after he received the Cologne Culture Prize in 1996, and it aroused my curiosity. In one of the text’s seven sections, Stockhausen describes the development of electroacoustic music and digital technology and positions the composer as a director, no longer dependent on an interpreter:
Überlegen Sie einmal, was es historisch bedeutet, daß zum ersten Mal in der Geschichte ein Komponist nicht einfach sagen kann: “Hier ist meine Partitur – sehen Sie, wie Sie damit zurechtkommen. Sie sind der Interpret, Sie sind ja intelligent, es kann auch ruhig die eine oder andere Interpretation überdauern, bis Sie das mal richtig spielen können und keine Fehler mehr drin sind. Ich nehme das in Kauf, denn die Zukunft wird es irgendwann bringen; wenn ich berühmt bin, wird das schon von selbst kommen.” Solch eine Argumentation ist heute Selbstbetrug.
Stockhausen, 1996, p. 224.
(My translation: “Think about what it means historically that, for the first time in history, a composer cannot simply say: ‘Here is my score – see how you deal with it. You are the interpreter, you are intelligent; one or the other interpretation can easily survive until you can play it right and there are no more mistakes. I accept that, because the future will bring it someday; when I am famous, it will come by itself.’ Such argumentation today is self-deception.”)
What I take to be the spirit of the speech is that the new technology makes it possible for the composer to become eternal: the compositions are preserved in digital form and need no interpretation, so the composer must be not only a composer but also the musician and performer of his or her own music. But is it that simple? In this article I want to investigate how the coordination of a musical material – in the form of a score – is put together in a new musical context.
Information the live coder communicates through live coding
Composing music on paper or with the help of a computer does not mean that all other music is disorganized. Most of the music created throughout history has been made without these aids and passed on in an oral tradition. But where there is a need to preserve or organize music in one form or another, one encounters, early in the history of music, both numerals and symbols used to illustrate the musical course in a score. One example is the so-called figured bass, a bass part intended primarily for a keyboard instrument, with Arabic numerals indicating the harmonies to be played above it. The figured-bass system originated at the beginning of the 17th century and was universally employed until about the middle of the 18th century. It was designed to facilitate the accompaniment of one or more solo voices or instruments. Practice was not always consistent, but the following principles were generally observed: a note without figures implies the fifth and the third above it in the given key; an accidental without any figure refers to the third of the chord. Provided that he or she uses the correct harmony, the performer is free to dispose the notes of the chord as he or she likes, i.e. close together or widely spaced. Since the practice of playing from figured bass is no longer widely cultivated, modern editions of old music generally include a fully written-out part for the harpsichord, piano or organ. No written part, however, can be a completely adequate substitute for ’realization’ at the keyboard of the composer’s shorthand, and many written parts of this kind do positive harm by neglecting to observe the conventions of the period.
The notation of the qin, a traditional Chinese instrument, is based on a system in which abbreviated forms for the right- and left-hand strokes are combined with numbers for the seven strings and the thirteen hui – the places where the natural harmonics are produced. Together, all this information forms a symbol, a graphic figure reminiscent of a Chinese character. An experienced qin player can easily identify the significant units; when qin players talk about different characters, they read out the parts one by one. In Western music, Roman numerals represent the chord whose root note is that scale degree, and a traditional I-IV-V-I cadence is immediately intelligible to a musician trained in Western music. When coding music, two questions arise. Is there a compositional grammar common to the written, transmitted musical tradition – in the form of changes, adaptations and variations – and the planned compositional idea presented as a document? And if there are differences, what do the two have in common? The composer and music researcher Fred Lerdahl distinguishes between natural and artificial compositional grammar. The natural compositional grammar is the one in which contemporary musicians and listeners can intuitively orient themselves and which is shared by the members of a musical culture. Composers like to explore the boundary between natural and artificial compositional grammar. By “stretching out” contemporary music theory, an artificial structure is created. Skilled musicians also explore this area as they perform the new music. Through this interplay, the boundary between natural and artificial slowly shifts:
Where does a compositional grammar come from? The answer varies, but a few generalizations may be helpful. Let us distinguish between a “natural” and an “artificial” compositional grammar. A natural grammar arises spontaneously in a musical culture. An artificial grammar is the conscious invention of an individual or group within a culture. The two mix fruitfully in a complex and long-lived musical culture such as that of Western tonality. A natural grammar will dominate in a culture emphasizing improvisation and encouraging active participation of the community in all the varieties of musical behaviour. An artificial grammar will tend to dominate in a culture that utilizes musical notation, that is self-conscious, and that separates musical activity into composer, performer, and listener.
The gap between compositional and listening grammars arises only when the compositional grammar is “artificial”, when there is a split between production and consumption. Such a gap, incidentally, cannot arise so easily in human language. People must communicate; a member of a culture must master a linguistic grammar common to both speaking and hearing. But music has primarily an aesthetic function and need not communicate its specified structure. Hidden musical organizations can and do appear. A natural compositional grammar depends on the listening grammar as a source. Otherwise the various musical functions could not evolve in such a spontaneous and unified fashion. An artificial compositional grammar, on the other hand, can have a variety of sources – metaphysical, numerical, historical, or whatever. It can be desirable for an artificial grammar to grow out of a natural grammar; think, for example, of the salutary role that Fux (1725) played in the history of tonality. The trouble starts only when the artificial grammar loses touch with the listening grammar.
Lerdahl, 1992, pp. 100-101.
Lerdahl points out the importance of the mix between the two grammars, and that the trouble starts when the artificial grammar loses touch with the listening grammar. The terms natural and artificial are ill-advised from any objective point of view, but they still describe how contemporaries – of the composer and his music – experience the interface between playing inside the tradition and starting to wrestle with something new. And perhaps, over time, an intuitive understanding of the new develops, perhaps through hands and thought.
When live coding is used to realize a musical thought, the praxis is to program with numerals or symbols that represent a sound in time and space, turning them into instructions and functions:
Live coders program in conversation with their machine, dynamically adding instructions and functions to running programs. Here there is no distinction between creating and running a piece of software – its execution is controlled through edits to its source code. Live coding has recently become popular in performance, where software is written before an audience in order to generate music and video for them to enjoy.
McLean, A., Griffiths, D. https://www.gold.ac.uk/calendar/?id=2222
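In TidalCycles’ mini-notation, as used within Estuary, this conversational edit-and-re-evaluate cycle can be sketched roughly as follows. This is a minimal illustration of my own, using standard Dirt sample names such as bd, sn and cp; the patterns themselves are hypothetical:

```
-- first evaluation: a simple two-step pattern starts running
s "bd sn"

-- the same box, edited and re-evaluated while the sound continues;
-- the running program is changed through edits to its source code
s "bd [sn sn] bd cp" # speed "1 1.5"
```

Each evaluation replaces the running pattern without stopping it, which is precisely what dissolves the distinction between writing and running a piece of software.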
An attempt to systematize the different types of notation can be made in two general categories: action-based notation and result-based notation. Action-based notation can include, for example, placements of the fingers, various tablatures, or impulses to the performer to shape a course that has been outlined by a figured bass or by different graphic curves. The second category, result-based notation, refers to all notation in which one can more or less imagine a sounding result without having to be familiar with the special peculiarities of different instruments. The emphasis is on the descriptive function of the sign material, and we can count our conventional notation, with all its variants, among such notation systems. In traditional notation the two functions are combined. Where, then, does live coding as notation belong in this systematization? Since it is a notation with symbols and numerals, it would be categorized as action-based notation, because it is in many ways a tablature.
Materials and Methods
On the website and the entrance to Estuary, the open source software is described as follows:
Estuary is a platform for collaboration and learning through live coding. It enables you to experiment with sound, music, and visuals in a web browser. Estuary brings together a curated collection of live coding languages in a single environment, without the requirement to install software (other than a web browser), and with support for networked ensembles (whether in the same room or distributed around the world). Estuary is free and open source software, released under the terms of the GNU Public License (version 3). Some of the live coding languages available within Estuary are TidalCycles, for making patterns of musical events, and Punctual, for synthesizing audio and/or video from the same notation.
Estuary: https://estuary.mcmaster.ca/ 
In the autumn of 2020, a group consisting of Barry Wan (HK), Fabrizio Rossi (IT) and Anders Flodin (SE) conducted laboratory work on what happens when coding is preceded by an imaginary plan or an elaborate musical form. The purpose of the study was to map what attachment we in the group had to the musical tradition and what factors influenced our choices. The participants gave themselves the task of preparing instructions so that each participant would have his or her own task within the composition as a whole. It is noteworthy that the task did not force the participants to compose with numerals and symbols. All participants were asked to write down their thoughts, ideas and experiences in a log book, which also provided a basis for the textual presentation of the laboratory work below.
The composers also agreed to limit the playing time to five minutes, a limit the group later abandoned, instead doubling the playing time to ten minutes: after a first test, we felt that the process became too short.
All participants had enough experience to program in Estuary and were well accustomed to reading traditional Western notation. As an addition to the first three sessions, I have chosen to include a fourth session. This session was part of one of the university’s courses, under the guidance of Barry Wan, and was streamed live.
Session 1 – Anders Flodin (Example 1)
The graphic design of the composition is reminiscent of a traditional score, with a given time axis in minutes and a given vertical arrangement of the various parts. All players have well-defined codes that must be entered in the two boxes that each player has to complete. Each part ends with coding “silence”, the word that stops the sound process in Estuary. At the left edge there are three symbols under the heading Textures integral. The symbols are taken from a sonological conceptual apparatus and method of analysis, and show with the help of geometric figures what kind of complexity is desired. The symbols are designed on the idea that the more corners the geometric figure has, the higher the degree of complexity: hexagon = very complex, square = relatively complex and circle = very simple. What counts as very complex, relatively complex or very simple is subjective and can be interpreted by the player according to his or her own perception. I have previously tried the idea of activating the symbols into action, with mixed results depending on how accustomed and familiar the musicians were with improvising. A few words must be said about Sonology, because here I turn the tool for analysis inside out, so that it becomes active symbols for execution instead. The theory of Sonology is mainly practical-pedagogical; it aims to develop a terminology in which teachers, students, composers and practitioners can exchange opinions about music as a sounding phenomenon. The focus is mainly on sound rather than opus, phenomena rather than concepts. Central to Sonology is the understanding of sounds as phenomena and of music as organized audible structures, which can later be described by terminology and symbols.
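As an illustration of how one part of such a score might look in practice, the following minitidal sketch shows two hypothetical boxes of increasing complexity and the closing evaluation. The concrete codes prescribed in the actual score differ; only the use of “silence” as the stopping word is taken from the description above:

```
-- box 1: a very simple texture (circle)
s "bd*2" # gain 0.9

-- box 2: a relatively complex texture (square)
s "[bd sn]*2 [~ cp]" # speed "<1 1.5>"

-- the part ends by overwriting the box with the word that
-- stops the sound process in Estuary:
silence
```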
Session 2 – Fabrizio Rossi (Example 2)
The composition is divided along a time axis from left to right and into an arrangement of the distribution of parts. Each individual part is divided into two levels, implying that there are two boxes to be used by the single coder. The respective content of the parts differs from Example 1 and is less informative about the details of execution, i.e. numerals and symbols. The types of sound from the Dirt samples (sample banks) in The Hacked TidalCycles Documentation can be read out in the parts, e.g. seawolf, coins-can, sax. The information also shows whether the sound should be in the foreground, in the background, or a rhythmic pattern.
Fabrizio Rossi writes in his log book called Operative observations about composing through coding – for a controlled-alea co-improvisation with Estuary:
4) The functions could be defined in this way and with these minitidals (these are hypotheses too):
a) Foreground: a predominant “audio element” with a significative sound or a sort of rhythmical character (sitar/industrial/hoover/koy/rave/ravemono/stab/subroc3d/toys)
b) Background: a not predominant “audio element” with a long-time sound, and with not too marked rhythmic character (seawolf/cosmicg/fire/pebbles) or without it (sax/ade/pad/padlong/prog/tacscan)
c) Rhythmic pattern: “audio element” made by a short time sound(s) with a clear and predominant rhythmic character; it could happen:
c1) in high frequencies (coins/bottle/can/psr)
c2) in low frequencies (bassfoo/909/arp/pluck)
Rossi, F. (2020).
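Rossi’s three functions could be sketched in minitidal roughly as follows. The sample names are taken from his list, while the concrete patterns and parameter values are my own assumptions, not Rossi’s:

```
-- a) foreground: a predominant element with marked rhythmic character
s "sitar*2 [~ sitar]" # n "0 3"

-- b) background: a long, non-rhythmic sound, kept behind the foreground
s "sax" # legato 2 # gain 0.7

-- c1) rhythmic pattern in high frequencies: short sounds, clear pulse
s "coins*4 [~ can]"
```

In a session, each function would occupy one of a coder’s boxes, so that the layered whole realizes the foreground/background/rhythmic-pattern division of the score.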
Session 3 – Barry Wan (Example 3)
Barry Wan composed a graphic score with brush strokes of red, yellow and blue. No other information was provided except oral instructions given to the participants just before the coding session: “player one follow the red”, “player two follow the blue”, “player three follow the yellow dotted line”. The time setup was ten minutes. The composition is open and can be interpreted in a variety of ways; it differs from the others because it does not describe sound or the course of events, other than that the composer orally distributed the parts to be followed in the form of color.
Session 4 – New Media Winter Semester Performance 2021 (Example 4)
This performance was given by a group of live coders and students from the Faculty of Art and Design at J. E. Purkyne University in Ústí nad Labem, Czech Republic, under the guidance of their tutor, PhD student Barry Wan. The group gave an online event at 19.00 CET on 5 February 2021. The text and the numbers must be understood and performed by the live coder, who within the given structure is given opportunities to improvise a course of events and a choice of sound type within certain given frames of content and form. The numbers in the left margin show minutes and correspond to the recorded material. It is noteworthy that the instructions, or score, unlike in the previous examples, appear in the same web application, at the lower right.
The live coder communicates a series of numbers and symbols indicating specific and/or aleatoric material to the group of live coders, to be performed by them. The live coder develops the responses to the numerals and symbols, molding and shaping them into the composition, then creates new numerals and symbols for another series of sounds, a phrase, and continues in this process of composing the piece. The live coder composes in real time, utilizing the numerals and symbols to create the composition in any way they desire. The live coder sometimes knows what he or she will receive from the performers and sometimes does not – the elements of specificity and chance. The live coder composes with what happens in the moment, whether expected or not, and the ability to do so, in real time, is what is required in order to attain a high level of fluency with the coding language. Three of the compositions (Examples 1, 2 and 4) use “the arrow of time” as a compositional strategy and to articulate the musical form. The “arrow of time” is a concept drawn from fundamental physics, first formulated in 1927 by the British astronomer Arthur Eddington.
There are two regions the coder/performer utilizes when entering numerals and symbols:
(1) Region one: the place where the coder/performer indicates silence. It is where the coder/performer prepares the start, the phrase for initiation or ending. The same space is also where the written numbers and symbols for sounds and phrases are initiated – the place of action.
(2) Region two, or the chat function: a field in front of the coder/performer which allows the performers to communicate during the session. Short messages may include information about the changing compositional process and the typology of sound.
Conclusion and Future work
The overview I have presented is, to refer back to the preceding heading, dizzyingly complex. In an article such as this it is not possible to do more than focus on one area and then, however hard it is to take in the whole situation, hope to sketch some of its contours. One conclusion is that when live coding, the coder/performer/composer can ultimately only deal with the whole; in the experience of a session one would probably not focus on every single element of the music at any given time, e.g. timbre process or rhythmic articulation and manifestation. The focus will most likely shift over the course of the session. When studying live coding there are many isolated elements, such as numerals and symbols. One can of course study them and observe the phenomenon of sound and particular elements in detail. But as soon as one wants to make a valid statement about the nature of such an element in the context of music, one has to place it back within a whole – with intuition, listening for the right moment, and playing and communicating with other coders or with oneself in solo mode. One has to link the element back to the construction of the music and look at and listen to how it combines with all the other parts of a musical form. If it is desirable to maintain the requirement, in the traditional sense, that notation should be easy to write down, easy to read and reproducible for others, one may well fear that “musical graphics” can fulfill musical functions only as long as the contact between coder and performer is kept active and the two together can establish certain conventions of reading. Another view, however, is that a graphic score, as in Example 3, can be an opening to an improvisation based on an impression.
Example 1: Anders Flodin, sketches for Study.
Example 2: Fabrizio Rossi, sketches for Study.
Example 3: Barry Wan, sketches for Study.
Example 4: Concerted composition.
Flodin, A. (2020). The Dictionary of Lost Symbols and Numbers, Log book, Autumn, 2020.
Rossi, F. (2020). Operative observations about composing through coding – for a controlled-alea co-improvisation with Estuary. Log book, November, 2020.
Stewart, D. A. (2019). The Hacked TidalCycles Documentation.
Varga, B. A. (1996). Conversations with Iannis Xenakis. London: Faber and Faber Limited. p. 205.
Flodin, A. (2015). Suona, testa, allucinazione, virus: I, II, III. In Piantologi. (ed.) Berggården, S. p. 25-28. Örebro universitet: Föreningen Musikspektra T.
Hambæus, B. (1970). Om notskrifter, Stockholm: Nordiska Musikförlaget.
Karkoschka, E. (1966). Das Schriftbild der Neuen Musik, Celle: Hermann Moeck Verlag. pp. 167-173.
Lerdahl F. (1992). Cognitive Constraints on Compositional Systems. In Contemporary Music Review, 1992, Vol 6, Part 2, (ed.) Moraves P., pp. 97-121. UK: Harwood Academic Publishers GmbH.
Lindqvist C. (2006). Qin, Stockholm: Albert Bonniers Förlag AB. pp. 240-252.
McLean, A., Fanfani, G., Harlizius-Klück, E. (2018). Cyclic Patterns of Movement Across Weaving, Epiplokē and Live Coding (Volume 10, Number 1). Dancecult: Journal of Electronic Music Culture. https://dj.dancecult.net/index.php/dancecult/article/view/1036/941 
de la Motte-Haber, H., Rilling, L., Schröder, J. H. (Hg.) (2011). Dokumente zur Musik des 20. Jahrhunderts (Band 14, Teil 1), Regensburg: Laaber-Verlag. p. 275.
Smalley, D. (1997). Spectromorphology: explaining sound-shapes. In Organised sound, Volume 2, Issue 2, pp. 107-126. Cambridge University Press.
Stockhausen, K. (1996). Sieben Punkte zum Kulturpreis Köln Dankeswort von Stockhausen anläßlich Verleihung des Kulturpreis Köln im Käthe Kollwitz-Museum am 4. November 1996. In Crosscurrents and Counterpoints. (eds.) Broman, P. F., Engebretsen N.A., Alphonce B., p. 224. Skrifter från avdelningen för musikvetenskap, nr. 51. Göteborg: Göteborgs universitet.
Terhardt, E. (1982). Impact of computers on music – an outline. In Music, Mind, and Brain – The Neuropsychology of Music. (ed.) Manfred Clynes, pp. 353-369. New York: Plenum Press.
Thoresen, L. (2012). Exosemantic Analysis Of Music-As-Heard. Paper presented at Proceedings of the Electroacoustic Music Studies Conference, Meaning and Meaningfulness in Electroacoustic Music, EMS, pp. 1-9. Stockholm, June 2012. http://www.ems-network.org/IMG/pdf_EMS12_thoresen.pdf
Thoresen, L. (2007). Form-building transformations – an approach to the aural analysis of emergent musical forms. The Journal of Music and Meaning. JMM 4, 2007, section 3. http://www.musicandmeaning.net/issues/showArticle.php?artID=4.3 
Winckel, F. (1955). Klangstruktur der Musik – Neue Erkenntnisse musik-elektronischer Forschung, Verlag für Radio-Foto-Kinotechnik gmbh, Berlin-Borsigwalde. p. 129.
Xambó, A., Freeman, J., Magerko, B., Shah, P., (2016). Challenges and New Directions for Collaborative Live Coding in the Classroom. In Proceedings of the International Conference on Live Interfaces (ICLI 2016). pp. 65-73. Brighton, UK. http://annaxambo.me/pub/Xambo_et_al_2016_Collaborative_live_coding.pdf 
Estuary: https://estuary.mcmaster.ca/ 
McLean A., Introduction to Live Coding and Visuals https://www.youtube.com/watch?v=-QY2x6aZzqc 
New Media Winter Semester Performance 2021 https://www.youtube.com/watch?v=TQwvVk69sSs&t=188s