No man steps into the same river twice

by Anders Flodin

Putting sound and images together into one unit has developed explosively and has led to major changes in the established art scene. Consequently, more and more young artists have chosen to express themselves through sound, music, video, digital images and animation. This text is based on a performance by Auxig, an international collective of visual and sound artists based in Ústí nad Labem, Czech Republic, at the 6th International Conference on Technologies for Music Notation and Representation (TENOR2021), hosted by Hamburg University of Music and Drama, Germany. The collective Auxig consists of Polina Khatsenka, Barry Wan, Petr Hanžl and Jan Krombholz.

In this text I will use the word non-disciplinary, coined by Chris Locke at the Norwich School of Art and Design (UK). Non-disciplinary is a term suited to the development of a result or object, regardless of whether this involves one medium or several, and it implies less reliance on the existing disciplines.

Background

After the start of sound technologically coupled to image in 1900, the emergence of electroacoustic music in the 1950s was a major turning point in the history of Music and Art. The possibilities of creating Music and Art with different kinds of equipment and tools developed throughout the twentieth century and resulted in new forms of film, video art, mixed art and so on. The idea of putting music, sound and image together was not entirely new, but the existing means were difficult to perform with and quite expensive to use. In the 1980s the personal computer became a groundbreaking new accessibility tool. It not only provided practical benefits; the so-called new technology also helped change the cultural status of Music and Art. The technical development meant that others besides the artist himself or herself could both create and consume projections and sounds. The technology is easily accessible and relatively inexpensive, while at the same time having a contemporary expression.

 
No man steps into the same river twice is described by the members of the collective Auxig as follows:

The audiovisual performance implies tactics of comprovisation, where the audio performers use generative projection by Petr Hanžl as a time-based graphic score. Another side of the performance is improvisation by the musicians, where the sound sources are shared beforehand and equally distributed to be used with no limitations, so all artists develop their own authentic language. Auxig collective has a very decent site-specific approach. The recordings and video materials were taken at nature reservation Slavkovský les (Karlovarský kraj, CZ) Ohře river and are reflecting the current state of the river and its surroundings, including the low levels of water, sound pollution from airplanes and factories etc. The concept of the river flow is being reflected by developing a composition intensity from gentle, soft sounds to a massive soundfield.

Auxig https://www.klg-tenor-21.de/tenor/about/ [2021 11 13]

A short description of the performance

The collective sits in front of the screen and the performance begins with a simple but clear reference to rippling water. Horizontal, projected blue and white ribbons flicker over a dark surface. Sound and image here share one and the same layer with middle-ground function, and the image gradually acquires a stronger profile, similar to an undulating aurora borealis. The layers of image and sound then separate from one another, the projection over time acquiring a stronger intensity and a clearer, more independent profile. The layers reunite, but this time they are more intense and narrow. White noise is combined with a clear reference to a rapid underwater sequence in which various objects flicker past. There is very little opportunity to perceive individual details, even if they are vaguely implied in both projection and sound. The projected surface is reshaped into a yellow-green aurora borealis and separated once more from the common middle-ground layer. I can see one of the members scratching on a gramophone and building up a new, short iterative sound object. A high viscosity contributes to the projection and the return of the collected sounds, which is also the final word of the piece.

Analysis

In this short description I have referred to two kinds of layers (middle-ground and foreground) that combine the two artistic expressions, music and image. This requires an explanation. In many textures, the brain is able to perceive several simultaneous layers or structures. One such layer may itself consist of several layer-elements. The layers may have different functions in relation to each other, such as foreground or background in visual fields. When a performer gives prominence to one layer or layer-element over the others, so that it distinguishes itself as the more prominent, it is said to have a strong intensity of profile. When the same layer has a strong intensity of profile for a certain time, it is said to have a foreground function. Layers in an ambiguous, intermediate, or constantly changing position with regard to intensity of profile are said to have a middle-ground function. Layers with a weak intensity of profile, and thus less prominence, are said to have a background function.

The members' description of the performance gives the viewer keys to the collective's common position and working process. This holistic attitude in the creative process provides the conditions for getting to know one another's artistic medium and favors a non-disciplinary common platform. With this as a basis, certain common agreements between the performers can be reached, such as the use of foreground, middle-ground and background, as well as the type of profile and shape, as described in the text by Auxig:

The audiovisual performance implies tactics of comprovisation, where the audio performers use generative projection by Petr Hanžl as a time-based graphic score.

Auxig https://www.klg-tenor-21.de/tenor/about/ [2021 11 13]

Auxig's uniqueness lies in a shared world of sound and images that takes the physical, concrete and material world as its basis and points in a non-disciplinary direction.

Questions of interest, albeit too extensive for this text, are: what does the practitioner perceive, what does the knowledgeable observer perceive, and what does the average observer perceive?

Die Frage, ob ich jemanden mit meiner Musik ansprechen will, stellt sich für mich gar nicht. Es ist wie in der wissenschaftlichen Forschung: man versucht, ein Problem zu lösen, aus Interesse an der Sache, und kümmert sich nicht um den praktischen Nutzen. So ist auch die Frage, ob jemand das braucht, was ich mache, unwesentlich. Ich lebe heute und hier, bin ungewollt Teil einer Kultur, und was ich produziere, wird sich mit der Zeit durchsetzen oder nicht. Man kann die Relevanz eines Kunstwerkes für eine Kultur erst im Nachhinein beurteilen.

György Ligeti https://www.youtube.com/watch?v=4AhKWofVV0E [2021 11 03]

(My translation: “The question of whom my music addresses does not arise for me. It is as in scientific research: one tries to solve a problem out of interest in the matter, and does not care about its practical use. So the question of whether someone needs what I make is inessential. I live here and now, I am involuntarily part of a culture, and what I produce will prevail over time or it will not. The relevance of a work of art for a culture can only be judged retrospectively.”)

References

Andersson, Lars Gustaf, Sundblom, John, Söderbergh Widding, Astrid (2006). Konst som rörlig bild – från Diagonalsymfonin till Whiteout. Sveriges Allmänna Konstförenings årsbok 2006, Bokförlaget Langenskiöld, Fälth & Hässler, Värnamo. pp. 15-95.

Ligeti, György. https://www.youtube.com/watch?v=4AhKWofVV0E [2021 11 03]

Locke, Chris (2006). UK Art and Design Education and Inter-Disciplinary. In Art Studies – Between Method and Fancy, ed. Assoc. Prof. Dr. Arūnas Gelūnas, pp. 61-78. Vilnius Academy of Fine Art Press, Vilnius.

Pound, Ezra (1927). Antheil and the Treatise on Harmony, Pascal Covici, Publisher, Inc., Chicago. pp. 51-52

Rasmussen, Karl Aage (1998). Kan man høre tiden – essays om musik og mennesker, Gyldendal, Nordisk Bok Center A/S, Haslev. pp. 214-222.

Electronic links

Auxig: https://www.klg-tenor-21.de/tenor/about/ [2021 11 03]

No man steps into the same river twice: https://www.youtube.com/watch?v=fKFwq5aJo5E&t=237s [2021 11 03]

Pound, Ezra (1927). Antheil and the Treatise on Harmony: http://waltercosand.com/CosandScores [2021 12 16]

Analysis of Turkar Gasimzada’s “There were noises and tiny bluish – yellow lights”


by Anders Flodin

What follows is an analysis of There were noises and tiny bluish – yellow lights (2020) for prepared piano and MIDI keyboard/electronics by the Azerbaijani composer Turkar Gasimzada. This composition should not be confused with the composer's earlier composition with a similar title, noises and tiny-bluish yellow lights (2014).

Introduction

Most people who, for one reason or another, write or talk about music usually explain that words are not really enough to describe, analyse or even comment on abstract sounds or silences. They often resort to other verbal and visual aids in seeking to approach the elusive music – this text is no exception. Nevertheless, even if the word is menacing because it has a kind of “power”, the absence of words also has power. The word is one of man's most important belongings, and the way we use – or fail to use – words has consequences for our learning of various human skills, for our attitudes towards what surrounds us, and for our most pressing situations, needs and experiences. In this context, however, it is important to emphasise that words matter: I see words and sentences as a constant attempt to sublimate the language. I am convinced we will never get to the core of the work, but we can deepen our listening with the help and support of words and sentences. In this analysis I will use an adaptation of the Aural Sonology Project and its method of analysing sonic and structural aspects of music-as-heard.

Clear observation of the moment – the ear decides

The composition is divided into three clear sections, each consisting of a constellation of several sentence-fields. The first section is well defined: the piano tone A is established throughout as a quasi-harmonic series starting on the fundamental, while harmonic progressions, rapid broken figurations and processed electronic sounds are interpolated above it as object-fields. The presentation of a key center may be generated from a unifying harmonic idea from which musical growth develops.
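
As a point of reference, an idealised harmonic series over a fundamental follows the relation f_n = n · f_1, with n = 1, 2, 3, and so on. Taking the piano's low A (A1, roughly 55 Hz) as the fundamental is only my assumption for illustration, since the passage above specifies the pitch class rather than the octave; it would give partials at approximately 55, 110, 165, 220, 275 and 330 Hz.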

Later on a transition occurs: a chromatic cluster of minor seconds, framed by B–D, is repeated three times at decreasing speed, ritardando. This gives a harmonic-textural momentum and clarifies the ending phrase, which contains elements intrinsic to the main body of the sound itself.

A few words are in order about the processed electronic sounds, considered apart from the composition as a whole. The typology of form-building elements is very complex and very simple at the same time. They meet in a paradoxical, ambivalent union with a perceptually simple overall character. The contrast between (integral) whole and divided elements, and the contrast between line and texture, are sharpened when some of the sounding entities make a rising volume, a crescendo, which in this context becomes unexpected. Some sounds are sinusoidal sound objects while others are dystonic sound objects – sounds formed by a mixture of pitched elements and clusters of sounds.

The second section changes in content and becomes more tonally oriented, with harmonic intervals scattered both in time and in intervallic tension, combined with metrically free lines. The section contains chord structures, and sometimes monotony develops from the overuse of simple devices; yet despite the textural complexity the sonorities are clear and refreshing. No electronic sounds are present in the first part of the section until a voice reads an English text, “There were noises and tiny bluish yellow lights…”. The piano then flows out in emulated electronic sounds, mostly with B as a reference. The idea of a cadential chord effect leading towards the third section is obvious.


A third section begins immediately when the English text ends and a new central tone, E, is established. Rapid broken figurations and processed electronic sounds are interpolated above it, and slightly later the section continues with a spoken voice, “Whatever I looked at…”. A chromatic cluster of minor seconds, framed by B–D, as in the first section – but played just once – and a lightly touched harmonic node on the bass string end the composition.

A few more observations

Every section ends with an accumulation. By accumulation I mean a gradual superimposition of sounds: a sort of crescendo, not in intensity but in quality (more sounds = more colours), which sometimes stops and then starts again. Every section of the piece is well characterized and linked to all preceding and following sections.

Alternating or discontinuous articulation creates relatively high complexity. Two or three foregrounds combined with one background give a higher complexity than the opposite case, i.e. one foreground with two or three backgrounds. The horizontal interrelations are balanced by their ratio in time.

As a listener I’m able to focus my attention on different aspects of a texture. By doing so, I bring some sonic elements into relief while relegating others to the periphery of attention. 

Concluding remarks

The method of analysis applied to this work demonstrates the remarkable connection between the prepared piano and the MIDI keyboard/electronics accomplished by Turkar Gasimzada. Gasimzada's composition reflects an array of aesthetic influences and an exploration of four distinct sound objects: piano, prepared piano, processed electronic sounds and a voice.

References

https://www.youtube.com/watch?v=l6w5MF2h8jI [2021 09 13]

http://www.auralsonology.com/ [2021 09 13]

https://www.turkargasimzada.com/turkar [2021 09 21]

Åt helvete med alla oljud

– mina minnen av Sune Smedeby (in Swedish)

av Anders Flodin

Sune Smedeby och Anders Flodin i Smedby, Södermanland (1985).
Foto: Anders Flodin

”Käre Anders!

Det blir inte många rader idag, eftersom jag är partiellt (men övergående) förlamad i en högerhandsnerv efter kranskärlsoperationen, som jag genomgick den 19 mars. Jo, den blev av till slut, även om jag drog mig i det längsta. Det är mer än ett år sedan jag höll en lektion eller skrev en not. Bättre att dö på operationsbordet än att gå omkring som en zombie. Den sista tiden har varit ett helvete av smärta, men jag har börjat återhämta mig så smått, och ibland skönjer jag en och annan ljusglimt.”

Så inleder Sune Smedeby ett brev till mig daterat den 13:e april 1991. När jag några år senare besöker honom i hans nyinflyttade lägenhet i Vivalla är han märkt av sin sjukdom, ansiktet är fårat och händerna har börjat darra. Hit har han flyttat från sin gamla bostad i Brickebacken där han under en lång tid alltmer irriterat sig över grannarnas alltför volymstarka spelande på Hi-Fi-anläggningen, som han uttrycker det. Hans sjukdom begränsar rörelsemönstret men trots den kraftiga medicineringen har han till min glädje återupptagit sitt komponerande. Vid detta tillfälle samtalar vi om en av hans sista kompositioner Dunka död – åt helvete med alla oljud som är en bitterljuv programmusik över folkhemmets bristande musikestetik och illustrerar samtidigt hans egen, outhärdliga situation i lägenheten i Brickebacken – trängd mellan ljudvågorna.

Första gången vi träffades var hösten 1980 när jag studerade vid Kävesta folkhögskola, då i Gamla folkets hus lokaler på Järnvägsgatan 8. Han bar alltid kavaj och undervisade oss elever i gehörs-, musik- och harmonilära. När någon elev intresserade sig för hans kunskapsområden visade han ett stort hjärta för de strävanden som utförts genom kommentarer som alltid var tankeväckande samtidigt som han letade efter halstablettasken Tulo i kavajfickan.

Hans praktiska kännedom utanför handboken inom det viktiga området ”instrument” var ovärderlig. Hans gedigna instrumentkunskaper grundade sig på att han lärt sig spela de flesta instrument inom den västerländska konstmusiktraditionen i ungdomsåren. Jag minns att han vid något tillfälle under en lektion för oss spelade upp någon av sina elektroakustiska kompositioner som han jobbat med för bland andra Miklós Maros mellan åren 1974–75 på kurser anordnade av Elektronmusikstudion i Stockholm. När jag så småningom började ta privatlektioner i kontrapunkt, instrumentation och komposition fann jag en annan person än den jag lärt känna i skolans miljö. Sommartid for vi ofta till hans sommarnöje utanför Tumbo i Södermanland och på resorna dit hade vi livliga diskussioner om musik. Väl framme vid stugan fick jag klyva ved och tjänade på det sättet in för ytterligare några lektioner – pengar ville han aldrig ta emot.

Under flera år åkte jag hem till hans lägenhet i Brickebacken och hade oförglömliga lektioner. Lägenheten var asketiskt inredd och hans ordningssinne avspeglade sig också i möblering och utsmyckning: spinetten stod längs långväggen, på kortväggen hängde två fickur i sina silverkedjor symmetriskt i förhållande till den gamla landskapskartan över hembygden i Södermanland. Lektionerna brukade inledas med att vi satte oss ned vid soffbordet där han kritiskt granskade och nynnade mina kontrapunktövningar med Knud Jeppesens bok Kontrapunkt – på danska – liggande bredvid sig på bordet. Vid tvåstämmig kontrapunktisk sats brukade vi dela upp stämmorna mellan oss och sjöng alltid igenom alla exempel som jag jobbat med sedan sist. ”Det är vokalpolyfoni vi studerar och då skall alla exempel sjungas”, menade han. Trots tidigare hjärtinfarkter och läkarnas återkommande varningar rökte han ivrigt sina John Silver utan filter under tiden han granskade mina exempel, när han inte rökte hade han alltid portionssnus av märket Tre Ankare tillhanda. När lektionstiden var slut följde han mig till tamburen och räknade snabbt ut hur lång tid det skulle ta för mig att gå de dryga 200 metrarna till busshållplatsen. Han var besatt av punktlighet och ett exempel på detta pedanteri var hans tidtagning av mellantider för att få ut en genomsnittlig tid för hur lång tid det tog att gå till hållplatsen. Tyvärr åts hans dagar upp av detta pedanteri som med åren tagit makten över honom.

När hans 50-årsdag närmade sig fick jag i uppdrag att köpa med mig några flaskor vitt vin av märket Soave och sherry om någon styrelsemedlem från Föreningen svenska tonsättare skulle komma på besök. Ingen kom, och jag minns hur besviken han var när jag själv kom för att gratulera honom.

Återkommande berättade han för mig i positiva ordalag om sina studier för Karl-Birger Blomdahl och György Ligeti, i mindre positiva ordalag om Åke Uddén och Lars-Erik Larsson. György Ligeti hade han mött på Kungliga Musikhögskolan i Stockholm 1962–1963 som ett led i den seminarieverksamhet som fanns på institutionen.

Ligeti skriver i boken Three Aspects of New Music följande om Smedebys komposition för 8 violiner och 4 violor:

Beispiel 6 – von Sune Smedeby – stellt einen Zwölfton-Komplex mit internen Veränderungen dar; dieser Komplex ist jedoch kein Cluster: die Tonhöhen sind nach einem bestimmten Struktur-Plan übereinandergeschichtet. Die Tonhöhen-, Zeit- und Klangfarben-Notation ist aus der Partitur ohne weiteres zu verstehen, ebenso die dynamische Notation, wobei die Dicke der einzelnen Figuren die Intensität anzeigt.

Ligeti, 1968, sid. 27

(Min översättning: “Exempel 6 – av Sune Smedeby – utgörs av en tolvtonsgrupp, dock inget cluster, som är satt i en inre förändring. Tonhöjderna följer en förutbestämd skiktad ordning. Notation av tonhöjder, klangfärg och tid är lätt att förstå utifrån partituret vilket också gäller den dynamiska notationen som med den enskilda figurens omfattning visar intensitet.”)

Smedeby var en ivrig beundrare av tidig jazzmusik och framhöll ofta Bix Beiderbecke som en av de riktigt stora jazzmusikerna. Någon gång berättade han också om sin ungdoms jazzspel på tuba och att han varit musikanförare i världens äldsta studentorkester, Hornboskapen, vid Södermanlands-Nerikes nation i Uppsala. Möjligheten att föra in improvisatoriska element i musiken var ständigt närvarande i både undervisning och i hans kompositioner; alltifrån generalbas till fri improvisation. Sommaren 1974 hade han deltagit med ett seminarium på Ung Nordisk Musikfest i Piteå om att styra improvisation med hjälp av en improvisationsmaskin som han hade fått idén till. Förutom att leda detta seminarium hade han också fått i uppdrag att sitta med i den svenska juryn tillsammans med Jan W. Morthenson och Georg Riedel.

De improvisatoriska elementen, som han återkommer till i flera av sina kompositioner, vittnar inte bara om intresse för händelser direkt i stunden utan också om hur formdelar kan improviseras och blandas. När jag 1983 fick möjlighet att studera in hans XXI miniatyrer för klaverinstrument jobbade vi intensivt med att ta fram en form som det skulle vara möjligt att improvisera över. Kompositionen är ett sällsynt sammelsurium av korta musikexempel på västerländsk musik i varierande stilarter. De flesta av styckena är utformade som hyllningar till äldre tonsättare och musiker, från Pythagoras till Webern. Oftast rör det sig om citat ur något av deras mest kända verk, citat som omarbetats på skilda sätt, ibland med en vänligt satirisk udd. Det finns dessutom exempel på olika tidsstilar och kompositionsformer, från medeltidens organum (tomma klanger, parallella kvartar och kvinter) till vår tids clusterteknik (täta, skarpt skärande tonklungor).

När vi träffades sista gången hade han förlorat vikt och såg ynklig ut. Hans ansikte var gult och glåmigt med en myriad av tunna streck under ögonen. Man såg tydligt att han led. Man kan inte säga att han var lätt att komma i kontakt med och jag tror på ett vis att musiken alltid var så mycket igång inom honom, så att öppna en diskussion var svårt. Skvaller låg aldrig för honom. Här kom det dock av och till några små antydningar, som visade att det fanns ett glödande temperament så att det kunde slå gnistor omkring honom. Han hade en känsla för mig och min musik från begynnelsen som, när jag nu ser tillbaka, var mirakulös eftersom jag fick uppmuntran redan i starten. Det var flott! Och det har jag aldrig glömt – den formen av generositet. Att möta en sådan människa sätter enbart positiva spår. Jag glömmer aldrig det ögonblick när han tittade igenom min rytmiska fuga, vände huvudet mot mig och sade: ”överraskande, även om det inte är någon fuga”. Jag kan fortfarande minnas tonfallet, för då blev jag överraskad. Jag var mycket, mycket lycklig. Det var liksom att hitta hem – att bli accepterad för vad jag sysslade med. Han var en klarsynt person som jag tyckte mycket om.

Otryckta källor:

Brev från Sune Smedeby till artikelförfattaren 1991-04-13

Litteratur:

Bergendal, Göran (2001). 33 nya svenska komponister, Kungl. Musikaliska Akademiens Skriftserie nr. 94, sid. 170, Bo Ejeby förlag, Växjö.

Flodin, Anders (1986). Att vara tonsättare i Närke, problem – möjligheter – framtid, specialarbete vid Musikhögskolan i Örebro, sid. 24–25, Örebro.

Grundström, Harald, Smedeby, Sune (1963). Lapska sånger: Texter och melodier från svenska Lappland – II. Sånger från Arjeplog och Arvidsjaur. Skrifter utgivna genom Landsmåls- och Folkminnesarkivet i Uppsala, Ser. C:2II, Almqvist & Wiksells Boktryckeri Aktiebolag, Uppsala. http://www.divaportal.org/smash/get/diva2:1097322/FULLTEXT01.pdf [2021-06-22]

Hambræus, Bengt (1970). Om notskrifter: Paleografi-Tradition- Förnyelse, Publikationer utgivna av Kungl. Musikaliska Akademiens Skriftserie med Musikhögskolan, nr 6, sid. 46. AB Nordiska Musikförlaget, Stockholm.

Ligeti, György. Lutosławski, Witold. & Lidholm, Ingvar (1968). Three Aspects of New Music sid. 27, Nordiska Musikförlaget, Stockholm.

I: Nerikes Allehanda, sid. 16, 15 oktober, 1997.

 – Nerikes Allehanda, sid. 3, 14 mars, 1981.

I: Morgunblaðið, sid. 5, 15 augusti, 1974 https://timarit.is/page/1455051#page/n4/mode/2up [2021-06-22]

Smedeby, Sune (1978). Harmonilära: från treklang till nonackord, Eriksförlag, Stockholm.

Sohlmans musiklexikon (1979). Sohlmans Förlag AB, Stockholm.

Tonfallet nr. 7, sid. 9, 11 april, 1980.

Fonogram:

Kävesta kammarkör (HEJ LP-015)

Sune Smedeby – XXI miniatyrer för klaverinstrument. Sveriges Radio, P2, 21 maj kl. 21.15, 1983.

Elektroniska källor:

http://www.unm.se/archive/1970-79/UNM1974.pdf [2012-01-30]

http://ribexibalba.com/eyemusic/ [2012-01-30]

http://old.krutgubbarna.se/other/krigsman.pdf [2012-01-30]

https://www.svenskmusik.org/sv/s%C3%B6k?person=5332 [2021-06-22]

What happens to the words' denominations when a composer turns to composing music in a language other than his or her native language?

by Anders Flodin

Language is an important part of musical composition, especially in the art of combining music and text. But what happens to the words' denominations when a composer turns to composing music in a language other than his or her native language? How does the language affect the composition process?

Velimir Khlebnikov was one of the foremost writers of Russian futurism, and he is counted among the pioneers of modern literature. His poetry is a sound phenomenon. The language he used is music. The vowels are the strings of the alphabet, the consonants are the tonic forces of the spirit. The poetry moves within two major areas: the magically scientific language Zaum and the game of word riddles and Slavic phonemes (Zaum is a neologism, coined by Aleksei Kruchenykh, that describes words or language possessing indeterminate meaning). Khlebnikov sought to produce an international, basic world language rather than pure sound poems. His poetry has embedded Zaum parts, while these are explained in normal language. The dialects, the language of the sectarians, the witches, the devils and the gods can be found here. It is the language of nature, animals and time.

The Italian Filippo Tommaso Marinetti's graphic poems, depicted by words of freedom and without the Greek grammar, circle around the beliefs of futurism in a world of technological advancement: the car, the telegraph and the war. The musical realization uses the futuristic principles of Luigi Russolo's sound art, which also gave impetus to the later musique concrète. Many of the futurists' sound poems have deliberately been deprived of their semantic content and should rather be regarded as an expression of immediate and original human emotions.

When Dmitri Shostakovich composed his song cycle “From Jewish Folk Poetry” he consulted his colleague Mieczysław Weinberg's wife, Natalia, daughter of the famous and eminent Jewish actor Solomon Mikhoels. The poems Shostakovich intended to set were Russian translations of poems in Yiddish. Shostakovich then learnt the articulation of the original texts and adapted the composition to both the Russian and the Yiddish language. During the Soviet era the attitude towards Yiddish was very negative, so including these poems in the preface was impossible. Today there is an edition with the original text in Yiddish as well as in Russian. In Volkov's book Testimony: The Memoirs of Dmitri Shostakovich it is said that Anna Akhmatova expressed her displeasure over the “weak words” he used for his vocal cycle “From Jewish Folk Poetry”. Volkov writes that Shostakovich did not want to discuss this with the famous poet, but indeed he did not think she had understood the music in this case, or rather that “she didn't understand how the music was connected to the word.”

With this as background, what is the difference not only between language and music, but also between a foreign language and music put together? Juan G. Roederer writes in his article Physical and Neuropsychological Foundations of Music:

With language, cortical areas emerged specializing in linguistic information processing. Language per se is of course, a learned ability: it is not inborn. What is inborn are the neural networks capable of handling this task and the motivational drive to acquire language […] Could it be that inborn in humans is a genetic motivation to train the language-handling network in the processing of simple, organized, but otherwise biologically irrelevant sound patterns – as they indeed occur in music?

Roederer, 1982, pp. 41-45. 

When I compose music with a text in a language other than my native one, I ask myself: why do I use a foreign language as a soundboard for my composition? When I look back on my process I can see a pattern which can be separated into two main fields:

– I would like to adopt a new culture
– The sound of the new language is appealing to me 

References 

Baumgarth, Christa (1966). Geschichte des Futurismus. Hamburg: Rowohlt. pp. 88-91.

Dempsey, Christopher (2009). On Zaum and its use in Victory Over the Sun. In Essays on Victory Over the Sun, Volume 2, ed. Patricia Railing, pp. 57-65. East Sussex: Artists Bookworks.

Rikskonserter (1983). Musik i vår tid ‘83 EXVOCO. Stockholm.

Rikskonserter (1986). Musik i vår tid ‘86 EXVOCO. Stockholm.

Roederer, Juan G. (1982). Physical and Neuropsychological Foundations of Music. In Music, Mind, and Brain, ed. Manfred Clynes, pp. 41-45. New York: Plenum Press.

Shostakovich, Dmitry (1982). Collected works, volume thirty-one, Romances and songs for voices and Orchestra. Moscow: State publishers “Music”. pp. 104-176.

Steiner, Evgeny (2009). On Zaum and its use in Victory Over the Sun. In Essays on Victory Over the Sun, Volume 1, ed. Patricia Railing, pp. 153-154. East Sussex: Artists Bookworks.

Technical Manifesto of Futurist Literature. https://www.wdl.org/en/item/20031/view/1/1/ [20190131]

Volkov, Solomon (1979). Testimony: The Memoirs of Dmitri Shostakovich. New York: Harper & Row. pp. 273-274.

Wilson, Elisabeth (2006). Shostakovich: A Life Remembered. London: Faber and Faber. pp. 260-272.

Numeral and Symbolic representation when coding in Estuary: Browser-based Collaborative Live Coding

by Anders Flodin

Abstract. This paper focuses on laboratory work in the art of live coding and in the use of Estuary, a browser-based collaborative projectional editing environment built on top of the TidalCycles language for the live coding of musical patterns. The paper explores the manner in which notation with numerals and symbols is encoded, processed and executed, with the aim of identifying the perceptual and practical boundaries of presenting notation on screen. The proto-compositions used in the article are composed by the author, Barry Wan – PhD student in Visual Communication at Jan Evangelista Purkyně University in Ústí nad Labem, Czech Republic – and Fabrizio Rossi – Diploma Accademico di Secondo Livello in Composizione, Conservatorio Statale di Musica “Alfredo Casella”, L'Aquila, Italy.

Keywords: collaboration; live coding; composition; musical form; Sonology

Introduction

A number of years ago I read the speech of thanks that Karlheinz Stockhausen gave after receiving the Cologne Culture Prize in 1996, and it aroused my curiosity. In one of the text's seven sections, Stockhausen describes the development of electroacoustic music and digital technology and places the composer in the role of a director, no longer dependent on an interpreter:

Überlegen Sie einmal, was es historisch bedeutet, daß zum ersten Mal in der Geschichte ein Komponist nicht einfach sagen kann: “Hier ist meine Partitur – sehen Sie, wie Sie damit zurechtkommen. Sie sind der Interpret, Sie sind ja intelligent, es kann auch ruhig die eine oder andere Interpretation überdauern, bis Sie das mal richtig spielen können und keine Fehler mehr drin sind. Ich nehme das in Kauf, denn die Zukunft wird es irgendwann bringen; wenn ich berühmt bin, wird das schon von selbst kommen.” Solch eine Argumentation ist heute Selbstbetrug.

Stockhausen, 1996, p. 224.

(My translation: “Think about what it means historically that, for the first time in history, a composer cannot simply say: ‘Here is my score – see how you deal with it. You are the interpreter, you are intelligent; one or the other interpretation can easily survive until you can play it right and there are no more mistakes in it. I accept that, because the future will bring it sometime; if I am famous, it will come by itself.’ Such argumentation today is self-deception.”)

What I take to be the spirit of the speech is that the new technology makes it possible for the composer to be eternal: the compositions are preserved digitally and without an interpretation, and the composer does not necessarily have to be not only a composer but also a musician and performer of his or her own music. But is it that simple? In this article I want to investigate how a coordination of musical material – in the form of a score – is put together in a new musical context.

Information the Live coder communicates through Live coding

Composing music on paper or with the help of a computer does not mean that all other music is disorganized. Most of the music created throughout history has come into being without these aids and has been passed on in an oral tradition. But if there is a need to preserve or organize music in one form or another, one encounters, early in the history of music, both numerals and symbols used to illustrate the musical course in a score. One example is the so-called figured bass, a bass part intended primarily for a keyboard instrument, with Arabic figures indicating the harmonies to be played above it. The figured bass system originated at the beginning of the 17th century and was universally employed until about the middle of the 18th century. It was designed to facilitate the accompaniment of one or more solo voices or instruments. Practice was not always consistent, but the following principles were generally observed: a note without figures implies the fifth and the third above it in the given key; an accidental without any figure refers to the third of the chord. Provided that he or she uses the correct harmony, the performer is free to dispose the notes of the chord as he or she likes, i.e. close together or widely spaced. Since the practice of playing from figured bass is no longer widely cultivated, modern editions of old music generally include a fully written-out part for the harpsichord, piano or organ. No written part, however, can be a completely adequate substitute for ’realization’ at the keyboard of the composer's shorthand, and many written parts of this kind do positive harm by neglecting to observe the conventions of the period.
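
To make the principle concrete, here is a schematic illustration; the key of C major and the particular chords are my own assumptions, chosen only to show how the figures are read:

bass note C, no figures: C E G (third and fifth above the bass, a root-position triad)
bass note E, figure 6: E G C (a sixth above the bass instead of the fifth, a first-inversion triad)
bass note G, figure 7: G B D F (third, fifth and seventh above the bass, a dominant seventh chord in C)

In each case the performer remains free to spread or double these chord tones at the keyboard, as described above.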

The notation of the Chinese traditional instrument Qin is based on a system in which the forms of abbreviation for the right- and left-hand strikes are combined with numbers for the seven strings and the thirteen hui – the places where the natural harmonics are produced. Together, all this information forms a symbol, a graphic figure reminiscent of a Chinese character. An experienced qin player can easily identify the significant units; when qin players talk about different characters, they read out the parts one by one. In Western music, Roman numerals represent the chord whose root note is a given scale degree, and a traditional I-IV-V-I cadence is understandable as content for a musician trained in Western music. When coding music, two questions arise. Is there a compositional grammar common to the written, transmitted musical tradition – in the form of changes, adaptations and variations – and the planned compositional idea presented as a document? And if there are differences, what do the two have in common? The composer and music researcher Fred Lerdahl distinguishes between natural and artificial compositional grammar. The natural compositional grammar is the one in which contemporary musicians and listeners can intuitively orient themselves and which is shared by the members of a musical culture. Composers like to explore the boundary between natural and artificial compositional grammar: by “stretching out” contemporary music theory, an artificial structure is created. Skilled musicians also explore this area as they perform the new music. Through this interplay, the boundary between natural and artificial slowly shifts:

Where does a compositional grammar come from? The answer varies, but a few generalizations may be helpful. Let us distinguish between a “natural” and an “artificial” compositional grammar. A natural grammar arises spontaneously in a musical culture. An artificial grammar is the conscious invention of an individual or group within a culture. The two mix fruitfully in a complex and long-lived musical culture such as that of Western tonality. A natural grammar will dominate in a culture emphasizing improvisation and encouraging active participation of the community in all the varieties of musical behaviour. An artificial grammar will tend to dominate in a culture that utilizes musical notation, that is self-conscious, and that separates musical activity into composer, performer, and listener.
The gap between compositional and listening grammars arises only when the compositional grammar is “artificial”, when there is a split between production and consumption. Such a gap, incidentally, cannot arise so easily in human language. People must communicate; a member of a culture must master a linguistic grammar common to both speaking and hearing. But music has primarily an aesthetic function and need not communicate its specified structure. Hidden musical organizations can and do appear. A natural compositional grammar depends on the listening grammar as a source. Otherwise the various musical functions could not evolve in such a spontaneous and unified fashion. An artificial compositional grammar, on the other hand, can have a variety of sources – metaphysical, numerical, historical, or whatever. It can be desirable for an artificial grammar to grow out of a natural grammar; think, for example, of the salutary role that Fux (1725) played in the history of tonality. The trouble starts only when the artificial grammar loses touch with the listening grammar.

Lerdahl, 1992, pp. 100-101.

Lerdahl points out the importance of the mix between the two grammars, and that the trouble starts when the artificial grammar loses touch with the listening grammar. From some kind of objective point of view the terms natural and artificial are ill-advised, but they still describe how contemporaries – of the composer and his music – experience the interface between playing inside the tradition and starting to wrestle with something new. And perhaps, over time, an intuitive understanding of the new emerges, perhaps through hands and thought.

When using Live coding to create a musical thought, the praxis is to program with numerals or symbols representing a sound in time and space in order to obtain the instruction and function:

Live coders program in conversation with their machine, dynamically adding instructions and functions to running programs. Here there is no distinction between creating and running a piece of software – its execution is controlled through edits to its source code. Live coding has recently become popular in performance, where software is written before an audience in order to generate music and video for them to enjoy.

McLean, A. Griffiths, D. https://www.gold.ac.uk/calendar/?id=2222 [20210219]

An attempt to systematize the different types of notation in general can be made using two categories: action-based notation and result-based notation. Action-based notation can include, for example, placements of the fingers or various tablatures, or impulses to the performer to shape a course that has been outlined by a figured bass or by different graphic curves. The second category, result-based notation, refers to all notation of notes where one can more or less imagine a sounding result without having to be familiar with the special peculiarities of different instruments. The emphasis is on the descriptive function of the character material, and we can count our conventional notation, with all its different variants, among such notation systems. In traditional notation the two functions are combined. Where, then, does Live coding as notation belong in this systematization? Since it is a notation with symbols and numerals, it would be categorized under action-based notation, because it is in many ways a tablature.

Materials and Methods 

On the website that forms the entrance to Estuary, the open-source software is described as follows:

Estuary is a platform for collaboration and learning through live coding. It enables you to experiment with sound, music, and visuals in a web browser. Estuary brings together a curated collection of live coding languages in a single environment, without the requirement to install software (other than a web browser), and with support for networked ensembles (whether in the same room or distributed around the world). Estuary is free and open source software, released under the terms of the GNU Public License (version 3). Some of the live coding languages available within Estuary are TidalCycles, for making patterns of musical events, and Punctual, for synthesizing audio and/or video from the same notation.

Estuary: https://estuary.mcmaster.ca/ [20210216]
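
To give a concrete idea of the notation involved, the following is a minimal sketch of successive states of a single Estuary text box, written in the MiniTidal (TidalCycles) notation; the sample names bd, sn, hh and pad belong to the standard Dirt-Samples banks, and the parameter values are illustrative assumptions only:

s "bd sn hh sn" # gain 0.9
slow 2 $ s "pad" # room 0.5
silence

Evaluating a new expression in the same box replaces the pattern already running there, and evaluating silence stops it, which is how the parts in Session 1 below are ended.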

In the autumn of 2020, a group consisting of Barry Wan (HK), Fabrizio Rossi (IT) and Anders Flodin (SE) conducted some laboratory work on what happens when coding is preceded by an imaginary plan or an elaborate musical form. The purpose of the study was to map what attachment we in the group had to the musical tradition and what factors influenced our choices. The participants gave themselves the task of preparing instructions so that each participant could have their own task in the composition as a whole. It is noteworthy that the task did not force the participants to compose with numerals and symbols. All participants were asked to write down their thoughts, ideas and experiences in a log book, which also provided a basis for the laboratory's textual presentation below.

The composers also agreed to limit the playing time to five minutes, a limit the group later abandoned, instead doubling the playing time to ten minutes – after a first test, we felt that the process became too short.

All participants had enough experience to be able to program in Estuary and were well accustomed to reading traditional Western notation. As an addition to the first three sessions, I have chosen to include a fourth session. This session was part of one of the university's courses, under the guidance of Barry Wan, and was streamed live.

Session 1 – Anders Flodin (Example 1)

The graphic design of the composition is reminiscent of a traditional score, with a given time axis in minutes and a given vertical arrangement of the distribution of the various parts. All players have well-defined codes that must be entered in the two boxes that each player has to complete. Each part ends with coding “silence”, which is the word that stops the sound process in Estuary. At the left edge there are three symbols under the heading Textures integral. The symbols are taken from a sonological conceptual apparatus and method of analysis, and they show, with the help of geometric figures, what kind of complexity is desirable. The symbols are designed on the idea that the more corners the geometric figure has, the higher the degree of complexity: hexagon = very complex, square = relatively complex and circle = very simple. What is very complex, relatively complex or very simple is subjective and can be interpreted by the player according to his or her perception. I have previously tried the idea of turning the symbols into active instructions, with mixed results depending on the musicians' habit of and familiarity with improvising. A few words must be said about Sonology, because I turn the tool for analysis inside out so that it becomes active symbols for execution instead. The theory of sonology is mainly practical-pedagogical, and it aims to develop a terminology with which teachers, students, composers and practitioners can exchange opinions about music as a sounding phenomenon. The focus is mainly on sound rather than opus, on phenomena rather than concepts. Central to Sonology is the understanding of sounds as phenomena and music as organized audible structures, which can later be described with terminology and symbols.
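
Since Example 1 is only listed in the appendix and not reproduced here, the following is a purely hypothetical sketch of the kind of MiniTidal entries one part's two boxes could hold along the time axis; the sample names are taken from the Dirt-Samples banks mentioned in connection with Example 2, and all values are my own assumptions. The label before each colon names the box and is not part of the code:

box 1, at the start: s "coins*4" # gain 0.8
box 1, a later change: s "coins*8 can" # speed 1.5
box 2, throughout: slow 4 $ s "seawolf" # room 0.7
both boxes, at the end of the part: silence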

Session 2 – Fabrizio Rossi (Example 2)

The composition is divided along a time axis running from left to right and into an arrangement of the distribution of parts. The individual part is divided into two levels, implying that there are two boxes to be used by the single coder. The content of the respective parts differs from Example 1 and is less informative about the execution of details, i.e. numerals and symbols. The types of sound, drawn from the Dirt-Samples (sample banks) in The Hacked TidalCycles Documentation, can be read in the parts, e.g. seawolf, coins, can, sax. The information also shows whether the sound should be in the foreground, in the background or a rhythmic pattern.

Fabrizio Rossi writes in his log book called Operative observations about composing through coding – for a controlled-alea co-improvisation with Estuary:

4) The functions could be defined in this way and with these Minitydals (These are hypothesis too):
a) Foreground: a predominant “audio element” with a significative sound or a sort of rhythmical character (sitar/industrial/hoover/koy/rave/ravemono/stab/subroc3d/toys)
b) Background: a not predominant “audio element” with a long-time sound, and with not too marked rhythmic character (seawolf/cosmicg/fire/pebbles) or without it (sax/ade/pad/padlong/prog/tacscan)
c) Rhythmic pattern: “audio element” made by a short time sound(s) with a clear and predominant rhythmic character; it could happen:
c1) in high frequencies (coins/bottle/can/psr)
c2) in low frequencies (bassfoo/909/arp/pluck)

Rossi, F. (2020).
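
As a sketch of how these functions might translate into actual code, the following MiniTidal lines use sample banks from Rossi's own lists; the gain values and filter settings are my assumptions, meant only to suggest how foreground, background and rhythmic pattern could be weighted against one another. The label before each colon is a description, not part of the code:

foreground: s "sitar" # gain 1
background: slow 8 $ s "seawolf" # gain 0.6
rhythmic pattern, high frequencies: s "coins*8" # gain 0.8 # hpf 2000
rhythmic pattern, low frequencies: s "bassfoo*4" # gain 0.8 # lpf 400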

Session 3 – Barry Wan (Example 3)

Barry Wan composed a graphic score with brush strokes of red, yellow and blue. No other information was given, except that oral instructions were given to the participants just before the coding session: “player one, follow the red”, “player two, follow the blue”, “player three, follow the yellow dotted line”. The time setup was ten minutes. The composition is open and can be interpreted in a variety of ways, and it differs from the others because it does not describe sound or the course of events, other than that the composer orally distributed the parts to be followed in the form of color.

Session 4 – New Media Winter Semester Performance 2021 (Example 4)

This performance was given by a group of Live coders and students from the Faculty of Art and Design at J. E. Purkyně University in Ústí nad Labem, Czech Republic, under the guidance of their tutor, PhD student Barry Wan. The group gave an online event at 19.00 CET on 5 February 2021. The text and the numbers must be understood and performed by the Live coder, who in the given structure is given opportunities to improvise a course and a choice of sound type within certain given frames of content and form. The numbers in the left margin show minutes and correspond to the recorded material. It is noteworthy that the instructions, or score, unlike in the previous examples, appear within the same web application, at the lower right.

Imaginary Regions

The live coder communicates a series of numerals and symbols, indicating specific and/or aleatoric material to be performed, into the group of live coders. The live coder develops the responses to the numerals and symbols, molding and shaping them into the composition, then creates new numerals and symbols for another series of sounds, a phrase, and continues this process of composing the piece. The live coder composes in real time, utilizing the numerals and symbols to create the composition in any way they desire. The live coder sometimes knows what he/she will receive from the performers and sometimes does not – the elements of specificity and chance. The live coder composes with what happens in the moment, whether expected or not. The ability to compose with what happens in the moment, in real time, is what is required in order to attain a high level of fluency with the coding language. Three of the compositions (Examples 1, 2 and 4) use “the arrow of time” as a compositional strategy and to articulate the musical form. The “arrow of time” is a concept drawn from fundamental physics, first formulated in 1927 by the British astronomer Arthur Eddington.

There are two regions the coder/performer utilizes when signing numerals and symbols:

(1) Region one: a place where the coder/performer indicates silence. It is where the coder/performer prepares the start, the phrase for initiation or ending. The same space is also where the written numerals and symbols for sounds and phrases are initiated – the place of action.

(2) Region two, or the Chat function: a field in front of the coder/performer which allows the performers to communicate during the session. Short messages may include information about the changing compositional process and the typology of sound.

Conclusion and Future work 

The overview I have presented is, to refer to the preceding headline, dizzyingly complex. In an article such as this it is not possible to do more than focus on one area and then, while finding it rather hard to take in the whole situation, hope that it is possible to sketch some contours. One conclusion is that when live coding, the coder/performer/composer can ultimately only deal with the whole; in the experience of a session one would probably not focus on every single element in the music at any given time, e.g. timbral processes or rhythmic articulation and manifestation. The focus will most likely shift over the course of the session. When studying live coding there are a lot of isolated elements, such as numerals and symbols. You can of course study them and observe the phenomenon of sound and particular elements in detail. But as soon as you want to make a valid statement about the nature of such an element in the context of music, you have to place it back within a whole – with intuition, listening for the right moment, and playing and communicating with other coders or with yourself in solo mode. You have to link the element back to the construction of music and look and listen to how it combines with all the other parts of a musical form. If it is desirable to maintain the traditional requirement that notation should be easy to write down, easy to read and reproducible for others, one can probably fear that “musical graphics” can fulfill musical functions only as long as the contact between the coder and the performer is kept active and the two together can establish certain conventions regarding reading. But another insight can be that a graphic score, as in Example 3, can be an opening to an improvisation based on an impression.

Appendix

Example 1: Anders Flodin, sketches for Study.

Example 2: Fabrizio Rossi, sketches for Study.

Example 3: Barry Wan, sketches for Study.

Example 4: Concerted composition.

References

Unpublished

Flodin, A. (2020). The Dictionary of Lost Symbols and Numbers, Log book, Autumn, 2020.

Rossi, F. (2020). Operative observations about composing through coding – for a controlled-alea co-improvisation with Estuary. Log book, November, 2020.   

Stewart, D. A. (2019). The Hacked TidalCycles Documentation.

Literature

Varga, Bálint András (1996). Conversations with Iannis Xenakis. London: Faber and Faber Limited. p. 205.

Flodin, A. (2015). Suona, testa, allucinazione, virus: I, II, III. In Piantologi. (ed.) Berggården, S. p. 25-28. Örebro universitet: Föreningen Musikspektra T.

Hambræus, B. (1970). Om notskrifter, Stockholm: Nordiska Musikförlaget.

Karkoschka, E. (1966). Das Schriftbild der Neuen Musik, Celle: Hermann Moeck Verlag. pp. 167-173.

Lerdahl F. (1992). Cognitive Constraints on Compositional Systems. In Contemporary Music Review, 1992, Vol 6, Part 2, (ed.) Moraves P., pp. 97-121. UK: Harwood Academic Publishers GmbH.   

Lindqvist C. (2006). Qin, Stockholm: Albert Bonniers Förlag AB. pp. 240-252.

McLean, A., Fanfani, G., Harlizius-Klück, E. (2018). Cyclic Patterns of Movement Across Weaving, Epiplokē and Live Coding (Volume 10, Number 1). Dancecult: Journal of Electronic Music Culture.  https://dj.dancecult.net/index.php/dancecult/article/view/1036/941 [20210216]

de la Motte-Haber, H., Rilling, L., Schröder, J. H. (Hg.) (2011). Dokumente zur  Musik des 20. Jahrhunderts (Band 14, Teil 1), Regensburg: Laaber-Verlag. p. 275.

Smalley, D. (1997). Spectromorphology: explaining sound-shapes. In Organised sound, Volume 2, Issue 2, pp. 107-126. Cambridge University Press. 

Stockhausen, K. (1996). Sieben Punkte zum Kulturpreis Köln Dankeswort von Stockhausen anläßlich Verleihung des Kulturpreis Köln im Käthe Kollwitz-Museum am 4. November 1996. In Crosscurrents and Counterpoints. (eds.) Broman, P. F., Engebretsen N.A., Alphonce B., p. 224. Skrifter från avdelningen för musikvetenskap, nr. 51. Göteborg: Göteborgs universitet. 

Terhardt, E. (1982). Impact of computers on music – an outline. In Music, Mind, and Brain –  The Neuropsychology of Music. (ed.) Manfred Clynes, pp. 353-369. New York: Plenum Press.

Thoresen, L. (2012). Exosemantic Analysis of Music-As-Heard. Paper presented at the Proceedings of the Electroacoustic Music Studies Conference, Meaning and Meaningfulness in Electroacoustic Music, EMS, pp. 1-9. Stockholm, June 2012. http://www.ems-network.org/IMG/pdf_EMS12_thoresen.pdf [20210217]

Thoresen, L. (2007). Form-building transformations – an approach to the aural analysis of  emergent musical forms. The Journal of Music and Meaning. JMM 4, 2007, section 3. http://www.musicandmeaning.net/issues/showArticle.php?artID=4.3 [20210217]

Winckel, F. (1955). Klangstruktur der Musik – Neue Erkenntnisse musik-elektronischer Forschung, Verlag für Radio-Foto-Kinotechnik  gmbh, Berlin-Borsigwalde. p. 129. 

Xambó, A., Freeman, J., Magerko, B., Shah, P., (2016). Challenges and New Directions for Collaborative Live Coding in the Classroom. In Proceedings of the International  Conference on Live Interfaces (ICLI 2016). pp. 65-73. Brighton, UK.  http://annaxambo.me/pub/Xambo_et_al_2016_Collaborative_live_coding.pdf  [20210411]

Electronic links

Estuary: https://estuary.mcmaster.ca/ [20210216]

McLean A., Introduction to Live Coding and Visuals https://www.youtube.com/watch?v=-QY2x6aZzqc [20210221]

New Media Winter Semester Performance 2021 https://www.youtube.com/watch?v=TQwvVk69sSs&t=188s [20210311]