Voicing code
(Blog – Part 1)

This is a project by two musicians, Eimear O’Donovan as vocalist and Alex McLean as live coder. Here are some thoughts we wrote while starting to collaborate during our residency.

ALEX

Example of Alex live coding music, here in an audio/visual collaboration with live coding legend hellocatfood

Bringing live coding + singing together opens a can of worms. We all know what a voice is but what is code? It is written in a kind of language, but has no spoken form, and so no prosody. Live coders will say they express themselves with their code but this is nothing like the expression of the voice. So while we think of code as being technologically advanced, it is nowhere near as advanced as the voice, which has evolved over millions of years, since way back before there was even a difference between language and music.

Although code is generally thought of as having been invented in the last 50 years or so, it has a heritage that extends back millennia. This history is partly about the development of discrete mathematics across the world, but discrete mathematics is the science of pattern, so there is a perhaps more culturally grounded history in textiles. Low-level machine code is mostly about textile-like pattern transformation – shifting, reversing, combining and so on, the same operations you see on a handloom (let’s leave Babbage and Lovelace aside – all weaving is digital and computational).

So bringing the voice and live coding together brings together the two cultural threads of song and pattern. These two threads are rarely seen apart: textile workers feeling their way through a complex pattern will often sing it, to help their memories and bring life to repetitive processes which might otherwise drive them mad. So worksong could be an inspiration for this project, as it has been for earlier ones that combine patterns with music (e.g. David Littler’s Sampler Cultureclash).

So there’s some context and inspirations for this collaboration, but also some difficulties. Singing is a very direct form of expression: you open your mouth and sing. Live coders sit a little more distant from their sound – they can add controllers to their interface, but fundamentally they work with code, which is not about making a gesture to produce a sound, but about pressing a load of keys on a keyboard to build an algorithm that generates many sounds (e.g. a rhythm that might evolve over a minute or more). So a singer and a coder have different relationships with sound, but still we’re on the same timeline together, listening to each other on the same level.

EIMEAR

Example of Eimear’s previous vocal work – songwriting collaboration with Irish producer Bantum

One of the primary aims of this research has been to figure out how to collaborate using voice and code – the technical challenges of sampling and looping live, using code. From the beginning, an important goal for both of us has been that this would be an equal collaboration – not merely Alex having access to my vocalisations to manipulate, i.e. a set-up where the voice (Eimear) is the input and a process (Alex) does the manipulating. We wanted to figure out a way of working together, and a shared technical set-up, which allows both of us to have input and processing access.

We didn’t talk a lot about musical influences, or what we wanted it to ‘sound like.’ I think we were both coming at it with open minds, just wanting to make music intuitively together and see what comes up.

As the vocalist, I had reservations about tying the project down to any predetermined lyrics, unless the words were randomly chosen through some process. This is so far undefined for this project. One possibility is borrowing sounds or words from languages other than English. I am Irish, so drawing from the Irish language is one option I am exploring.

This digital residency has taken place entirely during COVID-19, so our plans to meet up and spend time working together have been curtailed by our own safety concerns as well as local restrictions.

The in-person sessions we have done so far were organised with safety in mind, with both of us wearing masks, windows and door open, and a fan going for increased ventilation. As an extra precaution, we worked back-to-back, particularly so that I wasn’t singing in Alex’s direction.

Something about this set-up, with both of us almost privately or separately working on our individual part, while not looking at each other, felt very poignant and relevant to what we were doing. I think as a singer, it made me a little less self-conscious, and more able to find a flow-state to my vocalisations.

Another ongoing piece of the research is developing, or narrowing down, the set of kit that we work with. I knew from the start that I didn’t want to just be on vocals, and so far I have experimented with a drum machine and synthesiser while Alex uses TidalCycles for coding. It has been invaluable to have this time to get familiar with hardware that is new to me. We both have access to a digital mixer via remote-control software, which allows us equal control, and we are talking about how to take this further.

ALEX AGAIN

Yes, I think a big thing for us is making it an actual collaboration. This is a challenge because I do everything via a computer, and while computers are great at making noise, they’re rubbish at listening to other people’s noise. So where a band of real humans might listen to each other to establish a tempo and stay in time with each other, a computer isn’t so good at doing that. We can work around all these things, but to do that we have to stay aware that we’re working against computers which, despite what the software adverts tell you, are way better at controlling than being controlled.

So I think one decision we both agree on is that our collaboration is heavily based on listening to each other. But we’re also here for the technical dimension of manipulating each other’s sound. On one side I’m sampling Eimear’s voice and reworking it, and we’re also working out ways in which Eimear can manipulate my code and sound through controllers.

On my side I’m interested in how live coding relates to ‘live sampling’ and how it can offer an alternative to ‘live looping’. I mean I do like live looping performances a lot, like the great work of e.g. Kawehi – an awesome approach to really live music. But I think there’s potential in live coding to go in a different direction, and manipulate time in patterned ways that include looping but also reflection, rotation, juxtaposition, interference and so on. The TidalCycles live coding environment I’ve made is all about manipulating time as this kind of ‘algorithmic pattern’, so I’m really curious about how this can be applied to the voice. One problem is that with Eimear’s voice, I can’t pattern what’s happening now against what will happen in the future (damn causality). Ah well.
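To give a rough flavour of what I mean – these throwaway lines use stock drum and synth samples rather than anything from our sessions, but they show the kinds of time manipulation I’m talking about:

-- reflection: every other cycle, play the pattern backwards
d1 $ every 2 rev $ sound "bd sn hh sn"

-- rotation: shift the same pattern a quarter of a cycle along
d2 $ (0.25 <~) $ sound "arpy arpy:2 arpy:4 arpy:7"

-- juxtaposition: the original in the left speaker, a reversed copy in the right
d3 $ jux rev $ sound "bd*2 [~ cp] hh*4"

-- interference: two copies at slightly different speeds, slowly drifting against each other
d4 $ overlay (slow 2 $ sound "tabla*8") (slow 2.01 $ sound "tabla*8")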

I guess what might happen is that Eimear ends up singing into a weird algorithmic echo unit that I’m making and modifying on-the-fly. That could be fun, but I’m also keen to explore what happens with mixing in some sound analysis, so I can e.g. reorganise ‘grains’ of the sound that Eimear’s made over the past couple of seconds in order of pitch or brightness, and mess with that. Based on previous experience, the approach that ends up working best may well be the simplest, but I think we just want to try everything and see how it goes.
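We haven’t built any of the analysis side yet, and the sample name "voice" below is just a placeholder for a buffer recorded from Eimear, but Tidal’s chop and striate functions already give a feel for this kind of grain-level reworking:

-- cut the recorded sound into 16 grains and play them back in a shuffled order each cycle
d1 $ shuffle 16 $ chop 16 $ sound "voice"

-- interleave grains from two copies playing at different speeds
d2 $ striate 32 $ sound "voice*2" # speed "<1 0.5>"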

Excerpt from exploratory first session in Sheffield

Voicing code
(Blog – Part 2)

See video info for click-through timestamps of various experiments

Technical / social reflections

Alex – Our collaboration is based on technology, which some people might be wary of. It’s like there’s a fear of projects which take technology as a starting point, rather than pure ideas. I’m a firm believer that pure ideas don’t exist though (maybe this comes from my gnostic atheism…), and that it’s really important to work with ideas as living material, whether you’re working with clay, or with drum machines, microphones or code. As well as acting like material, technology can also act as a kind of social space, or part of a social space…

Eimear – As this collaboration has developed, we’ve played around with ways to lay out the room to enhance and aid our devising process. As mentioned in the previous blog, we’ve been working back-to-back while wearing masks, meaning that a lot of the physical, facial and otherwise non-verbal cues that collaborating musicians would usually rely on are not available to us. Alex came up with the idea of using webcams so that we could signal to each other if we wanted to. We projected our “video call” onto the wall between us, which also meant Alex was able to project the code, as is typical for live coding performances.

As the vocalist in the project, I found this really helpful. Although I do not live code (yet) and can’t “read” the code exactly, it was really engaging to be able to see Alex’s coding process laid out and get some idea of what my vocalisations were prompting, as it happened.

Alex – Yes, it worked well! I could see you on my screen; normally when I’m live coding I get too ‘in the zone’ to look at collaborators enough, so having you on screen actually helped a lot.

I’d also like to talk about the live coding environment TidalCycles / SuperDirt. I originally made TidalCycles (or just Tidal for short) to make my own music, but shared it online as free/open source software. I first released it with a sampler called Dirt, based on the work of Adrian Ward; later another friend, Julian Rohrhuber, reimplemented it as a hybrid sampler/synthesiser called SuperDirt, using the amazing SuperCollider environment (which is also free/open source).

I was intending to do some work on SuperDirt as part of this residency, to develop a way of working with Eimear’s live sound. With perfect timing, a great guy called Thomas Grund in Leipzig released a really nice add-on for SuperDirt called tidal-looper, which gives Tidal a flexible way to work with live sampling. This is perfect for our needs, making it easy for me to create algorithmic patterns from a set of samples continually taken from Eimear’s voice.
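To sketch the workflow – the "looper" and "loop" names here are the add-on’s defaults as I remember them, so check the tidal-looper README for the exact details rather than taking this as gospel:

-- record a cycle of live microphone input into a new buffer
d1 $ sound "looper"

-- then pattern the captured buffers like any other set of samples
d2 $ jux rev $ chop 8 $ sound "loop" # n "<0 1 2>"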

This is a great example of how sharing free/open source software can be a really nice experience; you share this thing more or less unconditionally, and you get all this amazing stuff back. It is a real privilege – not everyone has the time to work on stuff for free, but because it’s free/open source they don’t have to. (Plus at this point I do get a lot of financial support from Tidal users, through pay-as-you-feel subscriptions to my online course – although I’ve just made the first four weeks of that fully open access – and donations.)

I will release a new version of TidalCycles soon, during this residency – just tidying up some stuff. One feature that will be useful for our project is the ability to pattern effects independently from triggering sounds. This has always been possible to some extent, but this new change will make it much easier to, for example, trigger a sound and pattern its distortion with some algorithm while the sound is playing. A lot of electronic and particularly algorithmic music is really based around note trigger messages, where you make all the decisions about what a sound is going to do at the point that you trigger it, and then let it play out without being able to change that decision. Either that or you control ‘global’ effects, which change how everything sounds, rather than controlling the effect on a single sound. It’s a bit surprising that this constraint is built into the MIDI standard which most music tech is based on! It creates a particular aesthetic that I like, but it’ll be interesting to explore a more fluid approach to manipulating effects in our collaboration.
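To illustrate the idea – the exact function names may still shift before release, and "voice" is again a placeholder sample name – a long sample is triggered once, while its filter cutoff follows its own, faster pattern for the whole time it plays:

-- trigger a four-cycle-long vocal sample once...
d1 $ slow 4 $ sound "voice"
  -- ...and keep repatterning its filter cutoff while it plays,
  -- rather than fixing it at trigger time
  # lpfbus 1 (segment 16 $ range 300 3000 sine)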

Eimear – Still to come from this collaboration is, as mentioned by Alex, a new version of TidalCycles software, and a new piece to be premiered in January. We are so grateful to Iklectik for giving us this opportunity, and look forward to sharing our work with you in the coming weeks.