News, reviews & reflection on the Darmstadt Summer Course 2023
written by students in the “Words on Music” course

Improvisation, artificial intelligence and imagination: can they go on meeting like this?

If you thought musical instruments were only used for making music, you might be out of tune. In Language Music by Anthony Braxton, the trumpet is used for lip massage, singers become howling wolves and screaming gulls, and we hear lids smashing in a ship’s galley. If you thought a double bass only plays obligatory bass lines, know now that the instrument lends itself perfectly to percussion. Why not? And a score? Well, the musicians play without one; only a handful of stands hold sheets with a few notes.

In sum, they are improvising. Improvisation is probably the most personal and direct form of interactive musical communication, all in real time. The question that arises is whether Artificial Intelligence (A.I.), for example, can be used to stretch the technical and creative possibilities of improvising. Can A.I. and machine learning create other, better improvisations? What if we take a closer look at the behaviour of the human body: could that help to improve our listening experience? And what are we actually listening to, anyway? Is it still real, or do we have to get used to hyperreality?

Let’s first take a look at improvisation as we know it, which is to say, human-made.

A performance by the Prague Music Performance Orchestra, as a pre-opening of the Darmstadt Summer Course for New Music 2023, showed how improvised music lets musicians do what they should do: make music in the here and now, together, with childlike open-mindedness, just going for it. As composer, Braxton has chosen a lesser-known role in the creative process. He has restricted himself to developing a limited number of sound registers, which conductor Roland Dahinden can use more or less unhindered. The Lego bricks Braxton provides, a system of sorts, are shuffled back and forth by Dahinden at random, and certain gestures and codes have been agreed with the orchestra. As a result, the conductor communicates in a funny-looking sign language: he raises fingers, makes tickling movements, cuts through the air with his arms, or rustles gropingly with his fingers.

This form of close co-operation between composer and conductor is known as a co-composition process. It drastically increases the space musicians have to showcase their talent and virtuosity. The collaboration between composer and conductor goes beyond an exchange of interpretations followed by compromises: the division of labour leaves every part of the creative process intact. Each performance of Language Music, which requires six to eight hours of rehearsal time, is re-invented, fresh and sparkling. Its audience in Darmstadt, curious by definition and unafraid of being pulled out of its comfort zone, received Language Music as refreshing in its splashing showers of spontaneity. It showed the power and uniqueness of improvisation when played by top-class musicians.

Artificial Intelligence

A.I. is not new to musical improvisation. A starting point is the composition Voyager (1985) by the US trombonist-composer George Lewis. Voyager is a computer-driven, interactive “virtual improvising pianist”. It listens, for example, to a pianist improvising in real time; the program then lets a computer-steered piano react both to the live player and to the program’s own internal processes. Notions about the nature and function of music are embedded in the structure of software-based music systems, so interactions with these systems tend to reveal characteristics of the community of thought and culture that produced them. Voyager, for example, is directly shaped by the Afro-American traditions that emerge in the dataset that was used.
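Lewis’s actual code is not reproduced here, but the architecture the paragraph describes, a program that answers both to what it hears and to processes of its own, can be sketched. The following toy Python agent is purely illustrative (every name and number in it is invented, not drawn from Voyager): it echoes recent input when its internal “mood” lets it, and otherwise follows that internal process.

```python
import random

class ImprovisingAgent:
    """Toy interactive improviser, loosely in the spirit of systems
    like Voyager: it responds to live input AND to its own internal
    processes. A sketch for illustration, not George Lewis's code."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.heard = []    # pitches received from the live player
        self.mood = 0.5    # internal process: drifts on its own

    def listen(self, pitch):
        """Register a MIDI pitch played by the live musician."""
        self.heard.append(pitch)

    def step(self):
        """Advance the internal process and emit a response pitch."""
        self.mood = min(1.0, max(0.0, self.mood + self.rng.uniform(-0.1, 0.1)))
        if self.heard and self.rng.random() > self.mood:
            # Imitate: echo a recent input pitch, transposed a little.
            base = self.rng.choice(self.heard[-8:])
            return base + self.rng.choice([-12, -7, 0, 7, 12])
        # Otherwise ignore the player and follow the internal drift.
        return 48 + int(self.mood * 36)

agent = ImprovisingAgent(seed=1)
for pitch in [60, 62, 64, 65, 67]:   # the live player's phrase
    agent.listen(pitch)
    print(agent.step())              # the machine's real-time reply
```

Even this crude loop shows the point made above: what comes back depends as much on the system’s own inner state as on the musician it is listening to.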

Thus, Voyager is considered a kind of computer music-making that embodies Afro-American aesthetics and musical practices, such as colour, texture and rhythm. This suggests that a universally applicable A.I. system does not yet exist, because the datasets underlying the software are aesthetically and culturally coloured. Until now, all systems have differed from each other. Some are very personal and customised, but far from independent: Voyager runs on a dataset limited to George Lewis’s personal choices, and on the personal consent of the piano player. That is relatively exceptional. Other systems, such as MusicLM, are trained on datasets drawn from thousands of hours of audio on the internet and are shaped by traditional Western ways of organising compositions. Nor do they worry about the consent of the musicians and composers involved.

Another concern is that most systems do not look much further than their nose is long, which is to say, up to the next note and no further. A.I. still has a very limited understanding of concepts, movements and musical phrases. As it stands, for instance, it is not possible to teach a computer to knit a decent ending to a piece of music, for the simple reason that the concept of an ending is undefined. Many human-crafted pieces end in a cadence, but others simply stop. The possibilities are endless, but infinity won’t get a computer very far. The only way to stop a computer-controlled composition is to pull the plug.
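To make that myopia concrete: a first-order Markov model is one of the simplest note-by-note generators, and it predicts each note from the previous one alone. In this sketch (the melody and the 32-note cutoff are arbitrary values chosen for illustration), the model never decides to end; it plays until we pull the plug.

```python
import random
from collections import defaultdict

# A first-order Markov model: the crudest form of looking only one
# note ahead. Trained on a toy melody (MIDI pitches), every prediction
# depends solely on the previous note. No phrases, no form, and no
# concept of an ending.
melody = [60, 62, 64, 62, 60, 60, 64, 65, 67, 65, 64, 62, 60]

transitions = defaultdict(list)
for prev, nxt in zip(melody, melody[1:]):
    transitions[prev].append(nxt)

rng = random.Random(0)
note = 60
for step in range(32):                     # the only "ending" is this
    note = rng.choice(transitions[note])   # hard limit: we pull the plug
    print(step, note)                      # after 32 notes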

Improving the listening experience

Using computer-based systems to learn more about the behaviour of the human body has been in development for some time. A relatively young example in the field of human-machine interaction is MetaPhase, a research project exploring new technological possibilities for our listening experience. The project, run by the Italian pianist Giusy Caruso of the University of Antwerp, centres on interaction between human and virtual musicians, who meet in a new world encompassing both real and virtual space.

The research set out to capture the creative potential of data processing, human-machine interaction and biotechnological applications, with the aim of enhancing the performers’ expressiveness and the audience’s listening experience. In the toolbox is a portable system, developed by the Italian IT company LWT3, that collects biosignals. The system maps gestures and provides quantitative data on the displacement, acceleration and speed of movements. In this way, musicians obtain valuable information about their physical approach to playing an instrument, and know better where opportunities for improvement lie.
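As a small illustration of the kind of quantitative data such a system yields: once marker positions are sampled over time, speed and acceleration follow from simple finite differences. The sampling rate and trajectory below are invented values for the sketch, not LWT3’s actual pipeline.

```python
import numpy as np

# Toy motion-capture data: 3-D marker positions sampled over time.
fs = 100.0                              # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)             # two seconds of movement
pos = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)  # x, y, z in metres

# Displacement -> velocity -> acceleration via finite differences.
vel = np.gradient(pos, 1 / fs, axis=0)  # velocity, m/s
speed = np.linalg.norm(vel, axis=1)     # scalar speed of the gesture
acc = np.gradient(vel, 1 / fs, axis=0)  # acceleration, m/s^2

print(f"peak speed: {speed.max():.2f} m/s")
print(f"peak acceleration: {np.linalg.norm(acc, axis=1).max():.2f} m/s^2")
```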

Avatar in live performance

Based on musicians’ movements, the system also creates a virtual agent (usually an avatar) that moves on stage. The musicians are fitted with reflective markers, tracked by infrared cameras, and with a biosensor; together these capture the gestures and the associated muscle effort in real time. See how this works: https://www.youtube.com/watch?v=SoaKJUPq36Q.

From this perspective of visualising bodily measurements, Giusy Caruso and LWT3 developed the idea of creating digitised performances as well. For example, she plays Steve Reich’s Piano Phase for two pianos by interacting with an avatar pianist who plays the first part of the piece. This second virtual human is animated by the expressive movements of the real pianist, recorded earlier along with an audio track on a Disklavier piano.

The project highlights that we are in an era of increasing focus on hybrid approaches (physical + digital) in hybrid spaces (real + virtual). Against their better judgment, people have become accustomed to living in virtual environments and communicating via smartphones or laptops. With the development of video projections, people can now also create avatar interactions in the ‘metaverse’, a virtual world with avatars and tokens. The metaverse is only at the beginning of its development, still full of mysteries and uncertainties, but it is advancing step by step.

What is real and what is fake: Hide to Show

With the “scenic composition” Hide to Show, the German composer Michael Beil and the Belgian Nadar Ensemble explore the possibilities and limitations of digital communication and hyperreality in music. Hyperreality is the state in which reality and fiction are mixed to the point where it is no longer clear where one ends and the other begins: what is real, and what is fake?

In Hide to Show, Beil both shows and questions the dazzling world of the internet: memes, covers, parodies and deepfakes. Listeners and spectators are one and the same.

The musicians hide behind costumes, behind curtains that rise and fall, and behind recordings of themselves, so that the real people remain invisible, just as on social media. Hidden behind a profile picture or avatar, everyone can appear as an individual in the virtual world without ever really showing themselves. So too in Hide to Show: when the musicians are alone in their dressing rooms, the audience watches over their perfectly synchronised networking.

https://www.youtube.com/watch?v=scSVqg_PviI

The audience can see how each musician operates a blind, alternately turning the slats to give a peek into their room and opening them completely. The staging bears traces of the Corona pandemic, in which we grappled with the paradox of enforced isolation and the need for teamwork via Zoom. In their article “Hide to show: ‘memefying’ live music”, researchers Pascal Gielen and Thomas Moore (University of Antwerp) reflect on questions such as: what happens to artistic practice and teamwork when performers cannot see each other? What does communication mean when musicians can only hear each other through headphones and are physically isolated? The ‘sound’ and especially the ‘feel’ of live music become increasingly ambiguous and unpredictable when reality is simulated, Gielen and Moore conclude.

However, Beil does not only explore the boundaries of digital communication and hyperreality. The composition responds especially to internet culture. Hide to Show is structured as a series of short, repetitive fragments, i.e. memes, that multiply and mutate online. One example is his reference to the Japanese internet idol Hatsune Miku, who became immensely popular with her version of the folk tune Ievan Polkka, which originated in Russia and travelled through Scandinavia and Europe to Japan, where sound engineers in Sapporo put a synthetic, almost perfect voice underneath it. The end result is a song developed as an endless series of copies of copies, without a trace of the original. Above all, Beil plays with immediacy, connected to the internet culture of the here and now, reaching wide rather than deep.

Beil’s repeated use of live video lets the audience know that what they are seeing really is live: real, not pre-recorded. In addition, Hide to Show manages to create tension, and therefore life, in a new way. There are still real, live bodies on stage. It is no small feat to master all six parts of six scenes each, including dances in every part: the premiere required almost three working weeks of rehearsal time. In the process, the musicians were pulled harshly out of their comfort zone. Many musicians find it hard to detach themselves physically from their instrument, especially when dance steps are involved. Moreover, the piece is played from memory, so there is no score to serve as a safe anchor. Although Beil applies hyperreal principles of technological mediation, it remains a live concert, and what is at stake is the desperation and uncertainty of an audience that can automatically throw back its own unique humanity: imagination.

Das Feingefühl 

Klangforum Wien’s interpretation of Situations by Georges Aperghis shows that the listener’s imagination need not be stimulated by comet-like tempi or popping light shows; it can also be triggered by politeness and subtlety. Brass players put on extra-heavy mutes to offer hospitality to the woodwinds; the double bass gets a starring role; one of the two pianists demands velvet trills from the violinists, who play pizzicati without tone; a pianist strokes the strings of his instrument almost erotically with his fingernails and then says out loud, “Das Verschwiegene ist das besser Bekannte” (“what is kept silent is what is better known”); two bass clarinets play so softly that they almost become drones. With that comes a pleasant kind of modesty: the instrumentation is kept as economical as possible so that the nuances can be fully appreciated. “Wie haben Sie das Feingefühl Ihrer Finger bekommen?” (“how did your fingers acquire their sensitivity?”) the pianist asks of nobody in particular. And then it stops, just as a computer would.

The work of George Lewis, Giusy Caruso and Michael Beil focuses on A.I. technology and machine learning in the framework of live performance and spontaneity. At the same time, the financial requirements of technologically ambitious projects keep them dependent on big-tech organisations and companies that happen to embrace a specific, Western way of thinking about music. Moreover, intellectual and artistic property rights are in most countries limited to creations produced by humans; anything on the internet is up for grabs. So as long as we do not know exactly what we politically and culturally want from A.I., we will keep on wandering about.

But spectators and listeners are human, are live: they laugh, cough, applaud, boo and smile. There is still enough room between the zeros and ones of the digital spectrum to fertilise our imagination. As long as artificial intelligence and artisanal human Feingefühl give each other room to breathe and make an effort to respect each other’s cultural and aesthetic values, we might well love to go on meeting like this.

Language Music (Anthony Braxton) – heard on 5 August 2023 in Darmstadt

Hide to Show (Michael Beil) – next show on 20 October 2023 at the Transit Festival in Leuven

Situations (Georges Aperghis) – heard on 5 August 2023 in Darmstadt
