
Uncanny

The audio-visual performance titled Uncanny combines AI-driven sound synthesis and image generation methods with live coding techniques.

Published on Aug 29, 2023

The performance titled Uncanny combines AI-driven sound synthesis methods with live coding techniques. Several artificial intelligence models, including Melody RNN, DDSP, GANSynth, and GANSpaceSynth, were implemented within the scope of the ‘Deep Learning with Audio’ course taught by Koray Tahiroğlu at Aalto University, where Begüm Çelik researched possible applications of these algorithms in combination with live coding tools. Among these models, GANSynth and GANSpaceSynth were chosen for integration into the proposed live coding performance as well as the accompanying sample demo. This choice was motivated not only by the characteristics of the generated audio output but also by the underlying mechanisms of the chosen AI models.

As the name suggests, GANSynth is an AI-driven high-fidelity audio synthesis algorithm built upon Generative Adversarial Networks (GANs). Its predecessors used autoregressive models such as WaveNet, yet GANSynth achieved better audio quality in a much shorter generation time. GANSynth models do not directly provide an encoded latent-space representation of the sound samples they are fed; instead, they offer a latent space of timbres that can be explored largely at random. The process of forming this latent space can be thought of as a black box: the networks learn from the sound dataset how to arrange a multi-dimensional space that is not directly interpretable, yet remains open for exploration.

Since retrieving a sound from this latent space can be envisioned as wandering around, or beaming from one point to another, in a fictive multi-dimensional space of sounds, the artists found it inspiring within the concept of soundscapes. They believe this aspect of the algorithms places them on common conceptual ground with soundscapes, which likewise reference spatiality. Thinking of the latent space as a multi-dimensional universe of sounds waiting to be heard encourages the artists to explore what might be hidden around the corner, navigating more or less at random, in accordance with the aleatoric orientation of live coding. The constant flux and adaptability of live coding, which help shape the direction of a performance, also resonate with improvised music's emphasis on spontaneity and unexpected creativity. This exploration of the latent space not only results in diverse and unique musical experiences but also underlines the flexibility and dynamic nature of live coding. It reaffirms that live coding is not confined to any particular style or genre. Instead, it can be an instrumental force in creating a broad spectrum of music, from structured compositions to fluid improvisations.
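
To make the idea of “beaming from one point to another” concrete, the minimal sketch below models latent-space navigation as linear interpolation between two latent vectors. It is an illustration only, not the actual GANSynth implementation (which is a Python/TensorFlow tool); the vector contents and the `wander` helper are assumptions for the sake of the example.

```haskell
-- Illustrative sketch only, not the actual GANSynth API: "wandering" in a
-- latent space, modeled as linear interpolation between two latent vectors.
type Latent = [Double]

-- Blend two latent points; t = 0 yields a, t = 1 yields b.
lerp :: Latent -> Latent -> Double -> Latent
lerp a b t = zipWith (\x y -> (1 - t) * x + t * y) a b

-- n evenly spaced points from a to b: a gradual timbral transition,
-- once each point is decoded to audio by the trained generator.
wander :: Latent -> Latent -> Int -> [Latent]
wander a b n =
  [ lerp a b (fromIntegral i / fromIntegral (n - 1)) | i <- [0 .. n - 1] ]

main :: IO ()
main = mapM_ print (wander [0.0, 1.0] [1.0, 0.0] 5)
```

Each intermediate point, decoded by the trained generator, would yield a timbre partway between the two endpoints, which is what makes gradual transitions through the space of sounds possible.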

Accordingly, the GANSynth and GANSpaceSynth models were used to generate the sound samples that feed the soundscape created with live coding techniques. Admittedly, AI-driven sound synthesis and live coding are not fully interwoven in such a workflow, since these AI models still require a very powerful hardware setup to run in real time. As such a setup was not accessible at the time of production, the two are instead combined consecutively: first the generative AI, thereafter the real-time performance.

Uncanny, Soundscape demo, Begüm Çelik, 2023

The artists experimented with different datasets in the training stage of the AI models. Starting with a dataset consisting mostly of string instrument samples, they later added miscellaneous sounds to achieve greater variety. The final dataset included instrument samples such as saxophone, cello, piano, and guitar, as well as breakbeats, urban field recordings, and the sounds of children's toys such as klaxons and rubber ducks. The resulting dataset was half an hour long in total. Experimenting with training runs of 4 million, 8 million, and 11 million images, and generating sound samples at many stages in between, led the artists to the conclusion that most of the sound samples carried a similar noisy character, except for a few that were quite distinct from the others. The majority of these sounds felt “unidentifiable but somehow familiar”. The whispers, shouts, and rustles were indecipherable and had no recognizable characteristics, yet they seemed to speak a common language. This dark-themed atmosphere of the unknown directed the soundscape toward an uncanny space that can be defined as a non-place in line with Marc Augé's description. Following the anthropological definition of non-places, the pursued atmosphere of the soundscape reflects a space that has no identity or history and cannot be regarded as a place. Such a non-place, where no organic social connection can exist, matches this feeling of ‘indecipherable but familiar’, where no one can feel a sense of belonging yet everything remains acquainted. In line with this concept, the feeling of wandering around an uncanny valley was experienced at each stage of the project: (i) generating sound samples using AI models, (ii) sound manipulation, and (iii) composition and performance using live coding techniques. In doing so, this fictive setting of spatial sound also refers back to the underlying structure of the GANSynth models.

After generating sound samples using the above-mentioned AI methods, Tidal Cycles was chosen for both the recorded composition and the live performance. Tidal Cycles is a live coding environment for algorithmic music improvisation and composition, embedded in the Haskell language and using SuperCollider as its back end for sound synthesis. Besides allowing generative, polyphonic, and polyrhythmic sound patterns, the cyclical structure of the environment presents a unique approach to composing and performing. Unlike conventional music production software, time is not arranged by sequencing samples consecutively on a linear timeline; instead, sounds are placed by subdividing a circular flow. Tidal thus provides a notion of time contrary to the linearity of both traditional European notation and modern sequencers.
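
To give a flavor of this cyclical notation, here is a minimal Tidal Cycles sketch in the spirit of the piece, not the actual performance code; the sample folder name “uncanny” and the sample indices are hypothetical stand-ins for the AI-generated samples.

```haskell
-- One cycle divided evenly among four events; "uncanny" is a hypothetical
-- sample folder, and n picks individual AI-generated samples from it.
d1 $ n "0 1 2 3" # sound "uncanny"

-- A second layer stretched over two cycles and reversed every other cycle,
-- with rests (~) leaving space and a touch of reverb from SuperDirt.
d2 $ slow 2 $ every 2 rev $ n "3 ~ 1 ~" # sound "uncanny" # room 0.4
```

Because both patterns are defined against the same cycle, editing either line during the performance re-subdivides the circular flow immediately rather than appending to a linear timeline.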

Randomized Landscapes, Generative Image, Tuğrul Şalcı, 2020

The live performance is planned to take place within the scope of an Algorave. The submitted sound composition, created using Tidal Cycles, can be regarded as a short reflection of the proposed performance. The same methodology will be pursued during the performance: the soundscape will be live-coded using AI-generated samples. The music will be accompanied by audio-reactive live visuals by generative landscape artist Tuğrul Şalcı, taking inspiration from his earlier work Randomized Landscapes, a series of collages generated with noise algorithms in TouchDesigner. Its monochrome abstract landscapes map various spatial elements; by combining pieces of different temporal and spatial scales, the intent is to reach a fragmented perception. The series is an attempt to create imagery that is undefinable and unpredictable, free from the memories and information attached to specific places and objects. For the Uncanny performance, the work will be transformed into a continuous flux that reacts to the sound performance; thus both the sound and the visuals will be generated in real time via live coding techniques to present an audio-visual performance. Unlike in the Randomized Landscapes series, the landscape photographs will not be pre-captured this time but will instead be created using text-to-image generative AI methods. In line with the conception of a non-existent uncanny land, the landscapes will likewise not reflect any real place. Departing from the image of the machine envisioning a space full of data in which we wander, the artists carry the hope of finding hidden lands in this multi-dimensional space of images. These unknown lands correspond to the non-places and uncanny valleys conveyed throughout the work. The images will be generated live with existing AI techniques and manipulated using live coding, in a similar fashion to Randomized Landscapes.

Both artists have long been part of the Algorave Istanbul community. Tuğrul Şalcı, aka Uzak, has been incorporating live coding into audiovisual performances in Istanbul's underground scene since 2018. As the coordinator of Algorave Istanbul, Uzak has organized various events and workshops at prominent cultural institutions in the city, and has additionally curated and compiled five EPs, contributing to them with two different projects, Uzak and Nightdive. More recently, he has been focusing on combining live-coded music with acoustic instruments in genres such as industrial rock and trip-hop. Begüm Çelik accompanied Uzak's industrial punk duo Ruunt with her audio-reactive cloud shaders at the most recent Algorave Istanbul event.


Artist Bios:

Multidisciplinary artist Begüm ÇELİK is pursuing her master's degree in the Visual Arts & Visual Communication Design program at Sabancı University under the supervision of Selçuk Artut, having completed her B.Sc. in Computer Science & Engineering there in 2021. Her master's thesis, titled “Conserving Multimedia Art from Artistic, Curatorial, and Historicist Perspectives: Case Study on Teoman Madra Archive”, focuses on both media art history in Turkey and archival strategies. Her artistic production is fed by her interdisciplinary journey, combining technology and performance in line with her engagement with various theater practices. Çelik's academic research focuses on the preservation of technological artworks, in continuation of her projects “Photometric Approach to Surface Reconstruction of Oil Paintings” and “Testing Method for Software-based Artworks”, both completed in collaboration with Sakıp Sabancı Museum, Istanbul. Recently, she completed the conservation of Stephan von Huene's artwork What’s Wrong with Art? at ZKM Karlsruhe under the supervision of Daniel Heiss and Morgane Stricot.

Tuğrul Şalcı is an independent artist and designer working in the fields of generative art and (new) media theory. Inspired by both classical and contemporary art, he combines them with a critical approach to current sociocultural problems caused by digital media. In addition to his generative audio-visual installations, he also performs live-coded electronic music. He received his master's degree from Sabancı University's Visual Arts and Visual Communication Design department and continues his academic studies as a Research Assistant at Özyeğin University.


Artist Portfolios:


Tech Rider:

Uncanny Audio-Visual Performance Tech Rider Algorave
