
Organic Algorithmic Composition

AIMC2023 Performance Proposal

Published on Aug 29, 2023

Abstract

“Organic Algorithmic Composition” is not just a musical composition. Instead, it is a sonic exploration that raises the question of whether sounds can self-organize within a given system. 

A continuous chain of musical possibilities is generated with a simulation of the slime mold Physarum polycephalum. The generation and modulation stages are implemented with Physarum’s decision-making behavior in a multi-agent model. Both stages are operated by several agent-based algorithms whose correlation produces non-linear musical behavior in each phase, creating unique musical patterns.

A simulation of Physarum (a genus of slime molds) and the RAVE autoencoder (Caillon and Esling 2021) are played in parallel. They converge, diverge, neglect one another, and oscillate within a system. This local oscillation creates unique sound patterns: harmonized feedback, smearing, and displacements. Physarum simulation data is transformed into sound: it controls sound oscillation, modulation, and the nonlinear musical behavior of each phase, creating unique musical patterns that follow the principle of self-organization in music, intertwining organisms with modern computational functions in AI.

For the AI component, IRCAM's RAVE autoencoder is used for sound processing in live performance so that sound can self-organize like an organism. The interconnection between the Physarum simulation and RAVE, under the principles of nonlinear, bottom-up approaches to a compositional system, explores a new auditory domain. It unfolds possible connections between humans and non-humans, objects and organisms, and reality and non-reality. It intertwines machine algorithms with sound and uses the principles of an organism's behavior to expand the possibilities of self-organizing sound and the organicity of sound in space.

Visual made with Generative algorithms of Physarum polycephalum
Gyuchul Moon


Project Description

    »Organic Algorithmic Composition« aims to create a program and algorithm for sound synthesis and for generating musical forms. It implements a simulation of Physarum and the concept of neural networks. It is not just a musical composition; rather, it is a sonic exploration that asks whether sounds can self-organize or take form within a specific system.

    For several decades, musicians and composers have developed systematic approaches to composition. Xenakis and Koenig, in particular, focused on the compositional process through mathematical and systematic approaches, expanding the means of sound synthesis and of constructing form within a system. Holtzman defines the difference between standard and nonstandard synthesis: standard synthesis is characterized by an implementation process in which the sound is described by some acoustic model, and machine instructions are ordered so as to simulate the sound described. In the nonstandard approach, a set of instructions is related one to another in terms of a system that does not refer to some super-ordinated model, ... and the relationships formed are themselves the description of the sound (Holtzman 1978).

    However, most of the natural systems modeled by the traditional mathematical methods of physics are linear (Heylighen 2020). In a linear system, the output (effects) is proportional to the input (causes), but many natural phenomena and systems are intrinsically nonlinear, having no proportional relation between causes and effects (Sanfilippo and Valle 2013). Holtzman argues that the difference between standard and nonstandard synthesis lies in the role of sound within the system. Both are ruled by the main system's instructions, but in nonstandard synthesis each subset of the system connects within a self-contained framework that does not rely on any overarching model or prescribed behavior. From a systematic perspective, standard synthesis can therefore be defined as a rule-based, top-down approach.
    In contrast, nonstandard synthesis can be defined as a bottom-up approach akin to an organism's self-organizing process. This proposal focuses on developing a nonstandard synthesis based on systematic approaches derived from nonlinear behaviors in music. In the research process, I concentrated on the behavior of Physarum and on the sound-reconstruction features of an autoencoder to explore sound synthesis and different types of compositional tools simultaneously.
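    To make the distinction concrete, the following minimal Python sketch (added here for illustration, not part of the original system) contrasts a linear gain, where output scales proportionally with input, with the logistic map, a simple nonlinear feedback system whose trajectories diverge for nearly identical inputs.

```python
# Linear vs nonlinear response: a linear gain scales proportionally,
# while the logistic map responds disproportionately and, for r near 4,
# behaves chaotically (sensitive dependence on initial conditions).
def linear(x, gain=2.0):
    return gain * x                                # output proportional to input

def logistic_orbit(x0, r=3.9, steps=20):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))     # nonlinear feedback
    return xs

print(linear(0.1), linear(0.2))                    # 0.2, 0.4: doubling input doubles output
print(logistic_orbit(0.100)[-1], logistic_orbit(0.101)[-1])  # nearly equal inputs, diverging orbits
```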

    Development

    Physarum is a large amoeboid myxomycete organism. It adapts its body plan during its complex life cycle in response to various environmental stimuli (nutrient attractants, repellents, hazards). Physarum has drawn considerable attention from researchers due to its simple cellular structure and decentralized control system.

    Early models of Physarum focused on individual biological aspects of its behavior, most notably the generation, coupling, and phase interactions between oscillators within the plasmodium. More recently, the overall behavior of the organism has been modeled in attempts to discover more about its distributed computation abilities (Jones 2015). Over the past few years there has been a surge of research exploring the computational capabilities of Physarum, driven primarily by Toshiyuki Nakagaki (Jones 2011). Nakagaki and colleagues have shown that Physarum's behavior extends to computational abilities such as solving path-planning problems (Nakagaki, Yamada 2000) and combinatorial optimization problems (Aono, Hara 2007).

    For several decades, many musicians have made pieces with specific mathematical functions or physical modeling to derive interesting musical events or emergent behavior. Algorithmic composition involves not only writing a particular algorithm to generate musical events but also mapping or adapting existing data to musical parameters. Biological ecosystems at various metaphorical levels have been used for creative discovery (McCormack 2012). In such systems, rather than creating complex top-down approaches for control or creation, individual entities within an ecosystem make decisions and interact with one another to create what Eigenfeldt (2010) describes as successful complex, dynamic, and emergent systems (Pearse 2016).


    The Physarum simulation is implemented as a multi-agent simulation in GLSL (OpenGL Shading Language) to expand the means of synthesis and the underlying principles of self-organization in music through non-linear, emergent behavior. Following the multi-agent model described by Jones (2015), the simulation is not used merely to generate visual patterns: its local oscillation patterns are translated directly into sound. Each agent has a “window” (a square region in the simulation) that monitors the direction of specific agent behavior and samples the local oscillation data of its neighbours. The real-time agent data generates sound via wavetable synthesis with a 2048-sample table. Within the simulation system, each agent's local movements and patterns, not just the resulting visual image, are thus directly connected to the sound-generation stage.
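    A minimal CPU-side sketch of this stage follows. It implements the sense–rotate–move–deposit cycle of the Jones (2015) multi-agent model, but the parameter values and the window-to-wavetable mapping are illustrative assumptions rather than the piece's actual GLSL implementation.

```python
# Minimal sketch of a Jones-style multi-agent Physarum model (Jones 2015).
# The piece itself runs on the GPU in GLSL; parameters and the
# window-to-wavetable mapping below are assumptions for illustration.
import numpy as np

W = H = 256           # trail-map resolution
N = 4096              # number of agents
SA = np.radians(45)   # sensor angle offset
SO = 9                # sensor offset distance (cells)
RA = np.radians(45)   # rotation angle per step
STEP = 1.0            # move distance per step
DECAY = 0.9           # trail decay factor

trail = np.zeros((H, W), dtype=np.float32)
pos = np.random.rand(N, 2) * [W, H]
heading = np.random.rand(N) * 2 * np.pi

def sense(offset_angle):
    """Sample the trail map at each agent's sensor position."""
    a = heading + offset_angle
    sx = (pos[:, 0] + SO * np.cos(a)).astype(int) % W
    sy = (pos[:, 1] + SO * np.sin(a)).astype(int) % H
    return trail[sy, sx]

def step():
    global pos, heading
    f, fl, fr = sense(0.0), sense(-SA), sense(SA)
    # Steer toward the strongest trail concentration.
    heading = np.where(fl > np.maximum(f, fr), heading - RA, heading)
    heading = np.where(fr > np.maximum(f, fl), heading + RA, heading)
    pos = (pos + STEP * np.stack([np.cos(heading), np.sin(heading)], axis=1)) % [W, H]
    # Deposit trail and let it decay (diffusion omitted for brevity).
    trail[pos[:, 1].astype(int), pos[:, 0].astype(int)] += 1.0
    trail *= DECAY

def window_to_wavetable(x, y, size=64, table_len=2048):
    """Assumed mapping: flatten a local 'window' of the trail map and
    resample it to a 2048-sample wavetable for oscillator playback."""
    win = trail[y:y + size, x:x + size].flatten()
    win = win - win.mean()                             # remove DC offset
    idx = np.linspace(0, len(win) - 1, table_len)
    return np.interp(idx, np.arange(len(win)), win)

for _ in range(200):
    step()
wavetable = window_to_wavetable(96, 96)
```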

    The sound generated from the simulation and the RAVE autoencoder play in parallel within one system. Canonne and Garnier (2011) illustrate nonlinear behavior in free improvisation and derive mathematical models from thermal physics for nonlinear behavior in on-stage improvisation. In this manner, a real-time correlation of the agents can be represented as specific values. These values control the balance between the sound of the Physarum simulation and the RAVE (Caillon and Esling 2021) autoencoder, as well as the amount of negative feedback in the system. This correlation, combined with simulation data such as agent speed, direction, and density, is used as the deterministic factor for the musical form.
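    The sketch below illustrates one possible way to reduce the agent state to such control values; the specific statistics (mean speed, heading coherence, window density) and their scaling are assumptions, not the exact mapping used in the piece.

```python
# Illustrative (assumed) mapping from agent statistics to control values:
# a crossfade between the simulation sound and the RAVE output, and the
# amount of negative feedback in the system.
import numpy as np

def control_values(pos, heading, prev_pos):
    """Derive normalised control values from the agent state arrays."""
    speed = np.linalg.norm(pos - prev_pos, axis=1).mean()
    # Direction coherence: length of the mean heading vector, in [0, 1].
    coherence = np.abs(np.mean(np.exp(1j * heading)))
    # Local density: fraction of agents inside a reference window.
    density = np.mean((pos[:, 0] < 64) & (pos[:, 1] < 64))
    crossfade = float(np.clip(coherence, 0.0, 1.0))      # 0 = simulation, 1 = RAVE
    feedback = float(np.clip(1.0 - density, 0.0, 0.95))  # keep feedback below unity
    return crossfade, feedback, speed
```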

    The RAVE autoencoder is trained on human voices, artificial hardware sounds, and sounds reminiscent of ghosts and extraterrestrials in order to explore a new auditory domain and unfold possible connections between humans and non-humans, objects and organisms, and reality and non-reality. It intertwines machine algorithms with sound and uses the principles of an organism's behavior to expand the possibilities of self-organizing sound and the organicity of sound in space.
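    As a rough sketch of how the performance patch might drive a pretrained RAVE model, the code below assumes a TorchScript model exported with the acids-ircam RAVE tooling, which exposes encode() and decode() methods; the file name and the latent-space scaling are hypothetical.

```python
# Minimal sketch of driving a pretrained RAVE model with values derived
# from the simulation. The exported model file name and the latent
# scaling are hypothetical, not the author's exact setup.
import torch

model = torch.jit.load("rave_voice_hardware.ts").eval()  # hypothetical export

def process_block(audio_block, crossfade, feedback_amount):
    """Reconstruct one mono audio block (NumPy array) through RAVE,
    bending the latent space with the simulation-derived crossfade."""
    x = torch.from_numpy(audio_block).float().reshape(1, 1, -1)
    with torch.no_grad():
        z = model.encode(x)                  # latent tensor (1, n_latents, time)
        z = z * (1.0 + crossfade)            # assumed latent scaling
        y = model.decode(z).squeeze().numpy()
    # Negative feedback: attenuate before the block is fed back into the system.
    return y * (1.0 - feedback_amount)
```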

Internal sound process diagram

Conclusion


The suggested approach aims to reach self-organization in music through the simulation of Physarum and neural networks, whose algorithms form a music system built from small-scale structural sets. Based on interdisciplinary research, each substructure is designed as a single organism that joins the whole system as a musical form, one that can potentially develop the principle of a self-organized and evolving system. This approach does not merely adapt the algorithms of organisms to sound: it connects them directly to sound generation, and the correlation of small-scale motility defines the whole musical form. Data derived from the Physarum simulation is connected directly to the AI for sound generation and regulates the feedback of the entire system, intertwining organic processes with contemporary computational technology. Under the self-organizing principle, the piece creates music with emergent, dynamic musical behavior. Searching the territory between composition and organisms' behavior also explores unheard regions of sound between the existing and the non-existing, the organic and the non-organic.

Technical Rider

2 ch Jack audio outputs

Ethics statement

As an artist using programming synthesis with AI, I am committed to upholding ethical principles in my artwork. Using technology in the creative process carries unique ethical challenges and responsibilities.

I want to clarify that the works I will create using programming synthesis with AI are not intended for financial gain. Instead, they are a personal exploration of the possibilities of real-time sound work, motivated by my curiosity and desire to experiment with new artistic techniques.

In creating my artwork, I strive to maintain the following ethical principles:

  1. Respect for others: I will not create artwork that denigrates, discriminates against, or harms individuals or groups based on their race, ethnicity, gender, sexual orientation, religion, or any other characteristic. I am aware of the potential societal impact of my artwork, and I strive to create works that are inclusive, diverse, and representative of a wide range of perspectives and experiences.

  2. Authenticity and integrity: I will not plagiarize or appropriate the work of others without proper attribution or permission. I recognize that using programming synthesis with AI carries unique ethical considerations related to intellectual property. I will ensure that my artwork is an original expression of my creative vision.

  3. Transparency and accountability: I will be transparent about my creative process, including using programming synthesis with AI, and provide clear information about the sources of my data and algorithms. Artists have a responsibility to be accountable for their actions and consider their work's potential impact on society and the environment.

  4. Social responsibility: Art can be a powerful tool for social change, and I am committed to using my artwork to promote positive social and environmental impact. I recognize that using programming synthesis with AI carries potential societal impact. I strive to create works that raise awareness about critical social issues, promote diversity and inclusivity, and encourage positive social and environmental action.

  5. Environmental responsibility: I recognize the impact that art production can have on the environment, and I am committed to minimizing the environmental impact of my artwork. I strive to use sustainable materials and techniques whenever possible and try to reduce waste and pollution in my creative process.

By adhering to these ethical principles, I hope to create artwork that reflects my values and beliefs and promotes positive social and environmental change. Programming synthesis with AI offers exciting possibilities for artistic expression, and I am committed to using this technology ethically and responsibly.

Name/Affiliation/Bio

Gyuchul Moon (*1988, KR/NL) is a sound, audiovisual, and new media artist working at the interface of science and technology. He explores the underlying mechanisms of generative sound through sound performance and installation.

His projects seek to evoke new imaginations about sound. Based on fundamental questions about sound, technology, and complex systems and their entanglement, he explores the physical embodiment of sound acting as a generative form. He explores new musical forms that bridge music with alternative technological perspectives and a new material understanding using AI, self-regulation, and self-organizing principles, thereby expanding the aesthetics of sound and exploring the possibility of the organic composition of sound.

He has presented his sound and audiovisual works at ZKM (Karlsruhe, DE), Sonic Acts Festival (Amsterdam, NL), MUTEK (Japan, Mexico, and Montreal virtual expo, CA), SXSW (Austin, US), WeSA (Seoul, KR), and PRECTXE (Bucheon, KR), and in exhibitions at the Sejong Center (Seoul, KR) and Art Center Nabi (Seoul, KR). He was selected for artist residency programs at ZKM (Sonic Space, Karlsruhe, DE) and EMS (Elektronmusikstudion, Stockholm, SE).

Contact


Homepage: https://gyuchulmoon.com
E-mail: [email protected]
Mobile: +31 (0)6 83652452

Programme Notes

The composition intertwines a simulation of Physarum (a genus of slime molds) with deep learning algorithms. Physarum exhibits convergence, divergence, and oscillation within a multi-agent simulation. The resulting local oscillations generate visual and auditory patterns governing sound oscillation and modulation. RAVE (Real-time Audio Variational Autoencoder) is trained on human voices, artificial hardware sounds, and extraterrestrial-like tones. RAVE is controlled and played in parallel with the Physarum simulation, generating non-linear behaviour and musical forms. This meeting of organism simulation and AI explores links between humans and non-humans, objects and organisms, reality and unreality. The performance, under the principle of self-organisation, explores sonic possibilities and organic forms in music.

Bibliography

Aono, M. and Hara, M. (2007). Amoeba-based nonequilibrium neurocomputer utilizing fluctuations and instability. In Lecture notes in computer science, vol. 4618, p. 41

Caillon, A. and Esling, P. (2021). RAVE: A Variational Autoencoder for Fast and High-Quality Neural Audio Synthesis. arXiv, December 15, 2021. http://arxiv.org/abs/2111.05011.


Eigenfeldt, A. (2010). Coming together: negotiated content by multi-agents. Proceedings of the 18th ACM international conference on Multimedia. ACM, 1583-1586.

Heylighen, F. (2020). The Science of Self-organization and Adaptivity. Organizational Intelligence and Learning, and Complexity: v. 3 EOLSS Publishers Co Ltd. 

Holtzman, S. R. (1978). “A Description of an Automatic Digital Sound Synthesis Instrument.” DAI research report No. 59. Edinburgh: Department of Artificial Intelligence. pp. 1

Jones, J. (2015). Applications of Multi-Agent Slime Mould Computing. arXiv, November 18, 2015. http://arxiv.org/abs/1511.05774. pp. 3–5.

Jones, J. (2010). Characteristics of Pattern Formation and Evolution in Approximations of Physarum Transport Networks. Artificial Life 16, no. 2 (April 2010): 127–53. https://doi.org/10.1162/artl.2010.16.2.16202

McCormack, J. (2012). Creative ecosystems. Computers and creativity. Springer.

Nakagaki, T., Yamada, H., and Toth, A. (2000). Maze-solving by an amoeboid organism. Nature 407, 470.

Pearse, S. (2016).  Agent-Based Graphic Sound Synthesis and Acousmatic Composition.  https://doi.org/10.13140/RG.2.2.11068.44165.

Sanfilippo, D. and Valle, A. (2013). “Feedback Systems: An Analytical Framework.” Computer Music Journal 37, no. 2 (June 2013): 12–27. https://doi.org/10.1162/COMJ_a_00176.
