Feeling by Numbers (AV, stereo fixed medium) December 2021
This piece explores the sonification of facial-analysis data from deepface, an open-source machine-learning Python library that scores the emotion of a face in a given image, outputting seven base emotion values: happiness, sadness, neutrality, surprise, anger, fear, and disgust.
A video of the artist’s face was recorded and used to generate outputs from deepface. This data drove a network of oscillators in MaxMSP. Each oscillator was imbued with an individual characteristic that affects the other oscillators at given thresholds, creating a symbiotic synthesis engine.
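As a rough illustration of the mapping described above, the sketch below converts one frame's seven emotion scores into per-oscillator frequencies. This is a hypothetical Python analogue, not the artist's MaxMSP patch: the function name, the base frequency, and the octave-span mapping are all invented for illustration; only the seven emotion labels come from deepface's output.

```python
# The seven base emotions deepface reports for a face.
EMOTIONS = ["happy", "sad", "neutral", "surprise", "angry", "fear", "disgust"]

def emotion_to_frequencies(scores, base_hz=110.0, span_octaves=3.0):
    """Hypothetical mapping of per-emotion scores (0-100) onto
    oscillator frequencies: each emotion gets its own oscillator,
    spread across a span of octaves above base_hz, and its score
    detunes the oscillator within its slot."""
    freqs = {}
    for i, emotion in enumerate(EMOTIONS):
        score = scores.get(emotion, 0.0) / 100.0  # normalise to 0..1
        # position in the span: fixed slot per emotion, shifted by score
        offset = (i / len(EMOTIONS) + score / len(EMOTIONS)) * span_octaves
        freqs[emotion] = base_hz * 2 ** offset
    return freqs

# One invented frame of scores, shaped like deepface's emotion output.
frame = {"happy": 80.0, "sad": 5.0, "neutral": 10.0,
         "surprise": 2.0, "angry": 1.0, "fear": 1.0, "disgust": 1.0}
print(emotion_to_frequencies(frame))
```

In the actual piece the oscillators also influence one another at thresholds; that feedback behaviour is omitted here for brevity.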
These oscillators play out their raw data in the centre of the piece, around which the life cycle of an aestheticised algorithm develops.
These sounds were manipulated in a variety of overtly digital and synthetic ways to emphasise the surgically clean aesthetics of the source material and processes involved.
The piece sources some of its harsher sounds from data produced by a motion-tracking patch driven by the same video input.
The audio was then fed into TouchDesigner to produce the audioreactive visuals, which rely on periodic noise feedback to generate their patterns.
Overall, the piece aims to explore the implicit politics and aesthetics involved in ML algorithms that categorise human emotion against the backdrop of the near-meaninglessness of less-complex, noisier algorithms.
The piece premiered at the MANTIS Festival in Manchester in 2022.
https://drive.google.com/file/d/1RojJZPDW1KOJ6XWTEwONdpx6kXMoT_ts/view?usp=sharing