Stephen Roddy PhD

Portfolio documenting and showcasing some recent projects.

Urban Affect Networks

Project Overview

Urban Affect Networks emerged from a broader project entitled ‘Auditory Display for Large-scale IoT Networks’, carried out at the CONNECT Centre, Trinity College Dublin. The Urban Affect Networks project links live electronic music performance, IoT network data, and artificial intelligence techniques. Each performance draws data from networks of IoT devices placed around Dublin city. Network traffic data is mapped to control parameters of the live performance, and how this takes place is mediated by a rule-based AI system called PerformIoT. The mood or affective state of the AI system is determined by the state of Dublin city, as represented through the IoT sensor data. The system’s mood in turn determines the musical choices it makes while improvising alongside a human performer. Each performance with the system is unique, as it represents a complex array of data relations describing the state of Dublin city at any given time. The project involved the iterative development of the system, with each performance acting as an evaluation after which the system was expanded and further refined.

Iterative Development through Live Performance

The first iteration of ‘PerformIoT’ was a rule-based AI system that employed the traditional approaches pioneered during the era of Good Old-Fashioned AI (GOFAI), approaches that have more recently fallen from favour as machine-learning-driven techniques have come to dominate. The bulk of the functional code is written in Python and is used to extend the capabilities of the Ableton Live 10 suite to leverage IoT data in live electronic music performance contexts. This early version of the PerformIoT AI system grew out of work undertaken to sonify IoT network data from a number of sources at CONNECT, the Science Foundation Ireland Research Centre for Future Networks headquartered at Trinity College Dublin. This iteration of the system retrieves data from the relevant APIs and maps it to OSC for use in a live performance setting.
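A minimal sketch of this retrieve-and-map loop, assuming a hypothetical JSON sensor endpoint and the python-osc library; the URL, field name, and OSC address below are illustrative placeholders rather than the actual PerformIoT code:

```python
import time

import requests
from pythonosc.udp_client import SimpleUDPClient

SENSOR_URL = "https://example.org/api/noise"  # placeholder IoT endpoint (assumption)
client = SimpleUDPClient("127.0.0.1", 9000)   # OSC receiver, e.g. a Max for Live patch

def scale(value, in_min, in_max, out_min=0.0, out_max=1.0):
    """Clamp a sensor reading and rescale it into a normalised control range."""
    value = max(in_min, min(in_max, value))
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

while True:
    reading = requests.get(SENSOR_URL, timeout=5).json()
    level_db = reading["noise_db"]                       # assumed field name
    client.send_message("/perform/noise", scale(level_db, 30.0, 100.0))
    time.sleep(1.0)                                      # poll once per second
```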

Sonic Dreams 2017

The framework was first used for the piece ‘Noise Loops for Laptop, Improvised Electric Guitar and Dublin City Noise Data’, performed at the 2017 Sonic Dreams Festival. In this piece, IoT data from sensors measuring ambient noise levels around Dublin city was mapped to control performance parameters of a live electric guitar improvisation. The data controlled the timbre of the guitar, using a multiband distortion to morph the sound, and also drove buffer, delay, and filtering processes applied to the performance as well as the synthesis of its percussive elements. In this iteration, the system controlled live DSP processes that mashed up and remixed the performance in real time on the basis of the IoT data.
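As a hedged sketch of the kind of one-to-many mapping used in the piece, a single normalised noise reading might fan out to several DSP controls at once; the OSC addresses, scaling curves, and ranges here are assumptions for illustration only:

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # OSC receiver for the live set

def map_noise_to_dsp(noise_norm):
    """Fan a normalised (0-1) noise level out to timbre, buffer, and percussion controls."""
    client.send_message("/guitar/distortion/low_drive", noise_norm * 0.8)
    client.send_message("/guitar/distortion/high_drive", noise_norm ** 2)  # non-linear response
    client.send_message("/fx/delay/feedback", 0.2 + noise_norm * 0.6)     # keep feedback bounded
    client.send_message("/percussion/density", noise_norm)

map_noise_to_dsp(0.45)  # e.g. a moderately noisy street reading
```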


Dublin City Noise Loops 

xCoAx 2018

The next performance with the system, titled ‘Signal to Noise Loops i++’, took place at xCoAx in Madrid in 2018. This performance involved a more refined version of the AI-driven PerformIoT system, which had been further developed after the Sonic Dreams festival performance. The system was updated to generate music alongside the human performer. Machine listening techniques were employed whereby the system would listen to what the human performer played and then decide both what it wanted to play and whether or not it wanted to intervene in the human’s performance. At the conference I gave a talk describing how this iteration of the system worked and also took part in a broader artists’ panel.
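A minimal sketch of this listen-then-decide step, using only NumPy; the features, thresholds, and the generate/intervene/accompany rule are illustrative assumptions rather than the actual machine-listening code:

```python
import numpy as np

def analyse_buffer(samples, frame=1024):
    """Return coarse loudness and activity features from a mono audio buffer."""
    frames = samples[: len(samples) // frame * frame].reshape(-1, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    activity = np.mean(rms > 0.05)           # fraction of frames above a noise floor
    return float(np.mean(rms)), float(activity)

def decide(loudness, activity):
    """Simple rule base: contribute when the human leaves space, intervene when busy."""
    if activity < 0.2:
        return "generate"    # human is quiet: add new material
    if loudness > 0.3:
        return "intervene"   # human is dense and loud: reshape their material
    return "accompany"       # otherwise follow along

loud, act = analyse_buffer(np.random.randn(44100) * 0.1)  # one second of stand-in audio
print(decide(loud, act))
```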


ISSTA & CSMC 2018

A third performance with an updated system took place at ISSTA 2018 in Derry/Londonderry; this version incorporated genetic algorithms into the PerformIoT system. A fourth performance with a newer version of the system took place at CSMC 2018 in Dublin. Signal to Noise Loops i++ employed Liine’s Lemur app for iOS to control the synthesis of audio materials in Native Instruments’ Reaktor, which ran patches employing a mixture of additive and subtractive synthesis techniques. In this performance, traffic data from IoT devices around Dublin was mapped to control synthesis, timbral, and performance parameters of the piece.
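As an illustration of how a genetic algorithm might evolve rhythmic material inside a system like this, the following sketch evolves a 16-step pattern toward a target onset density, which could itself be derived from traffic data; the fitness function and parameters are assumptions, not the system’s actual implementation:

```python
import random

STEPS = 16

def fitness(pattern, target_density):
    """Score a 16-step pattern by how close its onset density is to the target."""
    return -abs(sum(pattern) / STEPS - target_density)

def evolve(target_density, pop_size=40, generations=60, mutation=0.05):
    pop = [[random.randint(0, 1) for _ in range(STEPS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, target_density), reverse=True)
        parents = pop[: pop_size // 2]                     # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, STEPS)
            child = a[:cut] + b[cut:]                      # one-point crossover
            child = [1 - s if random.random() < mutation else s for s in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: fitness(p, target_density))

print(evolve(target_density=0.4))  # e.g. traffic flow normalised into 0-1
```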
The current submission, Urban Affect Networks, represents a further development of the system. The AI-driven PerformIoT module has been updated to give it even more control. Alongside generating its own musical material, it now treats the material performed by the human player as ‘optional’. It listens to the material played by the human and decides what it likes and what it does not like, keeping and sometimes embellishing what it likes and completely rewriting, on the fly, what it does not.
The system makes these decisions based on the ‘mood’ or ‘affective state’ of Dublin city. The AI system accesses the IoT data and reads noise levels, pollution levels, traffic flows (pedestrian and vehicle), emergency warnings, and weather data. These data points define the affective state or mood of the AI.
When the data represents a healthy and functioning city, the AI is in a good mood and collaborates closely with the human performer, coordinating its music-making with that of the human. When the city is in a sub-optimal state, the AI takes on a negative affective state or mood and begins to overwrite the human performer, making more independent musical decisions that reflect the state of the city.
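A hedged sketch of how such sensor streams might be fused into a single mood label; the field names, weights, and thresholds are illustrative assumptions, not the system’s actual rule base:

```python
def city_mood(readings):
    """Map normalised (0-1, higher = worse) city indicators to a mood label."""
    weights = {"noise": 0.2, "pollution": 0.3, "congestion": 0.3,
               "emergencies": 0.1, "bad_weather": 0.1}
    stress = sum(weights[k] * readings.get(k, 0.0) for k in weights)
    if stress < 0.3:
        return "positive"   # healthy city: coordinate with the human player
    if stress < 0.6:
        return "neutral"
    return "negative"       # stressed city: overwrite and act independently

print(city_mood({"noise": 0.7, "pollution": 0.5, "congestion": 0.8}))  # -> "neutral"
```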



Signal to Noise Loops i++


The point of mapping data, and IoT data in particular, to sound is to leverage some of the interesting patterns that present themselves across data streams and datasets of this kind. Data-driven music is different from sonification, where the point is to faithfully communicate or represent the data to the listener. Because of its focus on finding patterns in the data that might be interesting when mapped to sonic and musical parameters, data-driven music is in many ways closer to algorithmic composition than to sonification. My previous data-driven music work employed algorithmic composition techniques and dealt with data from the global financial crash. More recently I have begun to work with IoT data, because I believe that the kinds of data we choose to measure, and our reasons for measuring them, say a lot about what a society values, cares about, and finds interesting, while the specific measurements chronicle the complex interactions between people, the technologies they create, and the worlds in which those people and technologies are situated.

While these explicit points of information may not be directly represented in a performance, the rich interleaved patterns of interaction between people, place, and technology are transposed into the sonic realm each time the system plays. Though more abstract and implicit in nature, it is the aesthetic dimensionality of these interlocked patterns that is of interest to me.

Paper on an earlier iteration of the system:



Outputs & Activities

Performances:

Recordings:

Links to performances with earlier iterations of the system:

Dublin City Noise Loops
https://open.spotify.com/track/63x9Nav3h61MNbcV6uycCX
https://music.apple.com/us/album/dublin-city-noise-loops/1450892433?i=1450892435
https://stephenroddy.bandcamp.com/track/dublin-city-noise-loops

Signal to Noise Loops i++
https://open.spotify.com/track/5B4bh4fpgR9mFqv3OgKDRs
https://music.apple.com/us/album/signal-to-noise-loops-i/1450892433?i=1450892434
https://stephenroddy.bandcamp.com/track/signal-to-noise-loops-i

Creative Skills

Sound Design. Music Composition. Live Electronic Music Performance. Instrumental Guitar Performance. Sound Art. Visual Design.

Technical

IoT Networks. Statistical Data Analytics. Python. HTTP, OSC & MIDI Protocols. GOFAI. Evolutionary Computing. Audio DSP. Creative Coding. HCI. Auditory Display & Sonification. Audio Engineering.

Tags

Urban Affect Networks. Music. Data. GOFAI.