Interactive Audiovisual Installation

Resonance of Motion: Walking into Memory

For my degree show project, I created an installation that explores the fragmented, fluid nature of memory through sound, motion, and personal video. The user wears a motion sensor; as they move through the space, their distance from the projected image changes how the memory is experienced: sound shifts, visuals sharpen or blur, and the narrative unfolds.

Interaction Design

The interaction is driven by an ESP32-C3-Mini-1, which I made wearable by prototyping and 3D printing a custom case. The case was designed to ensure a snug fit for the sensor, a securely fitting lid, and an opening for the USB-C charging cable. The finished case was sewn onto a wristband intended for wrist support during physical activity, giving adjustability and breathability while minimizing the risk of allergic reaction for users.

The sensor was programmed in C++ in the Arduino IDE, using the CodeCell, WiFi, and esp_wifi libraries and the ESP-NOW protocol for wireless peer-to-peer communication of motion and distance values. The wearable transmits RSSI (received signal strength) and acceleration values to a second sensor connected to my laptop, which passes them to the audiovisual system in Max/MSP via serial. Because the link is wireless, the wearable can run on a swappable battery, with a backup charging at all times for continuous use.
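To illustrate the wireless link, here is a minimal sketch of what the receiver half could look like; it is not the exact project code. It assumes the ESP32 Arduino core 3.x ESP-NOW callback (which exposes each packet's RSSI at the receiver) and a hypothetical MotionPacket struct for the acceleration values.

```cpp
// Receiver side (plugged into the laptop): reads each packet's RSSI as a
// distance proxy and forwards it, with the acceleration values, over serial
// for Max/MSP to parse. Illustrative sketch, not the installation's code.
#include <WiFi.h>
#include <esp_now.h>
#include <cstring>

struct MotionPacket {        // hypothetical layout; must match the wearable's struct
  float ax, ay, az;          // accelerometer values
};

void onReceive(const esp_now_recv_info_t *info, const uint8_t *data, int len) {
  if (len != sizeof(MotionPacket)) return;
  MotionPacket pkt;
  memcpy(&pkt, data, sizeof(pkt));
  int rssi = info->rx_ctrl->rssi;                     // signal strength ~ distance
  // One space-separated line per packet, easy to parse from Max/MSP's serial object
  Serial.printf("%d %.3f %.3f %.3f\n", rssi, pkt.ax, pkt.ay, pkt.az);
}

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);                                // ESP-NOW runs on the STA interface
  esp_now_init();
  esp_now_register_recv_cb(onReceive);
}

void loop() {}                                        // everything happens in the callback
```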

Audiovisual System / Code Structure

Designed in Max/MSP, the audiovisual system comprises video footage sourced from my personal archive, audio samples taken from these videos, and piano melodies I composed and recorded. Data from the wearable sensor is mapped to parameters of this system: at greater distance from the projection, the user hears fragmented piano and ambient echoes and sees distorted video, disconnected like distant thoughts. Closer, clarity emerges: voices, clearer images, and stories surface. The full code architecture is illustrated on the right.
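The distance-to-parameter mapping itself lives in the Max/MSP patch; the short sketch below only illustrates the kind of scaling involved. It assumes RSSI readings roughly between -90 dBm (far) and -40 dBm (near the projection), and the parameter names are hypothetical.

```cpp
// Illustration of the distance-to-parameter mapping, not the actual patch.
#include <algorithm>
#include <cstdio>

// Map x from [inMin, inMax] to [outMin, outMax], clamped to the output range.
float scaleClamped(float x, float inMin, float inMax, float outMin, float outMax) {
  float t = (x - inMin) / (inMax - inMin);
  t = std::min(1.0f, std::max(0.0f, t));
  return outMin + t * (outMax - outMin);
}

int main() {
  float rssi = -72.0f;                                // example reading from the receiver
  float clarity = scaleClamped(rssi, -90, -40, 0, 1); // 0 = distant/blurred, 1 = close/clear
  float blurAmount = 1.0f - clarity;                  // video distortion grows with distance
  float voiceGain  = clarity;                         // spoken audio surfaces up close
  float echoMix    = blurAmount;                      // ambient echoes dominate far away
  std::printf("clarity %.2f blur %.2f voice %.2f echo %.2f\n",
              clarity, blurAmount, voiceGain, echoMix);
  return 0;
}
```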

With the wave of a hand, a new video is selected; the selection is guided by Markov chains rather than fixed sequencing. Each video was tagged by location or theme: scenic (no people), ocean/beach, music (singing or piano), activities (biking, sledding, etc.), and running/moving/facing away from the camera. Some videos carried multiple tags, reflecting the complex nature of memory compartmentalization. A set of weighted transition probabilities was assigned to each video, so that, depending on the current state, the next video triggered would most likely be thematically connected. The same process was applied to the spoken audio samples, linking each clip to those that would logically follow it. The result is a system in which the narrative can progress, rather than one driven by fixed sequencing or completely random transitions. The implementation process is shown below, with a simplified sketch of the selection logic after the diagrams.

[Figure: Code Structure Diagram]

[Figure: Markov Chain Implementation, showing Thematic Tags and Weighted Probabilities]
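To make the selection step concrete, here is a simplified sketch of a weighted Markov transition over the five thematic groups. The weights are placeholders rather than the installation's values, and the real version runs inside the Max/MSP patch.

```cpp
// Weighted Markov step over thematic groups: each hand wave draws the next
// theme from the current theme's probability row. Placeholder weights.
#include <cstdio>
#include <random>
#include <vector>

enum Theme { SCENIC, OCEAN, MUSIC, ACTIVITY, AWAY, THEME_COUNT };
const char *names[THEME_COUNT] = {"scenic", "ocean/beach", "music",
                                  "activities", "running/away"};

// Row = current theme, column = weight of moving to that theme.
// Heavier weights on related themes keep consecutive clips thematically linked.
const std::vector<std::vector<double>> transitions = {
    /* scenic   */ {0.40, 0.25, 0.10, 0.15, 0.10},
    /* ocean    */ {0.25, 0.40, 0.10, 0.15, 0.10},
    /* music    */ {0.10, 0.10, 0.50, 0.20, 0.10},
    /* activity */ {0.10, 0.15, 0.15, 0.40, 0.20},
    /* away     */ {0.15, 0.10, 0.10, 0.25, 0.40},
};

int nextTheme(int current, std::mt19937 &rng) {
  std::discrete_distribution<int> pick(transitions[current].begin(),
                                       transitions[current].end());
  return pick(rng);   // weighted draw: likely related, never fully fixed
}

int main() {
  std::mt19937 rng(std::random_device{}());
  int state = SCENIC;
  for (int step = 0; step < 8; ++step) {   // simulate a few hand waves
    state = nextTheme(state, rng);
    std::printf("wave %d -> %s\n", step + 1, names[state]);
  }
  return 0;
}
```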
