Morpheme: A Multidimensional Sketching Interface for Interaction with Corpus-Based Concatenative Synthesis
Morpheme allows the control of a concatenative synthesis module (see CataRT) through the act of sketching on a digital canvas. Tapping into sketching's phenomenal capacity as a medium for conceptualising and exploring ideas and designs, we aim to identify a set of intuitive metaphors that facilitate visual exploration of data-driven sound synthesis. The goal is to enable the synthesis of sound and the expression of compositional intention by providing perceptually meaningful visual descriptions of sound attributes, and thereby to support interaction with concatenative synthesis for creative purposes (e.g. sound design, electroacoustic composition).

The implementation of concatenative synthesis that Morpheme uses works by segmenting a number of audio files into small units. These audio units are then analysed, tagged with the analysis data, and stored in a database. Sound is synthesised by retrieving and recombining audio units from the database in real time, matching their audio features to the features extracted from a target input data stream. Morpheme performs statistical analysis of the digital sketch developed by a practitioner and uses the data extracted from the canvas as the target features for the selection of audio units. Through the iterative process of sketching (action, reflection, production), control over the sound is attained.
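The unit-selection step described above can be sketched as a nearest-neighbour search over the corpus's analysis features. The snippet below is a minimal illustration of that idea only, with made-up feature values; it is not CataRT's or Morpheme's actual selection code.

```python
import numpy as np

# Hypothetical corpus: each row describes one audio unit by two
# analysis features, e.g. [spectral centroid (Hz), dissonance (0-1)].
corpus_features = np.array([
    [440.0, 0.10],
    [880.0, 0.35],
    [1760.0, 0.80],
    [3520.0, 0.55],
])

def select_unit(target, corpus):
    """Return the index of the corpus unit whose feature vector is
    closest (Euclidean distance) to the target feature vector."""
    # Normalise each feature column so distances are comparable
    # across features with different ranges.
    lo, hi = corpus.min(axis=0), corpus.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    corpus_n = (corpus - lo) / span
    target_n = (np.asarray(target) - lo) / span
    distances = np.linalg.norm(corpus_n - target_n, axis=1)
    return int(np.argmin(distances))

# A target vector derived from the sketch analysis selects the
# best-matching unit, which would then be played back.
best = select_unit([900.0, 0.4], corpus_features)  # -> 1 (the 880 Hz unit)
```

In a real-time system this search runs once per synthesis frame, so the target stream drawn on the canvas continuously steers which units are retrieved.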
Designing sound effects using the Morpheme interface
Please follow the link to watch a video that demonstrates a series of sound designs for visual media created using the system: Morpheme – Achromatic Mapping
The tables below show the audio-visual associations used in the two mappings (the Achromatic and Chromatic mappings, respectively). Each mapping consists of three A/V associations. The examples demonstrate each A/V feature association using three different audio corpora.
| Achromatic Mapping – Audio-visual feature associations | Video example |
| --- | --- |
| Texture compactness – Sound dissonance | Example |
| Chromatic Mapping – Audio-visual feature associations | Video examples |
| --- | --- |
| Color brightness – Spectral centroid | Example |
| Brightness variance – Sound dissonance | Example |
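Each association above couples one visual feature to one audio descriptor. A simple way to realise such a coupling, shown below purely as an illustrative sketch (the ranges and the linear form are assumptions, not Morpheme's documented mapping), is to rescale a normalised visual feature onto the target descriptor's range:

```python
def map_feature(value, out_lo, out_hi):
    """Linearly rescale a visual feature in [0, 1] onto a target
    audio-descriptor range, clamping out-of-range input."""
    value = min(max(value, 0.0), 1.0)  # clamp to the unit interval
    return out_lo + value * (out_hi - out_lo)

# e.g. colour brightness 0.5 -> a mid-range spectral centroid target (Hz)
target_centroid = map_feature(0.5, 200.0, 8000.0)  # -> 4100.0
```

The resulting value becomes one component of the target feature vector used for unit selection.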
Visual Feature Extraction
The table below explains how visual features are estimated.
| Visual feature | Method of analysis |
| --- | --- |
| Size/thickness | Estimated by filtering out the background and counting the remaining pixels. |
| Vertical position | Estimated from the centroid of the ON pixels in a binary image, see Equation 1 & Equation 2. |
| Texture compactness | Estimated from the ratio between the painted area and its perimeter, see Equation 3. |
| Texture variance | Estimated from the entropy of the histogram of intensities of a greyscale image, see Equation 4. |
| Color variance | Estimated from the entropy of the histogram of the HSL matrix (i.e. Hue, Saturation, and Lightness). |
| Brightness variance | Estimated from the coefficient of variation of the histogram of the lightness matrix, see Equation 5. |
| Color brightness | Mean lightness, filtering out background pixels (i.e. white pixels). |
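A few of these estimates can be sketched in a handful of lines of NumPy. The snippet below is an illustrative approximation under assumed conventions (greyscale canvas, white background, a near-white threshold of 250), not Morpheme's actual analysis code:

```python
import numpy as np

def sketch_features(gray):
    """Estimate size, vertical position, and intensity-histogram entropy
    from a greyscale canvas (2-D array, 0 = black stroke, 255 = white
    background). Threshold and details are illustrative assumptions."""
    ink = gray < 250                  # filter out the white background
    ys, _ = np.nonzero(ink)

    # Size/thickness: count of non-background pixels.
    size = int(ink.sum())

    # Vertical position: centroid row of the ON pixels (binary image).
    vertical = float(ys.mean()) if size else 0.0

    # Texture variance: entropy of the stroke's intensity histogram.
    hist, _ = np.histogram(gray[ink], bins=256, range=(0, 256))
    p = hist / hist.sum() if size else np.array([1.0])
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())

    return {"size": size, "vertical": vertical, "entropy": entropy}

canvas = np.full((8, 8), 255, dtype=np.uint8)
canvas[2:4, 1:5] = 0                  # a small dark stroke
features = sketch_features(canvas)    # size 8, centroid row 2.5
```

Each per-frame feature vector computed this way supplies the target values that drive the unit selection described earlier.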
Publications
Tsiros, A. (2014). Evaluating the Similarity Between Audio-Visual Features Using Corpus-Based Concatenative Synthesis. In Proceedings of the International Conference on New Interfaces for Musical Expression, London.
Tsiros, A. (2013). The Dimensions and Complexities of Audiovisual Association. In Proceedings of Electronic Visualization and the Arts, London.
Tsiros, A. (2013). A Multidimensional Sketching Interface for Corpus-Based Concatenative Synthesis. In Proceedings of the International Conference on Auditory Display.
Tsiros, A., Leplatre, G., & Smyth, M. (2012). Sketching Concatenative Synthesis: Audiovisual Isomorphism in Reduced Modes. In Proceedings of the 9th Sound and Music Computing Conference, Copenhagen, Denmark.