Human-Sound Interaction (HSI)

Human-Sound Interaction (HSI) is a project that explores a new interaction design paradigm: direct, embodied interaction with sound via a tangible and visible object. HSI is investigated through cross-modal representations of auditory feedback, and a number of exploratory studies have been carried out to examine it.

SoundSculpt

SoundSculpt is a system that combines holographic projection with mid-air haptic feedback: sounds are visualised as a deformable container whose shape is determined by their spectral signature. With an interaction style analogous to moulding and sculpting, the user boosts and attenuates frequency components of the sound by manipulating the mid-air holographic visualisation of the container directly with hand gestures.
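The boosting and attenuation of frequency components described above can be sketched as spectral gain applied to FFT bins. This is a minimal illustration, not the SoundSculpt implementation; the function name and the band-gain representation are assumptions for the sketch (in the real system, gains would be derived from the deformation gestures applied to the container).

```python
import numpy as np

def apply_spectral_gain(samples, gains, sr=44100):
    """Boost or attenuate frequency bands of a mono signal.

    gains: list of (low_hz, high_hz, gain) tuples; gain > 1 boosts,
    gain < 1 attenuates the band -- a stand-in for a moulding gesture
    that pushes a region of the container in or pulls it out.
    """
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sr)
    for low, high, gain in gains:
        band = (freqs >= low) & (freqs < high)
        spectrum[band] *= gain
    # Back to the time domain, preserving the original length.
    return np.fft.irfft(spectrum, n=len(samples))
```

For example, attenuating the 300-600 Hz band would "flatten" the corresponding region of the container's visualised shape.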

Interaction with 'sound-objects'

System 2 enabled the exploration of interaction with 'sound-objects'. Here the term 'sound-object' describes the visual representation of a discrete sound source within the virtual space. This differs from the Schaefferian definition of the objet sonore (Schaeffer 1966), which refers to a sound event with a fixed duration that is perceptually separated from its source (e.g. the sound of a door slamming played through a loudspeaker). In our proposed system, sound-objects can represent single digital sound sources, such as continuous or looped audio file playback, or a live audio input from a microphone or a sound card's line input.
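A sound-object in this sense can be thought of as a small data structure pairing a spatial position with an audio source. The sketch below is hypothetical (the class name, fields, and callable-source convention are assumptions, not the system's actual API); it only illustrates how both looped playback and live input fit behind one interface.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SoundObject:
    """Visual proxy of a discrete sound source placed in virtual space."""
    name: str
    position: tuple                      # (x, y, z) in the virtual space
    source: Callable[[int], List[float]] # yields the next n samples:
                                         # a file-loop reader or a live input
    gain: float = 1.0

    def render(self, n: int) -> List[float]:
        # Pull n samples from the underlying source, scaled by this
        # object's gain.
        return [s * self.gain for s in self.source(n)]
```

Because the source is just a callable, a looped file reader and a microphone callback are interchangeable behind the same object.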

Interaction with sound in Mixed Realities

A first study explored modes of interaction with sound in which virtual objects mirror the interaction afforded by their real-world counterparts.

Related Publications

Di Donato, B., Dewey, C. and Michailidis, T. (2020, forthcoming) Human-Sound Interaction: Towards a new interaction design paradigm. Proceedings of the 7th International Conference on Movement and Computing (MOCO). July 15–17. New Jersey, USA.

Di Donato, B., Dewey, C. and Michailidis, T. (2020, forthcoming) Human Sound Interaction. Proceedings of the International Conference on New Interfaces for Musical Expression, NIME'20, Royal Birmingham Conservatoire, BCU, July 21–25, 2020, Birmingham, UK. (Paper, Poster).

Di Donato, B. and Michailidis, T. (2019) Accessible interactive digital signage for visually impaired. ACM CHI'19 Workshop on Mid-Air Haptics Interfaces for Interactive Digital Signage and Kiosks. May 5–9, 2019, Glasgow, United Kingdom.

Di Donato, B. and Arterbury, T. (2019) Embodied interaction with sound, Audio Developer Conference 2019, ADC19, London, UK.

Bullock, J. and Di Donato, B. (2016) Approaches to Visualising the Spatial Position of ‘Sound-objects’. Proceedings of the Electronic Visualisation and the Arts (EVA 2016), London, UK. 10.14236/ewic/EVA2016.4

Di Donato, B. and Bullock, J. (2016) xDbox: A System for Mapping Beatboxers’ Already-Learned Gestures to Object-Based Audio Processing Parameters. Porto International Conference on Musical Gesture as Creative Interface, Universidade Católica Portuguesa, Porto, Portugal. 10.13140/RG.2.2.36716.16006.

Di Donato, B. and Bullock, J. (2015) gSpat: Live sound spatialisation using gestural control. ICAD 2015 Student Think Tank, University of Technology, Graz, Austria. 10.13140/RG.2.1.2089.7128