McGill University
The Multimodal Interaction Laboratory (MIL), based at McGill University, is affiliated with the School of Information Studies (SIS) and with CIRMMT, the Centre for Interdisciplinary Research in Music Media and Technology. The lab is directed by Dr. Catherine Guastavino.
For more information, visit the website.
Research areas
With her graduate students, Catherine Guastavino investigates two main branches of research: Human-Computer Interaction and Music Archiving/Retrieval.
Human-Computer Interaction
In the field of Human-Computer Interaction, research aims to design intuitive multimodal computer interfaces by involving end users throughout a collaborative design process. Specifically, researchers investigate what information is best conveyed using interactive sound, touch and 3D graphics as opposed to relying on text and images. Presenting information to the auditory and touch modalities can reduce visual overload and enhance user experience and immersion in computer-mediated environments.
Current projects focus on the design and evaluation of:
- Information visualization and auditory feedback to enhance navigation of subject hierarchies and audio-visual collections.
- Spatial audio rendering for virtual reality and aircraft simulators; auditory and vibratory comfort.
- Design of audio and haptic (touch) feedback for portable devices to optimize screen space.
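As a toy illustration of the kind of auditory feedback described above (a sketch for this page, not one of the lab's systems), a parameter-mapping sonification encodes a data value as pitch. The minimal Python example below assumes a linear mapping from a value in a known range to a frequency between 220 Hz and 880 Hz, and writes a short tone using only the standard library:

```python
import math
import struct
import wave

def sonify(value, lo=0.0, hi=1.0, f_lo=220.0, f_hi=880.0,
           duration=0.5, rate=44100):
    """Map a value in [lo, hi] to a pitch between f_lo and f_hi Hz
    (parameter-mapping sonification); return the pitch and raw
    16-bit mono PCM samples for a sine tone at that pitch."""
    t = (value - lo) / (hi - lo)           # normalize to 0..1
    freq = f_lo + t * (f_hi - f_lo)        # linear pitch mapping
    n = int(duration * rate)
    samples = [int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate))
               for i in range(n)]
    return freq, struct.pack("<%dh" % n, *samples)

# A value three-quarters of the way up the scale maps to 715 Hz.
freq, pcm = sonify(0.75)

with wave.open("feedback.wav", "wb") as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 16-bit samples
    w.setframerate(44100)
    w.writeframes(pcm)
```

In a real interface the mapping (pitch, loudness, timbre, spatial position) and its range would be chosen through the user-centered design process the paragraph above describes.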
Music Archiving/Retrieval
The second branch of research in Music Archiving/Retrieval focuses on how to digitize, organize and access sound recordings and musical collections. To do so, researchers investigate the sound quality of various digital audio formats. They also explore what makes two sounds or pieces of music sound similar and study the effects of various musical facets on music preference and recognition. Finally, they apply these findings to the design of Music Information Retrieval systems.
Current projects include:
- Best practices for the production and digitization of sound recordings; preservation of musical works with technological components.
- Rhythmic and melodic similarity and their possible use for automated music classification.
- Everyday sound categorization; soundscape recording, reproduction and documentation.
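To give a concrete flavor of the melodic-similarity work listed above (a common textbook approach, not necessarily the lab's method), melodies are often compared as sequences of pitch intervals, so the measure ignores transposition, and scored with edit distance:

```python
def intervals(pitches):
    """Pitch-interval representation: differences between successive
    MIDI note numbers. Invariant under transposition."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def edit_distance(s, t):
    """Classic Levenshtein distance via dynamic programming."""
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                         # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j                         # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n]

# The same melody transposed up a fifth has distance 0:
theme      = [60, 62, 64, 65, 67]   # C D E F G
transposed = [67, 69, 71, 72, 74]   # G A B C D
print(edit_distance(intervals(theme), intervals(transposed)))  # 0
```

A classifier can then assign a query melody to the class of its nearest neighbors under this distance, which is one simple route to the automated music classification mentioned in the project list.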