Content and Context Aware User Interfaces for Exploring Large Music Collections
George Tzanetakis
His pioneering work on musical genre classification received an IEEE Signal Processing Society Young Author Award and is frequently cited. More recently he has been exploring new interfaces for musical expression, music robotics, computational ethnomusicology, and computer-assisted music instrument tutoring. These interdisciplinary activities combine ideas from signal processing, perception, machine learning, sensors, actuators, and human-computer interaction, with the connecting theme of making computers understand music better in order to create more effective interactions with musicians and listeners.
Date
- 12:30, Monday, April 20th, 2009
- Ea3, Torre Norte, IST
Speaker
- George Tzanetakis, University of Victoria, Canada
Abstract
The age of having five favorite cassettes of music for your car stereo is over. The explosive growth of digital music distribution and portable music players has made possible personal music collections that contain thousands of tracks. Browsing, exploring, and navigating these large collections using only textual meta-data is tedious. In this talk I will describe efforts to build content- and context-aware intelligent user interfaces that address this challenge. These interfaces combine advanced audio analysis, statistical supervised learning, visualization, human-computer interaction, and controllers beyond the traditional keyboard and mouse to create novel ways of interacting with large music collections. I will summarize the evolution of these interfaces over the past few years and provide more details on specific examples from my work. Users with vision or motor disabilities stand to benefit especially from such "intelligent" interfaces, and I will conclude with some recent work my group has been doing on assistive music browsing.
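To give a flavor of the kind of content-based analysis the abstract alludes to, here is a minimal sketch of a feature-extraction and supervised-classification pipeline. It is an illustration, not the speaker's actual system: it assumes the librosa and scikit-learn libraries, and the file paths, genre labels, and feature choices (MFCC statistics, a nearest-neighbor classifier) are hypothetical.

```python
# A minimal sketch of content-based music classification (illustrative only).
# Assumes librosa and scikit-learn; paths and labels are hypothetical.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def track_features(path):
    """Summarize a track as the mean and std of its MFCCs over time."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical labeled collection: (audio file, genre) pairs.
training = [("tracks/blues01.wav", "blues"),
            ("tracks/rock01.wav", "rock")]
X = np.array([track_features(path) for path, _ in training])
y = [genre for _, genre in training]

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)

# A new, unlabeled track can then be placed in the collection by its content,
# rather than by textual meta-data alone.
print(clf.predict([track_features("tracks/unknown.wav")]))
```

In an interface like those described in the talk, such content-derived labels or feature-space distances could drive visualization and browsing, so that similar-sounding tracks end up near each other regardless of their meta-data.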