Content and Context Aware User Interfaces for Exploring Large Music Collections

From HLT@INESC-ID

Revision as of 13:17, 15 April 2009 by Rdmr
George Tzanetakis
George Tzanetakis is an Assistant Professor in the Department of Computer Science, with cross-listed appointments in ECE and Music, at the University of Victoria. He received his PhD in Computer Science from Princeton University in 2002 and was a Post-Doctoral Fellow at Carnegie Mellon University in 2002-2003. His research spans all stages of audio content analysis, such as feature extraction, segmentation, and classification, with specific emphasis on music information retrieval.

His pioneering work on musical genre classification received an IEEE Signal Processing Society Young Author Award and is frequently cited. More recently he has been exploring new interfaces for musical expression, music robotics, computational ethnomusicology, and computer-assisted music instrument tutoring. These interdisciplinary activities combine ideas from signal processing, perception, machine learning, sensors, actuators, and human-computer interaction, with the connecting theme of making computers better understand music in order to create more effective interactions with musicians and listeners.


Date

  • 12:30, Monday, April 20th, 2009
  • Ea3, Torre Norte, IST

Speaker

Abstract

The age of having five favorite cassettes of music for your car stereo is over. The explosive growth of digital music distribution and portable music players has made possible the creation of personal music collections that contain thousands of tracks. Browsing, exploring, and navigating these large music collections using only textual meta-data is tedious. In this talk I will describe efforts to build content- and context-aware intelligent user interfaces that address this challenge. These interfaces combine advanced audio analysis, statistical supervised learning, visualization, human-computer interaction, and controllers beyond the traditional keyboard and mouse to create novel ways of interacting with large music collections. I will summarize the evolution of these interfaces over the past few years and provide more details on specific examples from my work. Users with vision or motor disabilities stand to benefit especially from such "intelligent" interfaces, and I will conclude with some recent work my group has been doing on assistive music browsing.