Cognitive User Interfaces: an Engineering Approach

From HLT@INESC-ID

Steve Young
Steve Young received a BA in Electrical Sciences from Cambridge University in 1973 and a PhD in Speech Processing in 1978. He held lectureships at both Manchester and Cambridge Universities before being elected to the Chair of Information Engineering at Cambridge University in 1994. He was a co-founder and Technical Director of Entropic Ltd from 1995 until 1999, when the company was taken over by Microsoft. After a short period as an Architect at Microsoft, he returned full-time to the University in January 2001, where he is now Professor and Head of Information Engineering.

His research interests include speech recognition, language modelling, spoken dialogue and multi-media applications. He is the inventor and original author of the HTK Toolkit for building hidden Markov model-based recognition systems (see http://htk.eng.cam.ac.uk), and with Phil Woodland he developed the HTK large vocabulary speech recognition system, which has figured strongly in DARPA/NIST evaluations since it was first introduced in the early nineties. More recently he has developed statistical dialogue systems and pioneered the use of Partially Observable Markov Decision Processes for modelling them. He also has active research interests in voice transformation, emotion generation and HMM synthesis.

He has written and edited books on software engineering and speech processing, and he has published, as author and co-author, more than 200 papers in these areas. He is a Fellow of the Royal Academy of Engineering, the Institution of Electrical Engineers and the Royal Society of Arts. He served as the senior editor of Computer Speech and Language from 1993 to 2004 and is now a member of the editorial board. He is a Senior Member of the IEEE and a member of the SPS Awards Committee. He was a member of the IEEE STC Committee from 1997 to 1999 and is currently a co-opted member. He has served on the technical committees of numerous workshops and conferences. He was the recipient of an IEEE Signal Processing Society Technical Achievement Award in 2004.


Date

  • 15:00, Friday, November 13th, 2009
  • Room 336

Speaker

  • Steve Young, Cambridge University, UK

Abstract

A cognitive system is an information processing system which is able to adapt to its environment and learn from experience. In some sense it is "self-aware". It will typically utilize psychologically plausible computational representations of human cognitive processes as a basis for system designs that seek to engage the underlying mechanisms of human cognition. The benefits of the cognitive approach are nowhere more apparent than in the user interface itself, especially where the input modalities are prone to error such as in speech and gesture-based interfaces.

Most deployed user interfaces are "hard-wired" and "non-cognitive". Even if they take account of real human behavior in their initial design, once deployed their decision-making processes are frozen. They cannot learn from experience and no matter how many times they interact with a user, their ability to deal with noisy inputs, user misunderstandings and changing tasks does not improve. As a consequence, such systems are expensive to design, fragile in operation and difficult to maintain.

Whilst cognitive user interfaces provide a way forward, their development to date has mostly been focused on integrating theories of cognitive psychology and formal linguistics into computational systems. However, it is not clear that systems built on such explicit models of behavior will be any less fragile than the ones they seek to replace. An alternative engineering approach is to structure systems as probabilistic models and then use machine learning methods to optimize and adapt them on-line. The fundamentals underlying such an approach are firmly based on Bayes' theorem and Bellman's dynamic programming equation, and interestingly, there is growing evidence that humans learn using exactly the same principles.
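The two principles named above can be illustrated with a minimal sketch: a Bayesian update of a belief over a hidden state, and a Bellman backup (value iteration) on a toy two-state decision problem. All numbers here are illustrative assumptions, not taken from the talk.

```python
import numpy as np

# --- Bayes' theorem: revise a belief over a hidden state given a noisy input.
prior = np.array([0.5, 0.5])           # P(state) before observing anything
likelihood = np.array([0.8, 0.3])      # P(observation | state), assumed values
posterior = prior * likelihood
posterior /= posterior.sum()           # normalise so the belief sums to one

# --- Bellman's equation: value iteration on a tiny two-state, two-action MDP.
P = np.array([[[0.9, 0.1],             # P[a, s, s']: transition probabilities
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.6, 0.4]]])
R = np.array([[1.0, 0.0],              # R[a, s]: immediate reward
              [0.0, 2.0]])
gamma = 0.9                            # discount factor

V = np.zeros(2)
for _ in range(500):
    # Bellman backup: Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] V[s']
    Q = R + gamma * (P @ V)
    V = Q.max(axis=0)                  # greedy over actions
```

After convergence `V` satisfies the Bellman optimality equation, i.e. applying one more backup leaves it unchanged; on-line learning methods such as reinforcement learning estimate the same quantities from interaction data rather than from a known model.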

This talk will argue that we must start designing systems which embody elements of cognitive behavior and that probabilistic inference provides the necessary engineering underpinning. It will begin by using a simple example to explain how human-computer interaction can be modeled as a partially-observable Markov decision process (POMDP). Recent results will then be presented which demonstrate that Bayesian inference and reinforcement learning also underpin the way that humans learn similar tasks. Using the example of a spoken dialogue system, some of the key issues in scaling POMDPs to real-world tasks will then be addressed. The talk will end by describing recent results obtained using a POMDP-based dialogue system to provide tourist information over the telephone.
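The POMDP view of dialogue described above can be sketched concretely: the user's goal is a hidden state, the speech recognizer produces noisy observations, and the system maintains a belief distribution over goals rather than trusting any single recognition hypothesis. The goals, confusion probabilities, and turn sequence below are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

# Hidden state: what the caller actually wants (assumed two-goal toy domain).
goals = ["hotel", "restaurant"]
belief = np.array([0.5, 0.5])                    # uniform prior over goals

# Observation model P(recognized word | true goal): the recognizer
# confuses the two goals 20% of the time (an illustrative error rate).
obs_model = {"hotel":      np.array([0.8, 0.2]),
             "restaurant": np.array([0.2, 0.8])}

def update(belief, recognized_word):
    """One Bayesian belief update after a noisy recognition result."""
    b = belief * obs_model[recognized_word]
    return b / b.sum()

# Two turns in which the recognizer reports "hotel": evidence accumulates,
# so the belief sharpens even though each individual hypothesis may be wrong.
belief = update(belief, "hotel")
belief = update(belief, "hotel")

# The dialogue policy acts on the full belief, e.g. confirming only while
# uncertainty remains, rather than committing to the top ASR hypothesis.
best_goal = goals[int(belief.argmax())]
```

This belief-tracking step is only the state-estimation half of a POMDP; the other half, optimizing which action (ask, confirm, inform) to take in each belief state, is where reinforcement learning enters.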