LECTRA (Rich Transcription of Lectures for E-Learning Applications)

From HLT@INESC-ID

Revision as of 16:19, 30 January 2008 by Imt


OPEN RESEARCH POSITION - DEADLINE JULY 30 2007

Sponsored by: FCT (POSC/PLP/58697/2004)
Start: March 2005
End: December 2007

Goals

The goal of the LECTRA project was the production of multimedia lecture contents for e-learning applications. This implies taking the recorded audio-visual signal and adding the automatically produced speech transcription as captions. The greatest research challenges are the adaptation of the recognition models to the very difficult domain of university lectures, the recognition of spontaneous speech, particularly with regard to disfluencies (filled pauses, repetitions, hesitations, false starts, etc.), and the enrichment of the automatic speech transcription with punctuation and capitalization. Producing automatic transcriptions of classroom lectures may be important for both e-learning and e-inclusion purposes.

Team

Project Leader: Isabel Trancoso

This project was carried out in cooperation with IMMI (Intelligent MultiModal Interfaces), led by Prof. Joaquim Jorge.

Summary

The goal of the LECTRA project was the production of multimedia lecture contents for e-learning applications.

The project encompassed 5 main tasks. The first dealt with the collection of the training and test material for the set of 5 selected courses. This involved not only the recording of the audio-video signals, but also the collection of support text material for these courses (e.g. textbooks, problem sets, viewgraphs), and the manual annotation of a subset of the recorded data.

In the second task we used this training data to adapt the acoustic, lexical and language models of our large vocabulary continuous speech recognizer to the course domain, thus yielding a first transcription of the lecture contents. This notably involved building interpolated language models for the 5 courses and exploring unsupervised learning approaches for acoustic model adaptation. The latter required implementing confidence measures in our general-purpose recognition engine.
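As a rough illustration of the language-model interpolation mentioned above, the sketch below linearly mixes a general-domain unigram model with a course-specific one. The vocabulary, probabilities and interpolation weight are invented for the example and are not taken from the project's recognizer.

```python
# Minimal sketch of linear language-model interpolation (unigrams only):
#   P_mix(w) = lam * P_course(w) + (1 - lam) * P_general(w)
# All words, probabilities and the weight `lam` are illustrative.

def interpolate(p_general, p_course, lam=0.3):
    """Linearly interpolate two unigram distributions."""
    vocab = set(p_general) | set(p_course)
    return {w: lam * p_course.get(w, 0.0) + (1 - lam) * p_general.get(w, 0.0)
            for w in vocab}

# Toy distributions: the course model boosts domain terms such as "matrix".
p_general = {"the": 0.5, "dog": 0.4, "matrix": 0.1}
p_course = {"the": 0.4, "matrix": 0.5, "eigenvalue": 0.1}

p_mix = interpolate(p_general, p_course, lam=0.3)
# "matrix" now gets 0.3 * 0.5 + 0.7 * 0.1 = 0.22, up from 0.1 in the
# general model, while the result remains a valid probability distribution.
```

In a real recognizer the interpolation is done over n-gram models with back-off, and the weight is typically tuned to minimize perplexity on held-out course material.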

The third task aimed to "enrich" this first transcription with metadata that would render it more intelligible. Given the state of the art in metadata extraction and the comparatively low recognition rate for spontaneous speech relative to read speech, this task was the most challenging one. We proceeded in two directions: the study of disfluencies in European Portuguese, and the enrichment of the automatically produced transcription with punctuation and capitalization. Regarding disfluencies, particular attention was devoted to the analysis and modeling of filled pauses, recently complemented by a study of prolongations. The work on punctuation and capitalization started with a different type of corpus (broadcast news), mostly because that corpus was much larger at the start of this project, but also because it is useful to run the first experiments on read speech before proceeding to spontaneous speech, and the broadcast news corpus contains large quantities of both. We believe that producing a rich surface transcription is essential to make the recognition output intelligible for hearing-impaired students.
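To make the capitalization-recovery idea concrete, here is a toy truecasing sketch: each lowercase word from the recognizer output is replaced by the casing most often observed for it in a reference text. This simple lookup is only an illustration; the project's actual work used statistical models trained on the broadcast news corpus.

```python
# Toy truecasing: restore capitalization to lowercase ASR output by picking,
# for each word, its most frequent casing in a reference text.
# The example sentences are invented for illustration.
from collections import Counter, defaultdict

def build_case_model(reference_text):
    """Map each lowercased word to its most frequent surface form."""
    counts = defaultdict(Counter)
    for token in reference_text.split():
        counts[token.lower()][token] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def truecase(asr_output, case_model):
    """Recase each word; words never seen in the reference stay unchanged."""
    return " ".join(case_model.get(w, w) for w in asr_output.split())

model = build_case_model("A Ana estuda em Lisboa . Lisboa fica em Portugal .")
print(truecase("a ana mora em lisboa", model))  # A Ana mora em Lisboa
```

A frequency lookup like this already handles proper nouns well, but it cannot disambiguate words whose casing depends on context (e.g. sentence-initial position), which is why sequence models are used in practice.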

The fourth task dealt with integrating the recorded audio-video and the corresponding transcription with the other multimedia contents, so that a student may browse through the contents, seeing a viewgraph, the corresponding part of the textbook, and the audio-video with the corresponding lecture transcription as captions. This work was greatly facilitated by the cooperation with the IMMI (Intelligent Multimodal Interfaces) group of INESC-ID, led by Prof. Joaquim Jorge, who was also one of the volunteer teachers in our recordings. The web browsing interface built in their Virtual Curricula project is very well suited to the needs of the LECTRA project.

The final task was user evaluation. Due to the difficulty of arranging a panel of both normal-hearing and hearing-impaired students for off-line experiments, we decided to conduct an on-line recognition experiment. For this purpose, a course on Object Oriented Programming was recorded during the last semester. Besides recording the video of the course, the audio was fed in parallel into our recognizer. This experiment allowed us to identify the main problems that still affect the recognition of spontaneous speech, and of classroom lectures in particular.

Workplan

  • T1 - Data collection
  • T2 - Model adaptation
  • T3 - Metadata extraction
  • T4 - Integration of lecture transcription with other multimedia contents
  • T5 - User evaluation

Corpus

The LECTRA corpus includes audio, text and support materials for 5 courses: Production of Multimedia Contents, Economic Theory I, Linear Algebra, Introduction to Information and Communication Technologies, and Object Oriented Programming. We deliberately selected very different and challenging courses, in order to analyze the influence of several factors.

This corpus is extremely important for studying spontaneous speech phenomena in European Portuguese. The study of filled pauses and prolongations that was done in the scope of this project is just one of the first steps. We are currently studying the influence of speaker dependent effects on the form and distribution of disfluencies, and also the influence of different types of support materials (slides, board).

Publications

I. Trancoso, R. Nunes, L. Neves, C. Viana, H. Moniz, D. Caseiro, A. I. Mata, "Recognition of Classroom Lectures in European Portuguese", in Proc. INTERSPEECH 2006, Pittsburgh, September 2006.

I. Trancoso, R. Nunes, L. Neves, "Classroom Lecture Recognition", in Computational Processing of the Portuguese Language: 7th International Workshop, PROPOR 2006, Springer, pages 190-199, May 2006.

Demos

This demo shows the result of applying our Broadcast News recognizer after adaptation of the acoustic, lexical and language models to the course domain (Production of Multimedia Contents).