LECTRA (Rich Transcription of Lectures for E-Learning Applications)

From HLT@INESC-ID

Revision as of 16:15, 30 January 2008 by Imt

[Images: Lectra pcm.jpg, Lectra tei.jpg, Lectra algebra.jpg]

OPEN RESEARCH POSITION - DEADLINE JULY 30 2007

Sponsored by: FCT (POSC/PLP/58697/2004)
Start: March 2005
End: December 2007

Goals

The goal of the LECTRA project was the production of multimedia lecture contents for e-learning applications. This implies taking the recorded audio-visual signal and adding the automatically produced speech transcription as captions. The greatest research challenges are the adaptation of the recognition models to the very difficult domain of university lectures, the recognition of spontaneous speech, particularly with regard to disfluencies (filled pauses, repetitions, hesitations, false starts, etc.), and the enrichment of the automatic speech transcription with punctuation and capitalization. Producing automatic transcriptions of classroom lectures may be important for both e-learning and e-inclusion purposes.


Team

Project Leader: Isabel Trancoso

This project is done with the cooperation of IMMI (Intelligent MultiModal Interfaces), led by Prof. Joaquim Jorge.

Summary

The goal of the LECTRA project was the production of multimedia lecture contents for e-learning applications.

The project encompassed 5 main tasks. The first one dealt with the collection of the training and test material for the set of 5 selected courses. This involved not only the recording of the audio-video signals, but also the collection of supporting text material for these courses (e.g. textbooks, problem sets, viewgraphs), and the manual annotation of a subset of the recorded data.

In the second task we used this training data to adapt the acoustic, lexical and language models of our large vocabulary continuous speech recognizer to the course domain, thus yielding a first transcription of the lecture contents. This involved, in particular, building interpolated language models for the 5 courses and exploring unsupervised learning approaches for acoustic model adaptation. The latter required the implementation of confidence measures in our general purpose recognition engine.
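
The language-model interpolation mentioned above can be sketched as a linear mixture of per-domain word distributions. The models, vocabulary and weights below are toy illustrations, not the project's actual models (which were word n-gram models estimated on the course material and background text):

```python
def interpolate_lms(lms, weights):
    """Linearly interpolate word probabilities from several language models.

    `lms` is a list of dicts mapping a word to its probability under one
    model; `weights` are the interpolation coefficients (summing to 1).
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    vocab = set().union(*(lm.keys() for lm in lms))
    return {w: sum(wt * lm.get(w, 0.0) for wt, lm in zip(weights, lms))
            for w in vocab}

# Toy example: a general-domain model mixed with a course-domain model.
general = {"the": 0.5, "lecture": 0.1, "matrix": 0.4}
course = {"the": 0.4, "lecture": 0.5, "matrix": 0.1}
mixed = interpolate_lms([general, course], [0.3, 0.7])
```

In practice the weights would be chosen to minimize perplexity on held-out course transcripts, and the mixture would be applied to full n-gram models rather than unigrams.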

The third task aimed to "enrich" this first transcription with metadata that would render it more intelligible. Given the state of the art in metadata extraction and the comparatively low recognition rate for spontaneous speech relative to read speech, this was the most challenging task. We proceeded in two directions: the study of disfluencies in European Portuguese, and the enrichment of the automatically produced transcription with punctuation and capitalization. Regarding disfluencies, particular attention was devoted to the analysis and modeling of filled pauses, recently complemented by a study of prolongations. The work on punctuation and capitalization started with a different type of corpus (broadcast news), mostly because of its much larger size at the start of this project, but also because it is useful to run the first experiments on read speech before proceeding to spontaneous speech, and this corpus has large quantities of both. We believe that producing a rich surface transcription is essential to make the recognition output intelligible for hearing-impaired students.
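
A minimal form of the capitalization enrichment described above can be sketched as lexicon-based truecasing: restore cased forms seen in training text and capitalize sentence starts. This is only an illustrative simplification (the lexicon and sentence below are invented), not the project's actual method:

```python
def truecase(tokens, lexicon):
    """Restore capitalization of a lowercased ASR output.

    Looks each token up in a lexicon of cased forms, and also capitalizes
    the token that follows sentence-final punctuation.
    """
    out, sentence_start = [], True
    for tok in tokens:
        cased = lexicon.get(tok, tok)
        if sentence_start and cased[0].islower():
            cased = cased[0].upper() + cased[1:]
        out.append(cased)
        sentence_start = tok in {".", "?", "!"}
    return out

# Hypothetical lexicon built from the course's support texts.
lexicon = {"lisbon": "Lisbon", "portuguese": "Portuguese"}
result = truecase("the course is taught in lisbon .".split(), lexicon)
```

A statistical approach would instead score alternative cased forms with a language model, which handles words that are ambiguous between common and proper noun readings.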

The fourth task dealt with integrating the recorded audio-video and corresponding transcription with the other multimedia contents, so that a student may browse through the contents, seeing a viewgraph, the corresponding part of the textbook, and the audio-video with the corresponding lecture transcription as captions. This work was greatly facilitated by the cooperation with the IMMI (Intelligent Multimodal Interfaces) group of INESC-ID, led by Prof. Joaquim Jorge, who was also one of the volunteer teachers in our recordings. The web browsing interface built in their Virtual Curricula project is very well suited to the needs of the LECTRA project.
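
Turning a time-aligned transcription into video captions, as this task requires, amounts to serializing (start, end, text) segments in a caption format. The sketch below emits SubRip (SRT), a common choice; the segment texts are invented and the project's actual integration format may differ:

```python
def to_srt(segments):
    """Format (start_sec, end_sec, text) transcription segments as SRT
    captions, so a video player can overlay them on the lecture."""
    def ts(t):
        # SRT timestamps look like HH:MM:SS,mmm
        h, rem = divmod(int(t), 3600)
        m, s = divmod(rem, 60)
        ms = int(round((t - int(t)) * 1000))
        return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"
    lines = []
    for i, (start, end, text) in enumerate(segments, 1):
        lines += [str(i), f"{ts(start)} --> {ts(end)}", text, ""]
    return "\n".join(lines)

srt = to_srt([(0.0, 2.5, "Good morning."),
              (2.5, 6.0, "Today we study recursion.")])
```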

The final task was user evaluation. Given the difficulty of assembling a panel of both normal-hearing and hearing-impaired students for off-line experiments, we decided to conduct an on-line recognition experiment instead. For this purpose, a course on Object Oriented Programming was recorded during the last semester. Besides being recorded on video, the audio was fed in parallel into our recognizer. This experiment allowed us to identify the main problems that still affect the recognition of spontaneous speech, and of classroom lectures in particular.

Workplan

  • T1 - Data collection
  • T2 - Model adaptation
  • T3 - Metadata extraction
  • T4 - Integration of lecture transcription with other multimedia contents
  • T5 - User evaluation

Corpus

The LECTRA corpus has been recorded by GAEL (Gabinete de Apoio à Criação de Conteúdos Multimédia e e-Learning, IST).

Two very different courses have been selected for our pilot study: one entitled "Economic Theory I" (ETI) and another one entitled "Production of Multimedia Contents" (PMC). The ETI course (17 classes) and the first 6 classes of the PMC course were recorded with a lapel microphone. The last part of the PMC course (14 classes) was recorded with a head-mounted microphone.

The two recording types presented specific problems. The lapel microphone proved inadequate for this type of recording: the teacher frequently turned his head towards the screen or the whiteboard, causing very audible intensity fluctuations. The head-mounted microphone clearly improved the audio quality. However, 11% of those recordings were saturated, because the recording level was raised during the students' questions and the segments recorded right after them clipped.
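
Saturation of the kind reported above can be screened for automatically by counting samples pinned at full scale. The function below is an illustrative detector for 16-bit PCM buffers, not the tooling actually used in the project; the threshold margin and the toy buffer are assumptions:

```python
def saturated_fraction(samples, full_scale=32767, margin=0.999):
    """Fraction of samples at (or extremely near) full scale, a rough
    indicator of clipping in 16-bit PCM recordings."""
    thresh = full_scale * margin
    clipped = sum(1 for s in samples if abs(s) >= thresh)
    return clipped / len(samples)

# Toy buffer: 2 of the 8 samples are pinned at full scale.
buf = [0, 1200, 32767, -32768, 900, -400, 15000, -700]
fraction = saturated_fraction(buf)
```

Running this over fixed-length windows rather than whole files would localize the clipped segments, e.g. those right after a student's question.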

The classes had variable duration, ranging from 40 to 90 minutes. Both professors were male speakers with a Lisbon accent. Segments from students were not transcribed, as most were not intelligible enough, due to their distance from the microphone.

The manual transcription of this pilot corpus is in progress, using the Transcriber tool. Currently, 5 classes from each course have been transcribed.

Publications

Isabel Trancoso, Ricardo Nunes, Luís Neves, C. Viana, H. Moniz, D. Caseiro, A. Isabel Mata, Recognition of Classroom Lectures in European Portuguese, In Proc. INTERSPEECH 2006, Pittsburgh, September 2006

Isabel Trancoso, Ricardo Nunes, Luís Neves, Classroom Lecture Recognition, In Computational Processing of the Portuguese Language: 7th International Workshop, PROPOR 2006, Springer, pages 190-199, May 2006

Demos

This demo shows the result of the application of our Broadcast News recognizer after adaptation of the acoustic, lexical and language models to the course domain (Production of Multimedia Contents).