LECTRA (Rich Transcription of Lectures for E-Learning Applications)


Revision as of 17:26, 10 July 2007 by Imt

(Screenshots: Lectra pcm.jpg, Lectra tei.jpg, Lectra algebra.jpg)


Producing automatic transcriptions of classroom lectures may be important for both e-learning and e-inclusion purposes.

The greatest research challenge is the recognition of spontaneous speech, whose error rate is much higher than for read speech. Even human-produced transcriptions would be very difficult to read, given the absence of punctuation and the presence of disfluencies (filled pauses, repetitions, hesitations, false starts, etc.). Hence, one has to enrich the speech transcription by adding information about sentence boundaries and speech disfluencies.
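As a rough illustration of what such enrichment involves, the sketch below marks filled pauses and immediate word repetitions in a raw, punctuation-free token stream. The filler inventory and the tag names are hypothetical, not the project's actual annotation conventions.

```python
# Hypothetical filler inventory (the actual LECTRA annotation conventions
# for European Portuguese filled pauses may differ).
FILLERS = {"aa", "mm", "hum", "eh"}

def enrich(tokens):
    """Mark filled pauses and immediate word repetitions in a raw
    token stream; the tag names are illustrative only."""
    out, prev = [], None
    for tok in tokens:
        low = tok.lower()
        if low in FILLERS:
            out.append(f"<filler>{tok}</filler>")
        elif low == prev:
            out.append(f"<rep>{tok}</rep>")
        else:
            out.append(tok)
        prev = low
    return out

print(" ".join(enrich("so aa the the model adapts".split())))
# -> so <filler>aa</filler> the <rep>the</rep> model adapts
```

A real system would of course use prosodic and language-model cues rather than simple string matching, but the output format gives an idea of the target annotation.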

Sponsored by: FCT (POSC/PLP/58697/2004)
Start: March 2005
End: September 2007


Project Leader: Isabel Trancoso

Undergraduate Students:

This project is done with the cooperation of IMMI (Intelligent MultiModal Interfaces), led by Prof. Joaquim Jorge.


The goal of this project is the production of multimedia lecture contents for e-learning applications. As a pilot study, we shall take a course whose didactic material (e.g. textbook, problems, viewgraphs) is already available electronically and in Portuguese, an increasingly frequent situation, namely in technical courses. Our contribution to these contents will be to add, for each lecture in the course, the recorded video signal and the synchronized lecture transcription. We believe this synchronized transcription may be especially important for hearing-impaired students.

The project encompasses 5 main tasks. In the first, we shall collect the training and test material (both recorded audio-video signals and textual data) related to this course. In the second, we shall use this training data to adapt the acoustic, lexical and language models of our large vocabulary continuous speech recognizer to the course domain, yielding a first transcription of the lecture contents. The goal of the third task is to "enrich" this transcription with metadata that renders it more intelligible; given the state of the art in metadata extraction and the comparatively low recognition rate for spontaneous speech relative to read speech, this is the task where the main research challenge resides. The fourth task deals with integrating the recorded audio-video and the corresponding transcription with the other multimedia contents and synchronizing them by topic, so that a student may browse through the contents, seeing a viewgraph, the corresponding part of the textbook, and the audio-video with the corresponding lecture transcription as caption. The final task is user evaluation, for which we intend to use a panel of both normal-hearing and hearing-impaired students. For the latter, we shall evaluate two types of lecture transcription: with and without manual correction. This latter evaluation will give us an indication of how close automatic lecture transcription is to being usable in real time in a classroom.


  • T1 - Data collection
  • T2 - Model adaptation
  • T3 - Spontaneous speech recognition
  • T4 - Integration of lecture transcription with other multimedia contents
  • T5 - User evaluation
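The synchronization in T4 ultimately amounts to emitting time-aligned captions. Assuming segment-level timestamps from the recognizer (the timestamps and texts below are made up), a minimal sketch that writes them out in the common SubRip (SRT) caption format:

```python
def srt_time(seconds):
    """Format a time in seconds as an SRT timestamp HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments):
    """Render (start_sec, end_sec, text) segments as SRT caption blocks."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n")
    return "\n".join(blocks)

print(to_srt([(0.0, 2.5, "Bom dia."),
              (2.5, 6.0, "Hoje falamos de conteúdos multimédia.")]))
```

SRT is only one possible container; the same timestamps could equally drive SMIL-style synchronization of viewgraphs, textbook sections and video.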


The LECTRA corpus has been recorded by GAEL (Gabinete de Apoio à Criação de Conteúdos Multimédia e e-Learning, IST; the IST office supporting the creation of multimedia and e-learning contents).

Two very different courses have been selected for our pilot study: one entitled "Economic Theory I" (ETI) and another one entitled "Production of Multimedia Contents" (PMC). The ETI course (17 classes) and the first 6 classes of the PMC course were recorded with a lapel microphone. The last part of the PMC course (14 classes) was recorded with a head-mounted microphone.

The two recording setups presented specific problems. The lapel microphone proved inadequate for this type of recording: the teacher very frequently turned his head (towards the screen or the whiteboard), causing very audible intensity fluctuations. The head-mounted microphone clearly improved the audio quality. However, 11% of the recordings were saturated: the recording level was raised during the students' questions, saturating the segments recorded immediately after them.
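A crude way to quantify such saturation, assuming mono 16-bit PCM WAV files, is to count samples at or near digital full scale. The 99% threshold below is an arbitrary choice for illustration, not the criterion actually used in the project:

```python
import array
import wave

def saturation_ratio(path, threshold=0.99):
    """Return the fraction of samples at or above `threshold` of 16-bit
    full scale in a mono 16-bit PCM WAV file (a crude clipping proxy)."""
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2 and w.getnchannels() == 1
        samples = array.array("h", w.readframes(w.getnframes()))
    if not samples:
        return 0.0
    limit = threshold * 32767
    return sum(1 for s in samples if abs(s) >= limit) / len(samples)
```

A check like this, run per segment, could flag for re-recording or exclusion the saturated stretches that follow students' questions.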

The classes had variable duration, ranging from 40 to 90 minutes. Both professors were male speakers with a Lisbon accent. Segments from students were not transcribed, as most were not intelligible enough, given their distance from the microphone.

The manual transcription of this pilot corpus is in progress, using the Transcriber tool. Currently, 5 classes from each course have been transcribed.
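Transcriber stores its annotations as XML (.trs files), with the transcribed words held in the mixed content of <Turn> elements, interleaved with <Sync time="..."/> markers. A minimal sketch for pulling out per-turn speaker, timing and text (a simplified view of the format; real files also carry speaker tables, sections and event tags):

```python
import xml.etree.ElementTree as ET

def read_turns(trs_xml):
    """Extract (speaker, start, end, text) tuples from Transcriber XML.
    Words sit in the mixed content of <Turn>: in `turn.text` and in the
    `tail` of child markers such as <Sync time="..."/>."""
    turns = []
    for turn in ET.fromstring(trs_xml).iter("Turn"):
        parts = [turn.text or ""] + [child.tail or "" for child in turn]
        text = " ".join(" ".join(parts).split())
        turns.append((turn.get("speaker"),
                      float(turn.get("startTime")),
                      float(turn.get("endTime")),
                      text))
    return turns
```

For example, a turn such as `<Turn speaker="spk1" startTime="0" endTime="5"><Sync time="0"/> bom dia <Sync time="2.5"/> vamos continuar</Turn>` yields ("spk1", 0.0, 5.0, "bom dia vamos continuar").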


Isabel Trancoso, Ricardo Nunes, Luís Neves, C. Viana, H. Moniz, D. Caseiro, A. Isabel Mata, Recognition of Classroom Lectures in European Portuguese, In Proc. INTERSPEECH 2006, Pittsburgh, September 2006

Isabel Trancoso, Ricardo Nunes, Luís Neves, Classroom Lecture Recognition, In Computational Processing of the Portuguese Language: 7th International Workshop, PROPOR 2006, Springer, pages 190 - 199, May 2006


This demo shows the result of the application of our Broadcast News recognizer after adaptation of the acoustic, lexical and language models to the course domain (Production of Multimedia Contents).