Sponsored by: FCT (POSC/PLP/58697/2004)
Start: March 2005
End: December 2007
The goal of the LECTRA project was the production of multimedia lecture contents for e-learning applications. This implies taking the recorded audio-visual signal and adding the automatically produced speech transcription as captions. The greatest research challenges are the adaptation of the recognition models to the very difficult domain of university lectures, the recognition of spontaneous speech, particularly with respect to disfluencies (filled pauses, repetitions, hesitations, false starts, etc.), and the enrichment of the automatic speech transcription with punctuation and capitalization. Producing automatic transcriptions of classroom lectures may be important for both e-learning and e-inclusion purposes.
Project Leader: Isabel Trancoso
This project was carried out in cooperation with IMMI (Intelligent Multimodal Interfaces), led by Prof. Joaquim Jorge.
The project encompassed 5 main tasks. The first dealt with the collection of training and test material for the set of 5 selected courses. This involved not only recording the audio-video signals, but also collecting support text material for these courses (e.g. textbooks, problem sets, viewgraphs) and manually annotating a subset of the recorded data.
In the second task we used this training data to adapt the acoustic, lexical and language models of our large-vocabulary continuous speech recognizer to the course domain, yielding a first transcription of the lecture contents. This involved, in particular, building interpolated language models for the 5 courses and exploring unsupervised learning approaches for acoustic model adaptation. The latter required implementing confidence measures in our general-purpose recognition engine.
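The idea behind interpolated language models can be illustrated with a minimal sketch: a course-specific model is mixed with a general background model through a single interpolation weight. This is only a toy stand-in with unigrams and invented data, not the project's actual recognizer setup, where higher-order n-gram models and tuned weights would be used.

```python
# Toy sketch of language model interpolation (illustrative only):
# P(w) = lam * P_course(w) + (1 - lam) * P_general(w)
from collections import Counter

def train_unigram(tokens):
    """Maximum-likelihood unigram probabilities from a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interpolate(p_course, p_general, lam):
    """Linearly mix two unigram models over the union of their vocabularies."""
    vocab = set(p_course) | set(p_general)
    return {w: lam * p_course.get(w, 0.0) + (1 - lam) * p_general.get(w, 0.0)
            for w in vocab}

# Invented example data: course-domain text vs. general text.
course = train_unigram("matrix vector matrix eigenvalue".split())
general = train_unigram("the news the weather matrix".split())
mixed = interpolate(course, general, lam=0.8)
```

A weight closer to 1 trusts the in-domain course material more; in practice the weight would be chosen to minimize perplexity on held-out lecture transcripts.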
The third task aimed to "enrich" this first transcription with metadata that would make it more intelligible. Given the state of the art in metadata extraction and the comparatively low recognition rate for spontaneous speech relative to read speech, this was the most challenging task. We proceeded in two directions: the study of disfluencies in European Portuguese, and the enrichment of the automatically produced transcription with punctuation and capitalization. Regarding disfluencies, particular attention was devoted to the analysis and modeling of filled pauses, recently complemented by a study of prolongations. The work on punctuation and capitalization started with a different type of corpus (broadcast news), mostly because of its much larger size at the start of the project, but also because it is useful to run the first experiments on read speech before proceeding to spontaneous speech, and this corpus contains large quantities of both. We believe that producing a rich surface transcription is essential to make the recognition output intelligible for hearing-impaired students.
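Capitalization recovery ("truecasing") of lowercase recognizer output can be sketched with a simple dictionary-based approach: learn the most frequent written casing of each word from capitalized text, then re-apply it. The project's actual systems were far more sophisticated (see the publications below), so this is purely an illustrative toy, with invented function names and data.

```python
# Toy dictionary-based truecasing sketch (illustrative only, not the
# project's actual capitalization system).
from collections import Counter, defaultdict

def learn_casing(sentences):
    """For each lowercase form, remember its most frequent written casing."""
    forms = defaultdict(Counter)
    for sent in sentences:
        # Skip sentence-initial words: their capitalization is positional,
        # not lexical.
        for word in sent.split()[1:]:
            forms[word.lower()][word] += 1
    return {w: c.most_common(1)[0][0] for w, c in forms.items()}

def recase(lowercased, casing):
    """Restore lexical casing, then capitalize the sentence start."""
    words = [casing.get(w, w) for w in lowercased.split()]
    if words:
        words[0] = words[0][0].upper() + words[0][1:]
    return " ".join(words)

# Invented training sentences standing in for a written-text corpus.
training = ["A aula é em Lisboa", "O professor de Lisboa chegou"]
model = learn_casing(training)
print(recase("o professor chegou a lisboa", model))
# → "O professor chegou a Lisboa"
```

Punctuation recovery is harder, since it depends on prosodic and syntactic cues rather than a per-word lookup, which is one reason this task was the most challenging of the project.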
The fourth task dealt with integrating the recorded audio-video and the corresponding transcription with the other multimedia contents, so that a student may browse through the contents, viewing a viewgraph, the corresponding part of the textbook, and the audio-video with the corresponding lecture transcription as captions. This work was greatly facilitated by the cooperation with the IMMI (Intelligent Multimodal Interfaces) group of INESC-ID, led by Prof. Joaquim Jorge, who was also one of the volunteer lecturers in our recordings. The web browsing interface built in their Virtual Curricula project is very well suited to the needs of the LECTRA project.
The final task was user evaluation. Given the difficulty of assembling a panel of both normal-hearing and hearing-impaired students for off-line experiments, we decided to conduct an on-line recognition experiment. For this purpose, a course on Object Oriented Programming was recorded during the last semester. In addition to recording the video of the course, the audio was fed in parallel into our recognizer. This experiment allowed us to identify the main problems that still affect the recognition of spontaneous speech, and of classroom lectures in particular.
The LECTRA corpus includes audio, text and support materials for 5 courses: Production of Multimedia Contents, Economic Theory I, Linear Algebra, Introduction to Information and Communication Technologies, and Object Oriented Programming. We deliberately selected very different and challenging courses in order to analyze the influence of several factors.
This corpus is extremely important for studying spontaneous speech phenomena in European Portuguese. The study of filled pauses and prolongations that was done in the scope of this project is just one of the first steps.
Isabel Trancoso, Ricardo Nunes, Luís Neves, C. Viana, H. Moniz, D. Caseiro, A. Isabel Mata, Recognition of Classroom Lectures in European Portuguese, In Proc. INTERSPEECH 2006, Pittsburgh, September 2006
Isabel Trancoso, Ricardo Nunes, Luís Neves, Classroom Lecture Recognition, In Computational Processing of the Portuguese Language: 7th International Workshop, PROPOR 2006, Springer, pages 190-199, May 2006
Rui Pedro Batoreo Amaral, Hugo Meinedo, Diamantino António Caseiro, Isabel Trancoso, João Paulo da Silva Neto, A Prototype System for Selective Dissemination of Broadcast News in European Portuguese, EURASIP Journal on Advances in Signal Processing, Hindawi Publishing Corporation, vol. 2007, n. 37507, May 2007
Fernando Batista, Nuno J. Mamede, Diamantino António Caseiro, Isabel Trancoso, A Lightweight on-the-fly Capitalization System for Automatic Speech Recognition, In Recent Advances in Natural Language Processing, vol. 1, September 2007
Ciro Alexandre Domingues Martins, António Teixeira, João Paulo da Silva Neto, Dynamic Language Modeling for a Daily Broadcast News Transcription System, In ASRU 2007, December 2007
This demo shows the result of applying our Broadcast News recognizer after adapting the acoustic, lexical and language models to the course domain (Production of Multimedia Contents).