The thesis addresses the problem of structuring audio data in terms of speakers, i.e., finding the regions in an audio stream that belong to a single speaker and grouping together all regions of the same speaker. The task of organizing audio data in this way is known as speaker diarization and was first introduced in the "Who spoke when" evaluations of the NIST Rich Transcription project. The speaker-diarization problem is composed of several tasks. This thesis addresses three of them: speech/non-speech segmentation, speaker- and background-change detection, and speaker clustering.
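As an illustration of what a diarization output looks like, the following toy sketch groups time-labeled segments by speaker, merging adjacent regions of the same speaker and discarding non-speech. The segment boundaries and speaker labels are hypothetical examples, not data from the thesis.

```python
def group_by_speaker(segments):
    """Join regions of the same speaker together.

    segments: list of (start_sec, end_sec, label) tuples in time order,
    where label is a speaker id or "non-speech".
    Returns a dict mapping each speaker to a list of (start, end) regions,
    merging adjacent regions that share the same label.
    """
    regions = {}
    prev_label = None
    for start, end, label in segments:
        if label == "non-speech":
            prev_label = None  # a pause breaks the current speaker region
            continue
        if label == prev_label and regions[label][-1][1] == start:
            # extend the previous region of the same speaker
            last_start, _ = regions[label][-1]
            regions[label][-1] = (last_start, end)
        else:
            regions.setdefault(label, []).append((start, end))
        prev_label = label
    return regions

# Hypothetical "who spoke when" labels for a short broadcast clip
segs = [(0.0, 4.2, "spk1"), (4.2, 6.0, "spk1"),
        (6.0, 7.5, "non-speech"), (7.5, 12.0, "spk2"),
        (12.0, 15.0, "spk1")]
print(group_by_speaker(segs))
# → {'spk1': [(0.0, 6.0), (12.0, 15.0)], 'spk2': [(7.5, 12.0)]}
```

In a real system the labels would of course not be given; producing them is precisely the job of the segmentation, change-detection, and clustering components discussed in the thesis.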
The main objectives of our research were to develop new representations of audio data better suited to each task and to improve the accuracy and robustness of standard approaches under a variety of acoustic and environmental conditions. The motivation for improving existing methods and developing new procedures for speaker-diarization tasks is the design of a system for speaker-based audio indexing of broadcast-news shows.