Funded by: EC Sixth Framework Programme (FP6)
Start date: 01 February 2007
Duration: 36 months
The VIDI-Video project takes on the challenge of substantially enhancing semantic access to video, implemented in a search engine. The engine will boost the performance of video search by building a thesaurus of 1,000 detectors for instances of audio, visual, or mixed-media content.
The project will apply machine learning techniques to learn many different detectors from examples, using active one-class classifiers to minimize the need for annotated examples. The approach is to let the system learn many, possibly weaker, detectors that describe different aspects of the video content, rather than carefully modeling a few of them. The combination of many detectors provides a much richer basis for the semantics. Integrating audio and video analysis is essential for many types of search concepts.
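
As a rough, hypothetical sketch of this detector-bank idea (not the project's actual pipeline), the Python code below trains one one-class classifier per concept from positive examples only, ranks unlabeled shots by uncertainty so that a human annotates only the most informative ones, and scores new shots against every detector in the bank. The feature dimensionality, concept names, random data, and the choice of scikit-learn's OneClassSVM are all illustrative assumptions.

# Illustrative sketch only: one one-class detector per concept, trained from
# positive feature vectors, then combined into a bank that scores unseen shots.
# Feature size, concept names, and the use of scikit-learn's OneClassSVM are
# assumptions for illustration, not the VIDI-Video implementation.
import numpy as np
from sklearn.svm import OneClassSVM

RNG = np.random.default_rng(0)
FEATURE_DIM = 128  # hypothetical audio-visual feature dimensionality

def train_detector(positive_features: np.ndarray) -> OneClassSVM:
    """Fit a one-class classifier from positive examples only."""
    detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
    detector.fit(positive_features)
    return detector

def select_for_annotation(detector: OneClassSVM, unlabeled: np.ndarray, k: int = 5) -> np.ndarray:
    """Active-learning step: pick the k shots closest to the decision
    boundary, i.e. the ones where a human annotation helps most."""
    margins = np.abs(detector.decision_function(unlabeled))
    return np.argsort(margins)[:k]

# Hypothetical thesaurus: one detector per semantic concept, trained on
# randomly generated stand-in features.
concepts = ["airplane", "crowd", "speech", "explosion"]
detector_bank = {
    concept: train_detector(RNG.normal(size=(20, FEATURE_DIM)))
    for concept in concepts
}

def score_shot(features: np.ndarray) -> dict:
    """Score a single shot against every detector in the bank; combining
    many (possibly weak) detector scores gives a richer semantic
    description than a few carefully modeled concepts."""
    return {
        concept: float(det.decision_function(features.reshape(1, -1))[0])
        for concept, det in detector_bank.items()
    }

shot = RNG.normal(size=FEATURE_DIM)
print(score_shot(shot))

In this sketch the per-concept scores would feed the search engine's ranking; the active-learning selection keeps the annotation effort per concept small, which is what makes a thesaurus of this size feasible.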