Funded by: EC Sixth Framework Programme (FP6)
Start date: 1 February 2007
Duration: 36 months
The VIDI-Video project takes on the challenge of creating substantially enhanced semantic access to video, implemented in a search engine. The engine will boost the performance of video search by building a thesaurus of 1,000 detectors that identify instances of audio, visual, or mixed-media concepts.
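As a rough illustration of how a detector thesaurus could drive search, the sketch below ranks videos by summed detector confidences for the queried concepts. All names (`IndexedVideo`, `search`, the concepts) are hypothetical, not part of the project's actual system.

```python
from dataclasses import dataclass, field

@dataclass
class IndexedVideo:
    """A video indexed offline by per-concept detector scores (hypothetical)."""
    video_id: str
    concept_scores: dict[str, float] = field(default_factory=dict)  # concept -> confidence in [0, 1]

def search(videos, query_concepts):
    """Rank videos by the sum of detector confidences for the queried concepts."""
    def score(v):
        return sum(v.concept_scores.get(c, 0.0) for c in query_concepts)
    return sorted(videos, key=score, reverse=True)

# Usage: two toy clips and a single-concept query.
videos = [
    IndexedVideo("clip_a", {"crowd": 0.9, "speech": 0.2}),
    IndexedVideo("clip_b", {"crowd": 0.1, "speech": 0.8}),
]
ranked = search(videos, ["crowd"])
print([v.video_id for v in ranked])  # clip_a ranks first for "crowd"
```

With detector scores precomputed at indexing time, queries over a 1,000-concept vocabulary reduce to cheap lookups and sorting.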
UvA - Universiteit van Amsterdam, the Netherlands (coordinator)
UNIS - University of Surrey, UK
UNIFI – Università degli Studi di Firenze, Italy
INESC-ID – Instituto de Engenharia de Sistemas e Computadores Investigação e Desenvolvimento em Lisboa, Portugal
CERTH – Centre for Research and Technology Hellas, Greece
CVC – Centre de Visió per Computador, Spain
B&G – Stichting Netherlands Instituut voor Beeld & Geluid, the Netherlands
FRD - Fondazione Rinascimento Digitale, Italy
Subcontracting
UoM - University of Modena and Reggio Emilia, Italy
IIT – Indian Institute of Technology, India
João Paulo Neto
The project will apply machine learning techniques to learn many different detectors from examples, using active one-class classifiers to minimize the need for annotated examples. Rather than modeling a few detectors carefully, the approach is to let the system learn many, possibly weaker, detectors describing different aspects of the video content. Combining many detectors yields a much richer basis for the semantics. For many types of search concept, integrating audio and video analysis is essential.
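The many-weak-detectors idea can be sketched as follows: each weak detector learns only from positive examples (a crude stand-in for one-class classification), and the combined score is the average vote of all of them. Everything here, from the single-feature threshold rule to the averaging, is an illustrative assumption, not the project's actual learning method.

```python
import random

class WeakDetector:
    """Fires when one feature exceeds a threshold learned from positives only."""
    def __init__(self, feature_index, positives):
        self.i = feature_index
        # Threshold: minimum value of this feature over the positive examples.
        self.threshold = min(x[feature_index] for x in positives)

    def score(self, x):
        return 1.0 if x[self.i] >= self.threshold else 0.0

def combined_score(detectors, x):
    """Average vote of many weak detectors: a richer basis than any single one."""
    return sum(d.score(x) for d in detectors) / len(detectors)

# Usage: positive examples cluster near 1.0 on every feature.
random.seed(0)
positives = [[random.uniform(0.7, 1.0) for _ in range(5)] for _ in range(20)]
detectors = [WeakDetector(i, positives) for i in range(5)]

print(combined_score(detectors, [0.9] * 5))  # high: resembles the positives
print(combined_score(detectors, [0.1] * 5))  # low: far from the positives
```

Each individual detector is easily fooled, but averaging many of them produces a usable confidence signal without any annotated negative examples.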