Vidi-Video (Interactive semantic video search with a large thesaurus of machine learned audio-visual concepts)

From HLT@INESC-ID

Revision as of 12:35, 9 February 2014 by Imt


Funded by: EC VI Framework programme
Start date: 01 February 2007
Duration: 36 months

Summary

The VIDI-Video project takes on the challenge of creating substantially enhanced semantic access to video, implemented in a search engine. The engine will boost the performance of video search by building a 1,000-element thesaurus of detectors for instances of audio, visual, or mixed-media content.
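To make the idea concrete, a search engine over such a thesaurus can rank video shots by the scores the concept detectors assign to them. The following is a minimal sketch of that ranking step; the shot identifiers, concept names, and scores are illustrative placeholders, not project data.

```python
# Precomputed detector scores per shot: concept name -> confidence in [0, 1].
# (Illustrative values only; a real system would hold ~1000 concepts per shot.)
shot_scores = {
    "shot_001": {"airplane": 0.91, "speech": 0.10},
    "shot_002": {"airplane": 0.12, "speech": 0.88},
    "shot_003": {"airplane": 0.55, "speech": 0.47},
}

def search(concept, index, top_k=2):
    """Return the top_k shot ids, ranked by the given concept's score."""
    ranked = sorted(index, key=lambda s: index[s].get(concept, 0.0), reverse=True)
    return ranked[:top_k]

results = search("airplane", shot_scores)  # -> ["shot_001", "shot_003"]
```

A query for a concept absent from the thesaurus simply scores every shot at zero, which is why the project favors covering many concepts with many (possibly weak) detectors.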

Partners

  • UvA - Universiteit van Amsterdam, the Netherlands (coordinator)
  • UNIS - University of Surrey, UK
  • UNIFI – Università degli Studi di Firenze, Italy
  • INESC-ID – Instituto de Engenharia de Sistemas e Computadores Investigação e Desenvolvimento em Lisboa, Portugal
  • CERTH – Centre for Research and Technology Hellas, Greece
  • CVC – Centro de Vision por Computador, Spain
  • B&G – Stichting Nederlands Instituut voor Beeld & Geluid, the Netherlands
  • FRD - Fondazione Rinascimento Digitale, Italy

Subcontracting

  • UoM - University of Modena and Reggio Emilia, Italy
  • IIT – Indian Institute of Technology, India

INESC-ID main researchers

Description

The project will apply machine learning techniques to learn many different detectors from examples, using active one-class classifiers to minimize the need for annotation. The project approach is to let the system learn many, possibly weaker, detectors describing different aspects of the video content, instead of carefully modeling a few. Combining many detectors renders a much richer basis for the semantics. The integration of audio and video analysis is essential for many types of search concepts.
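The one-class idea above can be sketched in a few lines: each concept detector is fit from positive examples only, with no annotated negatives. The toy model below (a centroid plus a covering radius, with a distance-based score) and the concept names and feature vectors are illustrative assumptions, not the project's actual classifiers.

```python
import math

def train_one_class(examples):
    """Fit a minimal one-class model from positive examples only:
    the centroid of the feature vectors plus a radius covering them."""
    dim = len(examples[0])
    centroid = [sum(v[i] for v in examples) / len(examples) for i in range(dim)]
    radius = max(math.dist(v, centroid) for v in examples) or 1.0
    return centroid, radius

def detector_score(model, features):
    """Score in (0, 1]: 1 at the centroid, decaying with distance."""
    centroid, radius = model
    return 1.0 / (1.0 + math.dist(features, centroid) / radius)

# Two toy concept detectors, each trained from a handful of positives.
outdoor = train_one_class([[0.9, 0.1], [0.8, 0.2]])
speech = train_one_class([[0.1, 0.9], [0.2, 0.8]])

shot = [0.85, 0.15]  # feature vector extracted from one video shot
scores = {"outdoor": detector_score(outdoor, shot),
          "speech": detector_score(speech, shot)}
best = max(scores, key=scores.get)  # -> "outdoor"
```

Running many such weak detectors over each shot and keeping all their scores is what turns the thesaurus into a rich semantic index, even when no single detector is very accurate.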

Application scenarios

  • Broadcast news
  • Cultural heritage
  • Surveillance

See Also