Verbal description of LEGO blocks
Date
- 15:30, Friday, September 12th, 2014
Location
- Room 336
Speaker
Abstract
Query specification for 3D object retrieval still relies on traditional interaction paradigms. The goal of our study was to identify the most natural methods to describe 3D objects, focusing on verbal and gestural expressions. Our case study uses LEGO blocks.
We started by collecting a corpus involving ten pairs of subjects, in which one participant requests from the other the blocks needed to build a model. This small corpus suggests that users prefer to describe 3D objects verbally, rarely resorting to gestures and using them only to complement speech.
The paper describes this corpus and addresses the challenges that such verbal descriptions pose for a speech understanding system, namely long, complex descriptions involving dimensions, shapes, colors, metaphors, and diminutives. The latter connote small size, endearment, or insignificance, and are common mainly in informal language. In this corpus, they occurred in one out of seven requests.
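To make the understanding challenge concrete, here is a minimal sketch, not the authors' system, of how a transcribed request might be mapped to structured block attributes; the vocabulary lists, patterns, and class names are illustrative assumptions, and metaphors or diminutives like those found in the corpus would fall outside such simple pattern matching.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical structured representation of a block request.
@dataclass
class BlockQuery:
    color: Optional[str] = None
    width: Optional[int] = None
    length: Optional[int] = None
    shape: Optional[str] = None

# Toy vocabularies; a real system would need far richer coverage.
COLORS = {"red", "blue", "yellow", "green", "white", "black"}
SHAPES = {"brick", "plate", "slope", "tile"}

def parse_request(utterance: str) -> BlockQuery:
    """Map a transcribed verbal request to a few query attributes.

    Handles only simple patterns such as 'a red 2 by 4 brick'.
    """
    query = BlockQuery()
    for token in utterance.lower().split():
        if token in COLORS:
            query.color = token
        elif token in SHAPES:
            query.shape = token
    # Dimensions expressed as '2x4', '2 x 4', or '2 by 4'.
    match = re.search(r"(\d+)\s*(?:x|by)\s*(\d+)", utterance.lower())
    if match:
        query.width, query.length = int(match.group(1)), int(match.group(2))
    return query

print(parse_request("I need a red 2 by 4 brick"))
# BlockQuery(color='red', width=2, length=4, shape='brick')
```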
This experiment was the first step in the development of a prototype for searching LEGO blocks that combines speech and stereoscopic 3D. Although the verbal interaction in this first version is limited to relatively simple queries, its combination with immersive visualization lets the user explore query results in a dataset of virtual blocks.
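For illustration only, the sketch below shows how such simple queries could filter a toy catalogue of virtual blocks, assuming a spoken request has already been reduced to attribute constraints; the catalogue entries and field names are invented and do not describe the prototype's actual dataset or ranking.

```python
# Toy catalogue of virtual blocks (IDs and attributes are made up).
BLOCKS = [
    {"id": "b1", "color": "red", "width": 2, "length": 4, "shape": "brick"},
    {"id": "b2", "color": "red", "width": 2, "length": 4, "shape": "plate"},
    {"id": "b3", "color": "blue", "width": 2, "length": 2, "shape": "brick"},
]

def search(query: dict, blocks=BLOCKS) -> list:
    """Return blocks matching every constraint the user actually specified;
    attributes the user did not mention are left unconstrained."""
    return [b for b in blocks
            if all(b.get(attr) == value for attr, value in query.items())]

# A request such as "a red two-by-four plate", once understood,
# could yield the constraints below.
print(search({"color": "red", "width": 2, "length": 4, "shape": "plate"}))
# [{'id': 'b2', 'color': 'red', 'width': 2, 'length': 4, 'shape': 'plate'}]
```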