COST Action 2102 - Cross-Modal Analysis of Verbal and Non-verbal Communication

The main objective of this COST Action is to develop an advanced acoustical, perceptual, and psychological analysis of verbal and non-verbal communication signals originating in spontaneous face-to-face interaction, in order to identify algorithms and automatic procedures capable of recognizing human emotional states. Several key aspects will be considered, such as integrating the developed algorithms and procedures into telecommunication applications and into the recognition of emotional states, gestures, speech, and facial expressions, in anticipation of intelligent avatars and interactive dialogue systems that could improve user access to future telecommunication services.