Multimodal Affective Interaction

The communication modalities we study mainly concern visual (carried by the face and body), tactile, haptic, verbal, auditory or tangible information. They are studied independently or in combination, congruently or not, using static and dynamic expressions. The study of these perceptual modalities is rooted in the field of embodied cognition, and aims to interrogate both low-level (non-reflexive, automatic) and high-level (reflexive) processes in the integration of information, whether for understanding others’ actions, emotions or mental states, or for constructing a situation model. In computerized interaction processing, this distinction between low-level and high-level is also reflected in the levels of descriptors.

Low-level processes

We are interested in motor resonance and emotional contagion, which are underpinned by low-level processes in the perception of others.

  • One of these processes is the automatic anticipation of the continuation of a movement, naturally at play when perceiving the movement of an object, a body or a human face (Prigent et al., 2017; Dozolme et al., 2018). We are also investigating the possible modulation of this process, particularly during the perception of a social movement that does or does not invite the observer to interact (in collaboration with the Structures Formelles du Langage laboratory, UMR (Unité Mixte de Recherche) 7023, Université Paris 8). 
  • Other low-level processes studied are those involved in facial mimicry (Philip et al., 2017) and in the way users perceive congruent and incongruent combinations of emotional expressions across different modalities (facial expressions, postures, text, …) (Martin et al., 2018).
  • As part of collaborations between the AMI (Architectures et Modèles pour l'Interaction) and CPU (Cognition Perception et Usages) groups, we are investigating how combinations of different stimuli (tactile, visual and auditory) are perceived (Tsalamlal et al., 2016; Gaffary et al., 2018), and also how interaction modalities with devices (haptic, tactile, tangible) promote the construction of a spatial representation (Arnaud et al., 2016; Bellik & Clavel, 2016, 2017).

High-level processes

The high-level processes studied relate more to the user experience, and to feelings of presence or immersion in the context of interaction with virtual reality devices.

  • In Mehdi Boukhris’s thesis (2012-2015), this led to the question of the perceived fidelity of a virtual clone, asking in particular what information is needed to ensure a high degree of fidelity between a virtual agent and its real referent when expressing emotions. (link to thesis on HAL)
  • Since 2017, as part of Delphine Potdevin’s thesis (2017-2020), we have been studying how the interaction modalities of an animated conversational agent can influence different dimensions of the user experience: perceived intimacy, brand image, etc. (link to thesis on HAL)
  • In collaboration with the ILES group, we are seeking to produce a virtual agent communicating via French Sign Language (LSF) that is both intelligible and accepted by deaf signers. Félix Bigand’s thesis (2018-2021), funded by the ROSETTA project, will enable us to explore the high- and low-level cognitive processes involved in the perception of a motor movement, this time in the context of a fully-fledged language, LSF (link to thesis on HAL).