Perceptual Computing and Human Sensing Lab

Activities in the PHuSe Lab aim to bridge the gap between the signals gathered by the various modalities used to sense humans (from physiological signals to perceptual and behavioural cues) and the understanding of those signals, so as to advance natural interfaces, social interaction, health and wellbeing. Current research concerns modelling and understanding human face identities and affective expressions, cognitive/emotional states and, more generally, non-verbal behaviours such as hand/body gestures and eye/gaze behaviour.
To this end we draw on a variety of sources and signals, from images and videos to depth sensing (Kinect), physiological signals (EEG, ECG, EDA), eye-tracking data, fMRI and classic clinical/medical modalities. We employ a range of theoretical approaches and tools, including sparse coding, dimensionality reduction and manifold learning, Bayesian graphical modelling and Bayesian nonparametrics, and stochastic differential equations and processes. To support this endeavour, we also exploit parallel computing, in particular GPU computing with CUDA.
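As a flavour of the sparse-coding problems mentioned above: given a dictionary D and a signal y, one seeks a sparse coefficient vector x with y ≈ Dx. The sketch below uses ISTA (iterative soft-thresholding), a standard textbook baseline chosen here purely for illustration; it is not the lab's LiMapS/k-LiMapS method, whose iterations differ.

```python
import numpy as np

def ista(D, y, lam=0.1, step=None, n_iter=500):
    """Generic sparse coding via ISTA for
    min_x 0.5*||y - D x||^2 + lam*||x||_1.
    Illustrative baseline only (not the LiMapS/k-LiMapS algorithms)."""
    if step is None:
        # Step size 1/L, with L = ||D||_2^2 the Lipschitz constant
        # of the gradient of the quadratic term.
        step = 1.0 / np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)          # gradient of 0.5*||y - Dx||^2
        z = x - step * grad               # gradient descent step
        # Soft-thresholding: the proximal operator of the l1 penalty.
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

# Toy usage: recover a 2-sparse code from a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 50))
D /= np.linalg.norm(D, axis=0)            # normalise dictionary atoms
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]
y = D @ x_true
x_hat = ista(D, y, lam=0.05)
```

On well-conditioned random dictionaries like this one, the two largest-magnitude entries of `x_hat` land on the true support (indices 3 and 17); greedy or Lipschitzian-mapping schemes such as k-LiMapS pursue the same goal with different iteration dynamics.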

Current highlights

  • Data fusion for emotion recognition
  • ECG compression by k-LiMapS
  • Sparse coding: the LiMapS and k-LiMapS algorithms
  • Face recognition