Collaborative Intelligent Systems

Description

Context

We draw on theories and models from Cognitive Science and Multimodal Interaction to construct and evaluate intelligent systems that observe and model human cognition in order to enable human-aware collaboration. Our research involves building and evaluating systems that observe, model and interact with humans at the sensorimotor, corporal, operational and cognitive levels.

Activities

Our research program pursues scientific investigation along four research axes:

1. Multimodal perception of human actions, attention and emotion.
2. Visual, acoustic and corporal displays for interaction with humans.
3. Online modeling of human awareness, emotion and understanding.
4. Interactive multimodal behaviors in ambient collaborative situations: data, models and evaluation.

Chair events

1. Artificial Intelligence: a Rupture Technology for Innovation, AI4EU Web Café webinar by James L. Crowley, 14 Nov 2019.
2. Artificial Intelligence for Human Computer Interaction, keynote address by James L. Crowley, 31e Conférence sur l'Interaction Homme-Machine (IHM'19), 12 Dec 2019.
3. Is Artificial Intelligence a Rupture Technology for Scientific Research?, seminar at Institut Néel, Univ. Grenoble, by J. L. Crowley, 31 Jan 2020.
4. The Role of Emotion in Concept Formation and Recall during Problem Solving, invited presentation at the Humane AI Net workshop on AI and Human Memory, 23 Feb 2021.

Scientific publications

2021

  • N. Aboubakr, M. Popova and J. L. Crowley, "Color-based Fusion of MRI Modalities for Brain Tumor Segmentation", Proceedings of the 2021 International Conference on Medical Imaging and Computer Aided Diagnosis (MICAD 2021), April 2021.

2020

  • J. Cumin, G. Lefebvre, F. Ramparany and J. L. Crowley, "PSINES: Activity and Availability Prediction for Adaptive Ambient Intelligence", ACM Transactions on Autonomous and Adaptive Systems (TAAS), Vol. 15, No. 1, pp. 1-12, 2020.
     
  • G. Bailly and F. Elisei, "Speech in action: designing challenges that require incremental processing of self and others' speech and performative gestures", Workshop on Natural Language Generation for Human-Robot Interaction at the International Conference on Human-Robot Interaction (HRI), Cambridge, UK, 2020.

2019

  • J. Le Loudec, T. Guntz, J. L. Crowley and D. Vaufreydaz, "Deep learning investigation for chess player attention prediction using eye-tracking and game data", 11th ACM Symposium on Eye Tracking Research & Applications (ETRA 2019), June 2019.
     
  • N. Aboubakr, J. L. Crowley and R. Ronfard, "Recognizing Manipulation Actions from State-Transformations", Sixth International Workshop on Egocentric Perception, Interaction and Computing (EPIC 2019) at the IEEE Conference on Computer Vision and Pattern Recognition, June 2019.
     
  • J. L. Crowley, A. Paiva, G. O'Sullivan, A. Nowak, C. Jonker, D. Pedreschi, F. Giannotti, F. van Harmelen, J. Hajic, J. van den Hoven, R. Chatila and Y. Rogers, "Toward AI Systems that Augment and Empower Humans by Understanding Us, our Society and the World Around Us", May 2019.

2018

  • G. Bailly and F. Elisei, "Demonstrating and learning multimodal socio-communicative behaviors for HRI: building interactive models from immersive teleoperation data", AI-MHRI: AI for Multimodal Human Robot Interaction Workshop at the Federated AI Meeting (FAIM), Stockholm, Sweden, pp. 39-43, 2018.
     
  • D.-C. Nguyen, G. Bailly and F. Elisei, "Comparing cascaded LSTM architectures for generating gaze-aware head motion from speech in HAI task-oriented dialogs", HCI International, Las Vegas, USA, pp. 164-175, 2018.
     
  • R. Cambuzat, F. Elisei, G. Bailly, O. Simonin and A. Spalanzani, "Immersive teleoperation of the eye gaze of social robots", International Symposium on Robotics (ISR), Munich, Germany, pp. 232-239, 2018.
     
  • G. Nieto, F. Devernay and J. L. Crowley, "Rendu basé image avec contraintes sur les gradients", Traitement du Signal, Lavoisier, pp. 1-26, 2018.
     
  • J. Coutaz and J. L. Crowley, "AppsGate, un écosystème domestique programmable", Journal d'Interaction Personne-Système (JIPS), Vol. 7, No. 1, pp. 1-35, Nov 2018.
     
  • R. Brégier, F. Devernay, L. Leyrit and J. L. Crowley, "Defining the Pose of any 3D Rigid Object and an Associated Distance", International Journal of Computer Vision, Springer Verlag, Vol. 126, No. 6, pp. 571-596, 2018.
     
  • T. Guntz, R. Balzarini, D. Vaufreydaz and J. L. Crowley, "Multimodal Observation and Classification of People Engaged in Problem Solving: Application to Chess Players", Multimodal Technologies and Interaction, Vol. 2, No. 2, p. 11, 2018.
     
  • T. Guntz, J. L. Crowley, D. Vaufreydaz, R. Balzarini and P. Dessus, "The role of emotion in problem solving: first results from observing chess", Workshop on Modeling Cognitive Processes from Multimodal Data at the International Conference on Multimodal Interaction (ICMI 2018), Oct 2018.
     
  • S. Nabil, R. Balzarini, F. Devernay and J. L. Crowley, "Designing objective quality metrics for panoramic videos based on human perception", Irish Machine Vision and Image Processing Conference (IMVIP 2018), pp. 1-4, Sept 2018.
     
  • N. Aboubakr, R. Ronfard and J. L. Crowley, "Recognition and Localization of Food in Cooking Videos", Workshop on Multimedia for Cooking and Eating Activities at the International Joint Conference on Artificial Intelligence (IJCAI 2018), Stockholm, July 2018.