UKRI CDT on Socially Intelligent Artificial Agents (2019-2027)
Acronym: SOCIAL
Funding Agency: UKRI
Duration: 102 months (April 2019 – September 2027)
PI: Alessandro Vinciarelli
Total Budget: £4,902,252 (ownership 45%, £2,206,013.40)
Website: http://www.social-cdt.org
The goal of the project is to train 50 PhD students in Artificial Social Intelligence, the AI domain aimed at endowing machines with social intelligence, i.e., the ability to deal with users’ intentions, attitudes, emotions, personality, etc. The main training areas of the project are the science of social interactions (Psychology, Cognition and Neuroscience applied to human-human and human-machine interactions), the automatic analysis of human behaviour (AI-driven detection and interpretation of nonverbal behavioural cues conveying social meaning), the automatic synthesis of behaviour (generation of artificial nonverbal behavioural cues conveying socially meaningful information) and the sociology of AI technologies (analysis of the impact that socially intelligent technologies have on organisations and society).
Partners: University of Glasgow (UK).
Transforming auditory-based social interaction and communication in AR/VR (2021-2025)
Acronym: SONICOM
Funding Agency: Horizon 2020 – European Commission
Duration: 60 months (January 2021 – December 2025)
Co-PI: Alessandro Vinciarelli
Total Budget: Euros 499,876.25 (ownership 50%, Euros 249,938.12)
Immersive audio is our everyday experience of being able to hear and interact with the sounds around us. Simulating spatially located sounds in virtual or augmented reality (VR/AR) must be done in a unique way for each individual and currently requires expensive and time-consuming individual measurements, making it commercially unfeasible. Furthermore, the impact of immersive audio beyond perceptual metrics such as presence and localisation is still an unexplored area of research, specifically when related to social interaction, entering the behavioural and cognitive realms. SONICOM will revolutionise the way we interact socially within AR/VR environments and applications by leveraging methods from Artificial Intelligence (AI) to design a new generation of immersive audio technologies and techniques, looking specifically at personalisation and customisation of the audio rendering. Using a data-driven approach, it will explore, map and model how the physical characteristics of spatialised auditory stimuli can influence observable behavioural, physiological, kinematic, and psychophysical reactions of listeners within social interaction scenarios. The developed techniques and models will be evaluated in an ecologically valid manner, exploiting AR/VR simulations as well as real-life scenarios, and developing appropriate hardware and software proofs-of-concept. Finally, in order to reinforce the idea of reproducible research and to promote future development and innovation in the area of auditory-based social interaction, the SONICOM Ecosystem will be created, which will include auditory data closely linked with model implementations and immersive audio rendering components.
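The individual measurements mentioned above are typically head-related impulse responses (HRIRs); the sketch below shows the standard rendering step they feed into, namely convolving a mono source with an HRIR pair to obtain a binaural signal. This is a minimal illustration rather than SONICOM code: the HRIRs here are random placeholders, whereas selecting or personalising them per listener is precisely the costly step the project aims to ease.

```python
# Minimal sketch of binaural rendering: spatialising a mono signal by
# convolving it with a head-related impulse response (HRIR) pair.
# The HRIR arrays below are placeholders; a real system would select or
# personalise them per listener.
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Return a (samples, 2) stereo array for one source direction."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

# Toy example: white-noise source, dummy 256-tap HRIRs.
rng = np.random.default_rng(0)
mono = rng.standard_normal(48_000)         # 1 s at 48 kHz
hrir_l = rng.standard_normal(256) * 0.01   # placeholder impulse responses
hrir_r = rng.standard_normal(256) * 0.01
stereo = render_binaural(mono, hrir_l, hrir_r)
print(stereo.shape)                        # (48255, 2)
```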
Socially Competent Robots (2016-2020)
Acronym: SoCoRo
Funding Agency: EPSRC
Duration: 42 months (January 2017 – June 2020)
PI: Alessandro Vinciarelli
Total Budget: £355,564 (ownership 50%, £177,782)
According to 2011 census figures, Autism Spectrum Disorder (ASD) affects 695,000 people in the UK. Roughly 547,000 of these are 18 or over (1.3% of working-age adults), and these adults encounter serious difficulties in their everyday lives. In particular, the unemployment rate among adults with ASD is higher than 85%. This is nearly double the unemployment rate of 48% for the wider disabled population (ONS 2009) and compares with a UK-wide unemployment rate of 5.5% (ONS 2015). One reason for this is that people with ASD struggle to correctly interpret social signals, i.e., the expressive behavioural cues through which people manifest what they feel or think (facial expressions, vocalisations, gestures, etc.). Therefore, this project will develop a Socially-Competent Robot Training Buddy that will help adults with ASD to better deal with social signals in work-related scenarios.
Partners: University of Glasgow (UK), Heriot-Watt University (UK).
MultiModal Mall Entertainment Robot (2016-2019)
Acronym: MuMMER
Funding Agency: Horizon 2020 – European Commission
Duration: 42 months (April 2016 – August 2019)
Coordinator: Mary Ellen Foster
Total Budget: Euros 5,345,135 (Euros 815,178 for the University of Glasgow, ownership 30%, Euros 244,553.40)
The project proposes to address the important and growing market of consumer entertainment robotics by advancing the technologies needed to support this area of robotics, and by explicitly addressing issues of consumer acceptance, thus creating new European business and employment opportunities in consumer robotics. Specifically, the project will develop a humanoid robot (based on Aldebaran’s Pepper platform) able to engage and interact autonomously and naturally in the dynamic environment of a public shopping mall, providing an engaging and entertaining experience to the general public. Using co-design methods, the project will work together with stakeholders, including customers, retailers and business managers, to develop truly engaging robot behaviours, including telling jokes or playing games, as well as providing guidance and information and collecting customer feedback. Crucially, the robot will exhibit behaviour that is socially appropriate, combining speech-based interaction with non-verbal communication and human-aware navigation. To support this behaviour, the project will develop and integrate new methods from audiovisual scene processing, social signal processing, high-level action selection, and human-aware robot navigation. Throughout the project, the robot will be deployed in a large public shopping mall in Finland: initially for short visits to aid in collaborative scenario development, co-design and system evaluation, and later for a long-term field study in the fourth year of the project. Through the co-design approach, the project will both study and foster the acceptance of consumer robots and thus positively influence the consumer market for service robots.
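As an illustration of what high-level action selection means here, the following is a toy, rule-based sketch, not the MuMMER system (whose perception inputs and behaviour repertoire are far richer), that maps a simplified perception of the scene to one of the robot behaviours mentioned above.

```python
# Illustrative sketch (not the MuMMER implementation) of high-level action
# selection: mapping perceived social signals to a robot behaviour.
from dataclasses import dataclass

@dataclass
class Perception:
    person_present: bool
    engaged: bool          # e.g. facing the robot, from audiovisual processing
    asked_for_help: bool   # e.g. from speech recognition

def select_action(p: Perception) -> str:
    """Pick the next high-level behaviour from the current perception."""
    if not p.person_present:
        return "idle"                # wait and observe the scene
    if p.asked_for_help:
        return "give_guidance"       # directions, mall information
    if p.engaged:
        return "entertain"           # tell a joke or propose a game
    return "attract_attention"       # socially appropriate greeting

print(select_action(Perception(True, True, False)))  # -> entertain
```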
Partners: University of Glasgow (UK), Heriot-Watt University (UK), Idiap Research Institute (Switzerland), CNRS (France), Aldebaran Robotics (France), Teknologian tutkimuskeskus VTT Oy (Finland) and Kiinteistö Oy Ideapark AB (Finland).
School Attachment Monitor (2015-2018)
Acronym: SAM
Funding Agency: EPSRC
Duration: 36 months (September 2015 – August 2018)
PI: Stephen Brewster
Total Budget: £776,875 (ownership 50%, £388,437.5)
The goal of SAM is to make large-scale Attachment screening possible by reducing the time and costs required for the Manchester Child Attachment Story Task (MCAST) assessment. The approach consists of automating the key steps of MCAST to 1) reduce the time needed to complete the test (higher efficiency) and 2) allow the involvement of personnel with no MCAST training (lower costs). The automation of MCAST is also expected to provide new insights into Attachment and its observable, machine-detectable behavioural markers, enabling better future measurement of Attachment. The project will develop a computer-based tool which can be used to measure Attachment across the population in a rapid, cost-effective way to support MCAST assessors. The children will be guided through the story vignettes by an on-screen avatar. With SAM, the screening sessions and preliminary data analysis can be done without the presence of trained MCAST assessors; they would be needed only if a child were tagged as being in one of the problem categories, in which case a standard MCAST assessment would be undertaken. This allows large-scale population screening of Attachment patterns for the first time. The development of SAM and the rapid screening of Attachment in large groups are expected to create a paradigm shift in the treatment of child psychiatric disorders.
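The triage logic described above can be summarised in a short, hypothetical sketch: an automatic classifier scores each recorded session, and only sessions whose risk score crosses a threshold are referred to a trained MCAST assessor. The classifier, field names and threshold below are illustrative placeholders, not SAM's actual model.

```python
# Hypothetical sketch of SAM-style triage: clear most sessions automatically
# and refer uncertain or at-risk cases for a human-administered MCAST.
def triage(sessions, classify, risk_threshold=0.5):
    """Split sessions into those cleared automatically and those
    referred for a standard MCAST assessment."""
    cleared, referred = [], []
    for session in sessions:
        p_at_risk = classify(session)   # probability of a problem category
        if p_at_risk >= risk_threshold:
            referred.append(session)
        else:
            cleared.append(session)
    return cleared, referred

# Toy example: pretend the risk score is precomputed in each record.
sessions = [{"id": 1, "score": 0.1}, {"id": 2, "score": 0.8}]
cleared, referred = triage(sessions, classify=lambda s: s["score"])
print([s["id"] for s in referred])  # -> [2]
```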
Partners: University of Glasgow (UK), Glasgow City Council and National Society for the Prevention of Cruelty to Children.
Knowledge Extraction for Business Opportunities (2015-2016)
Acronym: KEBOP
Funding Agency: The Data Lab (Scottish Funding Council)
Duration: 6 months (August 2015 – January 2016)
PI: Alessandro Vinciarelli
Total Budget: £48,199 (ownership 100%, £48,199)
Participating in large-scale events such as conventions, fairs or conferences is one of the main opportunities we have to meet the people who can change our lives, whether this means the employer that perfectly fits our professional aspirations, the business partners allowing our company to grow or the customers that need our products and services. However, when an event involves hundreds (or even thousands) of people, it is difficult, if not impossible, to spot the few individuals who are actually worth meeting. Serendipity can help, but it is not a strategy. The goal of this project is to enrich the Bizvento platform – successfully deployed at a large number of business, academic and social events – with technologies allowing people to move beyond serendipity and to strategically target their networking efforts towards the people who are actually worth meeting. In particular, the project targets the development of technologies – based on the Bizvento platform – that automatically extract and analyse social affiliation networks. These networks identify groups of event participants that are close in terms of interests and needs and, therefore, are more likely to be useful to one another. In other words, the project’s technologies allow Bizvento to effectively connect people likely to be useful to one another.
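For illustration, a social affiliation network can be represented as a bipartite structure linking participants to declared interests and then projected onto participants, so that edge weights count shared interests. The sketch below uses invented names and interests; it does not describe the actual Bizvento data model.

```python
# Sketch of a social affiliation network (names and data are invented):
# participants are linked to declared interests, and two participants are
# connected in the projected network with a weight equal to the number of
# interests they share.
from itertools import combinations

declared = {
    "alice": {"robotics", "speech"},
    "bob":   {"speech", "vision"},
    "carol": {"robotics", "speech", "vision"},
}

# Project the participant-interest (bipartite) network onto participants.
projection = {}
for a, b in combinations(declared, 2):
    shared = declared[a] & declared[b]
    if shared:
        projection[(a, b)] = len(shared)

print(projection)
# {('alice', 'bob'): 1, ('alice', 'carol'): 2, ('bob', 'carol'): 2}
```

High-weight edges then point to pairs of participants who are likely to be useful to one another, which is what turns undirected networking into a targeted strategy.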
Partners: University of Glasgow (UK) and Bizvento (UK).
Social Signal Processing Network (2009-2014)
Acronym: SSPNet
Funding Agency: FP7 – European Commission
Duration: 60 months (February 2009 – January 2014)
Coordinator: Alessandro Vinciarelli
Total Budget: Euros 6,262,000 (Euros 1,082,664 for the University of Glasgow, ownership 100%)
Website: www.sspnet.eu
SSPNet is a Network of Excellence aimed at establishing a European research community on Social Signal Processing (SSP), the emerging domain that investigates the modelling, analysis and synthesis of social signals. The project includes experts in both technology (analysis and synthesis of human behaviour) and the human sciences (anthropology, social psychology), with the aim of developing a truly multidisciplinary approach to SSP. The ultimate goal of the project is to eliminate the entry barriers to SSP by 1) disseminating knowledge (creation of public literature repositories and organisation of scientific events), 2) providing data (creation of public data and benchmark repositories), and 3) providing tools (creation of public repositories of automatic systems for the analysis and synthesis of social signals).
Partners: University of Glasgow (UK), Idiap Research Institute (Switzerland), Imperial College (UK), University of Edinburgh (UK), Queen's University Belfast (UK), University of Twente (The Netherlands), Technical University of Delft (The Netherlands), University of Geneva (Switzerland), CNRS (France), DFKI (Germany), University of Rome Tre (Italy), University of Gothenburg (Sweden).
Cross-Cultural Personality Perception (2009-2012)
Acronym: CCPP
Funding Agency: Swiss National Science Foundation
Duration: 36 months (May 2009 – April 2012)
Principal Investigators: Alessandro Vinciarelli, Bayya Yegnanarayana
Team: Gelareh Mohammadi, Alessandro Vinciarelli
Total Budget: CHF 180,000 (ownership 100%)
Website: https://www.idiap.ch/scientific-research/projects/cross-cultural-personality-perception
Psychologists have shown that there is a correlation between the nonverbal characteristics of speaking on one side and personality traits as perceived by listeners on the other. For example, individuals who speak loudly are perceived as more extroverted than individuals who speak softly, and individuals who speak quickly are perceived as more brilliant than individuals who speak slowly. The problem is that the mapping between nonverbal characteristics of speaking and perceived personality traits is, in many cases, culture-dependent. In other words, the above examples are known to apply in southern Europe, but they can be wrong when applied in other cultural areas.
The goal of this project is to develop systems that address the above problem by automatically “translating” the personality of a speaker. This means that the nonverbal characteristics of a speaker, giving rise to certain personality perceptions in a given culture, should be modified automatically to give rise to the same personality perceptions in another culture. For example, the recording of a southern Mediterranean person speaking loudly and quickly should be modified so that the resulting voice has the nonverbal characteristics of an extroverted and brilliant person (see the above example) in the culture of a listener coming from an area other than southern Europe.
The project can be described as an application of findings from personality psychology to speech analysis and synthesis. In fact, the project starts from the correlation between physical characteristics of the voice and personality traits and leads to engineering applications where 1) natural voices are analysed to infer personality perceptions from physical characteristics, and 2) synthetic voices are modified to elicit desired personality perceptions.
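As a heavily simplified illustration of this analyse-and-modify loop, the sketch below measures one nonverbal cue (loudness, as RMS energy) and rescales a voice signal towards a target value, standing in for the value that elicits the desired perception in the listener's culture. The target value is an invented placeholder; a real system would also manipulate speaking rate, pitch and other prosodic cues.

```python
# Toy illustration of "personality translation": measure a nonverbal cue
# (loudness as RMS energy) and rescale it towards a culture-specific target.
# The target value is a placeholder, not a measured cultural norm.
import numpy as np

def rms(signal):
    return float(np.sqrt(np.mean(signal ** 2)))

def match_loudness(signal, target_rms):
    """Apply a constant gain so the signal's RMS matches target_rms."""
    current = rms(signal)
    return signal * (target_rms / current) if current > 0 else signal

rng = np.random.default_rng(0)
voice = 0.2 * rng.standard_normal(16_000)   # stand-in for 1 s of speech
translated = match_loudness(voice, target_rms=0.05)
print(round(rms(translated), 3))            # -> 0.05
```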
Partners: Idiap Research Institute (Switzerland), University of Glasgow (UK), Indian Institute of Technology (India).
Enhanced Medical Multimodal data Access (2007-2011)
Acronym: EMMA
Funding Agency: Hasler Foundation (Switzerland)
Duration: 48 months (June 2007 – May 2011)
Principal Investigators: Barbara Caputo and Alessandro Vinciarelli
Team: Nicolae Suditu and Alessandro Vinciarelli
Total Budget: CHF 360,000 (ownership 50%)
The project revolves around effective access to large collections of medical images accompanied by textual reports written by doctors. Alessandro Vinciarelli works on the development of multimodal relevance feedback techniques that jointly model images and accompanying texts. The goal is to use text to discriminate between images that are visually similar even though they differ in content, and vice versa. The approach under investigation is based on a probabilistic Bayesian framework that modifies the probability of each image being relevant to a query based on the feedback provided by users.
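A minimal sketch of the kind of Bayesian update involved, assuming a single similarity score per image: each image carries a probability of being relevant, and one round of user feedback reweights and renormalises those probabilities. The similarity values and the exponential likelihood below are illustrative choices, not the project's actual model, which combines visual and textual evidence.

```python
# Sketch of Bayesian relevance feedback: keep a probability of relevance for
# every image and update it each time the user marks a displayed image as
# relevant or not. Similarities here are placeholder numbers.
import numpy as np

def update(relevance, similarities, liked, sharpness=5.0):
    """One feedback round: raise the probability of images similar to a
    liked example, lower it otherwise, then renormalise."""
    likelihood = np.exp(sharpness * similarities) if liked \
        else np.exp(-sharpness * similarities)
    posterior = relevance * likelihood
    return posterior / posterior.sum()

# Toy collection of 5 images with a uniform prior.
prior = np.full(5, 0.2)
sim_to_clicked = np.array([0.9, 0.1, 0.8, 0.2, 0.1])  # placeholder scores
posterior = update(prior, sim_to_clicked, liked=True)
print(posterior.round(3))  # mass concentrates on images 0 and 2
```

Repeating the update over several feedback rounds concentrates the probability mass on the images the user is actually looking for, which is the behaviour the paragraph above describes.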
Partners: Idiap Research Institute (Switzerland).