
New Paper on Depression for Cognitive Computation

We have published another paper on depression in Cognitive Computation:

N. Aloshban, A. Esposito and A. Vinciarelli, "What You Say Or How You Say It? Depression Detection Through Joint Modelling of Linguistic and Acoustic Aspects of Speech", accepted for publication in Cognitive Computation, to appear in 2020.

The key point of the paper is that depressed and control participants tend to manifest their condition in different ways. In particular, while depressed individuals tend to show their pathology through nonverbal behaviour (how they speak), control ones tend to do it through their lexical choices (what they say). To the best of our knowledge, this is the first time such an observation has been made in the literature.

Distinguished Lecture at Institute of Engineering and Management in India

I have been given the opportunity to give a Distinguished Lecture at the Institute of Engineering and Management in India:

https://iem.edu.in/news-events/iem-uem-foreign-distinguished-lecture-by-prof-alessandro-vinciarelli-university-of-glasgow-uk-on-7th-september-2020/

I introduced the general aspects of Social Signal Processing and Social AI, in particular when it comes to the analysis of nonverbal behaviour in human-human and human-machine interactions. I used the analysis of conflict as an example of how AI-driven technologies can be used to understand social and psychological phenomena. The lecture was attended by roughly 300 people.

Success at ICMI 2020

We got two full papers accepted at the next ACM International Conference on Multimodal Interaction:

Both articles revolve around mental health issues and, in particular, around the possibility of inferring the mental condition of people from their observable behaviour. In the case of depression, the approach takes into account both what people say and how they say it. The focus is on detecting depression from only a few seconds of speech, something that is important because depressed people find it difficult to speak for a long time. In the case of attachment, the proposed approach analyses the way children use an interactive system, and the results show that securely attached children tend to interact better with the design of the system, in line with the expectations of Attachment Theory.

It is a great pleasure to say that the publications have been obtained with the help of one of my most senior PhD students (Nujud Aloshban) and an experienced postdoctoral researcher with whom I have been collaborating for 5 years.

New Paper on Depression

A paper written in collaboration with Fuxiang Tao and Anna Esposito has been accepted for presentation at Interspeech 2020 (the conference is rated A by CORE):

F. Tao, A. Esposito and A. Vinciarelli, "Spotting the Traces of Depression in Read Speech: An Approach Based on Computational Paralinguistics and Social Signal Processing", Proceedings of Interspeech, 2020.

The most interesting aspect of the article is that three particular behavioural cues (reading speed, number of pauses and average pause length) improve by close to 20 points the accuracy of a classifier that discriminates between depressed and non-depressed readers. In other words, the paper proposes the use of a classifier as a means to test the association between depression and behavioural cues, as an alternative to the approaches commonly used in psychology for the same purpose (e.g., correlational analysis). In addition, the paper shows that neuroscience findings, in particular those on the interplay between depression and the neural mechanisms underlying language processing, can help to improve technology.
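
As an illustration of the general idea (not the actual setup of the paper), here is a minimal sketch that compares the cross-validated accuracy of a classifier with and without the three behavioural cues. The data, feature names and classifier choice below are entirely hypothetical, synthetic placeholders:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: each row is a reader, labels are 1 = depressed, 0 = control.
rng = np.random.default_rng(0)
n_readers = 100
acoustic = rng.normal(size=(n_readers, 20))   # stand-in paralinguistic features
cues = rng.normal(size=(n_readers, 3))        # reading speed, number of pauses, average pause length
labels = rng.integers(0, 2, size=n_readers)

def cv_accuracy(features):
    # Cross-validated accuracy of a simple classifier on the given feature set.
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, features, labels, cv=5).mean()

baseline = cv_accuracy(acoustic)
with_cues = cv_accuracy(np.hstack([acoustic, cues]))
print(f"Accuracy without cues: {baseline:.2f}")
print(f"Accuracy with cues:    {with_cues:.2f}")
print(f"Gain:                  {with_cues - baseline:+.2f}")

On real data, a consistent accuracy gain when the cues are added is evidence that the cues carry information about the condition, which is the sense in which a classifier can replace a correlational analysis as a test of association.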

ANSA Announces New Paper

My recent work on Visual Social Distancing:

M. Cristani, A. Del Bue, V. Murino, F. Setti and A. Vinciarelli, "The Visual Social Distancing Problem", IEEE Access, Vol. 8, pp. 126876-126886, 2020.
https://ieeexplore.ieee.org/document/9138385

has been featured in an article by ANSA, the most important press agency in Italy:

https://www.ansa.it/sito/notizie/tecnologia/hitech/2020/07/21/intelligenza-artificiale-anti-covid-misura-distanza-sicura_6e0da301-fb97-46f5-8f6a-d3c209e0d9b7.html

The paper proposes an approach for the analysis of the way people gather in public spaces that takes into account not only the physical, but also the social aspects of interpersonal distance. The work is a nice example of how multiple disciplines (Computer Vision and Social Signal Processing in this case) can collaborate to address a technological problem.


Boundless Podcast Participation

I had the great pleasure of participating as a speaker in the Boundless Podcast, an initiative led by Richard Foster Fletcher to spread awareness about Artificial Intelligence and its latest developments. The discussion revolved around socially intelligent agents and, in particular, their potential as a means to improve people's lives.

The full recording of my interview is available at the following link:

Alessandro Vinciarelli at the Boundless Podcast

while the many other interesting episodes of the series can be accessed here.

An Interview for “Scientific American”

I have been mentioned in a recent article posted on the blog of "Scientific American", where I was asked to give my opinion about the role of context in the recognition of emotions and, more generally, in the understanding of human behaviour:

https://blogs.scientificamerican.com/observations/how-do-you-know-which-emotion-a-facial-expression-represents/

The article provides a very extensive account of the available opinions on the matter and it confirms how elusive the concept of context is. Anyone who has been working in technology for at least a decade has certainly passed through successive waves of "context-aware" or "context-sensitive" approaches, but the meaning of the word context remains largely uncertain. The Scientific American blog post tries to shed some light by asking a large number of experts what they think about it, and the result is highly instructive. From my point of view, the most important aspects are, on one side, the interplay between the analysis of context and the application of multimodal technologies and, on the other side, the ever growing amount of information that can be gathered through the pervasive network of sensors and technologies that constellates our lives.

International Workshop on Social & Emotion AI for Industry

On September 3rd, I co-chaired with Olga Perepelkina the first International Workshop on Social and Emotion AI for Industry, organised in collaboration with Neurodata Lab:

http://seaixi.neurodatalab.com

The workshop was held in Cambridge, in conjunction with the IEEE International Conference on Affective Computing and Intelligent Interaction and, according to the organisers of the conference, it attracted the largest number of participants among the workshops organised on the same day.

The overall goal of the workshop was to gather academic researchers and industry practitioners active in social and emotion AI, the domains aimed at developing technologies that deal with affective and interactional aspects of their users. The program included refereed papers (50% acceptance rate) as well as keynote presentations given by speakers active in both academia and industry.

We learned about the use of AI-driven technologies to help people affected by different neurological diseases thanks to Emteq and its CEO Charles Nduka, about the effectiveness of AI technologies for the affective analysis of audio thanks to Audeering and its CEO Dagmar Schuller, and about Microsoft's activities in empathic computing thanks to Daniel McDuff.

From an academic point of view, the workshop benefited from the talks by Catherine Pelachaud (an introduction to impression management for virtual agents), Antonio Camurri (an introduction to his new project on next-generation motion capture systems) and Andrew McStay (an ethical perspective on AI).

New Article on Personality Recognition

Since I study nonverbal communication, I have always been fascinated by our innate tendency to leak information about ourselves through our behaviour. The new article "What an 'Ehm' Leaks About You: Mapping Fillers into Personality Traits with Quantum Evolutionary Feature Selection Algorithms", just accepted for publication by the IEEE Transactions on Affective Computing, is a further step in such a direction. The work shows that one of our most minor behavioural cues, the short vocalisations like "ehm" and "uhm" that we utter when we do not know what to say next, conveys information about our personality traits. The results have been obtained by analysing around 3,000 fillers uttered by 120 persons.

The work has been done in collaboration with Mohammad Tayarani, a former postdoc of mine who recently obtained a fellowship at the University of Hertfordshire, and Anna Esposito, one of my very first mentors and a longtime friend and colleague. Mohammad developed a new feature selection approach that allows one to spot the features (physical measurements extracted from the fillers) most likely to account for personality. Anna contributed her deep knowledge of speech processing and her extensive experience in interdisciplinary work between Computing Science and Psychology. The IEEE Transactions on Affective Computing is in the top 5% of the Scimago ranking and has an impact factor of 6.28.
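
For readers curious about what a feature selection step of this kind looks like in practice, here is a minimal sketch using plain greedy forward selection on synthetic data. It is only a simplified stand-in for the quantum evolutionary algorithm of the paper, and all data, feature meanings and model choices below are hypothetical:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Toy data: rows are fillers, columns are hypothetical acoustic measurements
# (e.g., pitch, energy, duration); the target is a personality trait score.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 15))
trait = X[:, 2] - 0.5 * X[:, 7] + rng.normal(scale=0.5, size=300)

def cv_score(columns):
    # Cross-validated fit of a simple regressor on the selected columns.
    if not columns:
        return -np.inf
    return cross_val_score(Ridge(), X[:, columns], trait, cv=5).mean()

selected, remaining = [], list(range(X.shape[1]))
while remaining:
    # Add the single feature whose inclusion improves the score the most.
    best = max(remaining, key=lambda c: cv_score(selected + [c]))
    if cv_score(selected + [best]) <= cv_score(selected):
        break
    selected.append(best)
    remaining.remove(best)

print("Selected feature indices:", selected)

The selected indices are the measurements that, on this toy data, best predict the trait score; the same wrapper logic, with a far more powerful search strategy, is what identifies the filler measurements most likely to account for personality.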

New Article on Preferences of Prospective Students

I have published a new article that analyses the preferences of roughly 5,000 prospective students attending one of the Open Days organised by the University of Glasgow:

https://link.springer.com/chapter/10.1007/978-3-030-15939-9_5

The full citation of the paper is as follows: A. Vinciarelli, W. Riviera, F. Dalmasso, S. Raue and C. Abeyratna, "What Do Prospective Students Want? An Observational Study of Preferences About Subject of Study in Higher Education", in "Innovations in Big Data Mining and Embedded Knowledge", A. Esposito, A.M. Esposito and L.C. Jain (eds.), pp. 83-97.

The article has been written in collaboration with Bizvento, a start-up founded by a few students of the School of Computing Science of our University, and it was supported by The Data Lab. It is the very first time I have written a paper with a sociological slant (it was a chance for me to read some of the sociological literature about the relationship between family background and the level of education people attain). In parallel, it was an interesting exercise in how much information can be obtained by crossing multiple publicly available repositories and datasets.