I am very happy to participate in Book Week Scotland, an event organised by the Scottish Book Trust that involves readers and public libraries. I have been asked to prepare a short video about Artificial Intelligence, entitled “AI: past, present and future”, in which I try to explain what Artificial Intelligence is through a brief history of the domain.
The details of the initiative are available at the following link:
It has been a great pleasure to build the content of the video upon a few books that I have read in recent years and that have helped me to shape my own ideas about AI and its development. I look forward to receiving questions and comments from the audience, and I hope that the video will be of help to the organisers of this important event.
The tutorial on Social AI, organised in collaboration with Monika Harvey and Stacy Marsella, has been a great success. Despite being held on a Saturday afternoon, the tutorial was attended by 50 people, who participated with great questions and comments. The website of the tutorial is here:
The three presentations covered approaches to the three main dimensions of a socially intelligent artificial agent, namely perception, cognition and action. The tutorial was organised in conjunction with Intelligent Virtual Agents, one of the main conferences on the synthesis of human behaviour.
We have published another paper about depression for Cognitive Computation:
N. Aloshban, A. Esposito and A. Vinciarelli, “What You Say Or How You Say It? Depression Detection Through Joint Modelling of Linguistic and Acoustic Aspects of Speech”, accepted for publication by Cognitive Computation, to appear 2020.
The key point of the paper is that depressed and control participants tend to manifest their condition in different ways. In particular, while depressed individuals tend to show their pathology through nonverbal behaviour (how they speak), control ones tend to do it through their lexical choices (what they say). To the best of our knowledge, this is the first time that such an observation has been made in the literature.
Together with researchers (Prof David Bodoff from Haifa University) and entrepreneurs (Olivia Gambelin from Ethical AI and Chuan Hiang from Interaktiv) we have discussed the role of ethics in AI. After our presentations, we had the great opportunity to participate in a radio show on Money FM, moderated by Rachel Kelly. A recording of the event is available at the link above.
I have introduced the general aspects of Social Signal Processing and Social AI, in particular when it comes to the analysis of nonverbal behaviour in human-human and human-machine interactions. I have used the analysis of conflict as a particular example of how AI-driven technologies can be used to understand social and psychological phenomena. The lecture was attended by roughly 300 people.
Both articles revolve around mental health issues and, in particular, around the possibility of inferring the mental conditions of people from their observable behaviour. In the case of depression, the approach takes into account what people say and how they say it. The focus is on the possibility of detecting depression using only a few seconds of speech, something important because depressed people find it difficult to speak for a long time. In the case of attachment, the proposed approach analyses the way children use an interactive system, and the results show that securely attached children tend to interact better with the design of the system, in line with the expectations of Attachment Theory.
It is a great pleasure to say that the publications have been obtained with the help of one of my most senior PhD students (Nujud Aloshban) and an experienced postdoctoral researcher with whom I have been collaborating for five years.
The most interesting aspect of the article is that it uses a classifier to show that three particular behavioural cues (reading speed, number of pauses and average length of the pauses) can improve by close to 20 points the accuracy of a classifier that discriminates between depressed and non-depressed readers. In other words, the paper proposes the use of a classifier as a means to test the association between depression and behavioural cues, as an alternative to the approaches commonly used in psychology for the same purpose (e.g., correlational analysis). In addition, the paper shows that neuroscience findings, in particular on the interplay between depression and the neural mechanisms underlying language processing, can help to improve technology.
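The idea of using a classifier as an association test can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual pipeline: it uses synthetic data in which three simulated cues (standing in for reading speed, number of pauses and average pause length) shift with the label, and compares cross-validated accuracy with and without those cues. An accuracy gain plays the role that a significant correlation would play in a conventional analysis.

```python
# Illustrative sketch: classifier accuracy as an association test
# between a condition label and behavioural cues (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)  # 0 = control, 1 = depressed (simulated)

# Baseline features: pure noise, unrelated to the label.
baseline = rng.normal(size=(n, 3))

# Simulated behavioural cues (reading speed, number of pauses,
# average pause length), shifted according to the label.
cues = rng.normal(size=(n, 3)) + labels[:, None]

acc_baseline = cross_val_score(LogisticRegression(), baseline, labels, cv=5).mean()
acc_with_cues = cross_val_score(
    LogisticRegression(), np.hstack([baseline, cues]), labels, cv=5
).mean()

print(f"baseline accuracy:  {acc_baseline:.2f}")
print(f"with cues accuracy: {acc_with_cues:.2f}")
```

If the cues carry information about the condition, the second accuracy is clearly higher than the first; with no association, the two would stay close to chance level.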
The article proposes an approach for the analysis of the way people gather in public spaces that takes into account not only physical, but also social aspects of interpersonal distance. The work is a nice example of how multiple disciplines (Computer Vision and Social Signal Processing in this case) can collaborate to address a technological problem.
I had the great pleasure to participate as a speaker in the Boundless Podcast, an initiative led by Richard Foster Fletcher to spread awareness about Artificial Intelligence and its latest developments. The discussion revolved around socially intelligent agents and, in particular, around their potential as a means to improve people's lives.
The full recording of my interview is available at the following link:
I have been mentioned in a recent article posted on the blog of Scientific American, where I was asked to give my opinion about the role of context in the recognition of emotions and, more generally, in the understanding of human behaviour:
The article provides a very extensive account of the available opinions about the matter and it confirms how elusive the concept of context is. Anyone who has been working in technology for at least a decade has certainly passed through successive waves of “context-aware” or “context-sensitive” approaches, but the meaning of the word context remains uncertain overall. The blog post of Scientific American tries to shed some light by asking a large number of experts what they think, and the result is highly instructive. From my point of view, the most important aspects are the interplay between the analysis of context and the application of multimodal technologies on one side and, on the other, the ever-growing amount of information that can be gathered through the pervasive network of sensors and technologies that constellates our life.