Interview for The Telegraph

I have been interviewed by The Telegraph (https://www.telegraph.co.uk) about the findings of the Office for National Statistics, according to which the number of jobs at risk of being replaced by AI and technology has decreased compared to 2011:

https://www.telegraph.co.uk/news/2019/03/25/rise-robots-fears-overestimated-new-employment-data-reveals/

It has been a good opportunity to state publicly that we are probably at the end of the hype cycle and that it is now time to be more realistic about the expectations we develop around the potential of AI and related technologies. Here are a few excerpts from the interview in which I am quoted more or less literally:

  • “When something like technology becomes fashionable, there’s a rise in major expectations, we reach a peak and then it comes back down to a more realistic expectation,”
  • “There have been major advancements in technology which have allowed us to do a lot of very good things but not as much as we were promised, so now people understand that all the promises about robots going to take over are not going to happen.”

It is perhaps against my interest to be honest about the technologies I investigate, but at the end of the day my job is to serve the truth, not my interests.

Interview for “The Week”

The magazine “The Week” has published an article featuring my comments about the latest release of the Office for National Statistics, according to which the number of jobs at risk of replacement by AI technologies is decreasing:

https://www.theweek.co.uk/100408/automation-could-replace-15-million-uk-jobs

While my opinion is that the number of jobs considered at risk is decreasing because the expectations about AI performance are being revised to become more realistic, the magazine suggests that the numbers might be decreasing because a lot of jobs have already been replaced. The truth probably lies somewhere in the middle.

According to its publisher, the online version of the magazine reaches 2.1 million people per month in the UK.


Joining the Editorial Board of the IEEE Transactions on Affective Computing

I have been invited to join the Editorial Board of the IEEE Transactions on Affective Computing, the most important publication venue for any researcher investigating technologies dealing with social and affective signals:

https://www.computer.org/csdl/journal/ta/misc/14383?title=About&periodical=IEEE%20Transactions%20on%20Affective%20Computing

After publishing a large number of papers in the journal and benefiting from the great work of many Associate Editors, it is my turn to take on the difficult role of deciding which works deserve publication and which do not. The impact factor of the journal is 4.58, a value that reflects its reputation in the scientific community. To my great pleasure, many of the other members of the Editorial Board are very good friends.

Special Session at Interspeech 2019

Thanks to the great work of Anna Esposito, I have the pleasure of joining the organising committee of the “Special Session on Dynamics of Emotional Speech Exchanges in Multimodal Communication”, to be held at Interspeech 2019:

https://www.interspeech2019.org/program/special_sessions_and_challenges/

The topics covered in the special session can be described as follows: “Research devoted to understanding the relationship between verbal and nonverbal communication modes, and investigating the perceptual and cognitive processes involved in the coding/decoding of emotional states, is particularly relevant in the fields of Human-Human and Human-Computer Interaction.”

The special session has been made possible thanks to the H2020-funded project "Empathic" (http://www.empathic-project.eu/).

Appearance in “Forbes”

The business magazine Forbes features an article about the 16 Centres for Doctoral Training announced by UKRI on February 21st:

https://www.forbes.com/sites/samshead/2019/02/20/uk-government-to-fund-ai-university-courses-with-115m/#4fdc239c430d

The article explains that the UK government aims to keep pace with the USA and China in the AI race: “AI is poised to become the most significant technology for a generation but there are only so many people that know how to develop the technology, which could have a huge impact on industries such as healthcare, energy, and automotive.”


New Centre for Doctoral Training

I have been awarded one of the 16 UKRI Centres for Doctoral Training in Artificial Intelligence:

https://www.ukri.org/news/200m-to-create-a-new-generation-of-artificial-intelligence-leaders/

It will be a major opportunity for me to collaborate with 30 world-leading colleagues and 15 major industrial partners on the training of 50 PhD students. Together, we will investigate the nature of social intelligence in humans and machines. The project takes place at the University of Glasgow and involves the School of Computing Science, the School of Psychology and the Institute of Neuroscience and Psychology.


Interview for Voices in AI

I have been interviewed for Voices in AI, a series of conversations between Byron Reese and experts in Artificial Intelligence:

https://voicesinai.com/episode/episode-78-a-conversation-with-alessandro-vinciarelli/

The interview focused on the interplay between human psychology and machine intelligence and, in particular, on how machines can learn to “read the mind” of their users. After outlining the main applications (and the many emerging companies active in the area), the attention shifted to the significant ethical issues underlying the development of these technologies. The main point we made is that the danger does not come from technologies, but from people. Therefore, it is through societal choices and political regulation that socially intelligent Artificial Intelligence will be of benefit to people. Many thanks to Neurodata Lab for creating the opportunity for this interview.


New Article on Speech Perception

My article “Machine-Based Decoding of Human Voices and Speech” has been published in “The Oxford Handbook of Voice Perception”, edited by S.Fruholz and P.Belin. The chapter provides a general introduction to the main approaches aimed at speech recognition and at the inference of speech-based social perceptions. After showing that our very physiology is shaped around the perception of human voices, the chapter argues that speech is probably the signal most commonly studied and analysed in the technological literature. Furthermore, the chapter introduces the main approaches adopted to automatically transcribe speech signals (a task called speech recognition) and to infer from them different types of traits and psychological phenomena (personality, emotions, etc.).

Article Accepted at CHI 2019

The article “Automating the Administration and Analysis of Psychiatric Tests: The Case of Attachment in School Age Children” (G.Roffo, D.B.Vo, A.Sorrentino, M.Rooksby, M.Tayarani, H.Minnis, S.Brewster and A.Vinciarelli) has been accepted for presentation at the next ACM CHI Conference on Human Factors in Computing Systems (CHI 2019). The abstract of the article is as follows:

This article presents an interactive system aimed at administering, without the supervision of professional personnel, the Manchester Child Attachment Story Task (a psychiatric test for the assessment of attachment in children). The main goal of the system is to collect, through an interaction process, enough information to allow a human assessor to manually identify the attachment of children. The experiments show that the system successfully performs such a task in 87.5% of the cases (105 of the 120 children involved in the study). In addition, the experiments show that an automatic approach based on deep neural networks can map the information that the system collects, the same that is provided to the human assessors, into the attachment condition of the children. The outcome of the system matches the judgment of the human assessors in 82.8% of the cases (87 of the 105 children for which the system has successfully administered the test). To the best of our knowledge, this is the first time an automated tool has been successful in measuring attachment. This work has significant implications for psychiatry as it allows professionals to assess many more children and direct healthcare resources more accurately and efficiently to improve mental health.
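As a purely illustrative sketch, and not the model used in the article (whose architecture and features are not described in the abstract), the classification step can be pictured as a small feed-forward network mapping the information collected during the automated administration to an attachment label; every name, dimension and label below is a hypothetical placeholder.

```python
# Hypothetical sketch of the classification step: a small feed-forward network
# mapping features collected during the automated test administration to an
# attachment label. Feature names, dimensions, labels and data are placeholders,
# not the model described in the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_children, n_features = 120, 32            # e.g., response, timing and interaction features
X = rng.normal(size=(n_children, n_features))
y = rng.integers(0, 2, size=n_children)     # 0 = secure, 1 = insecure (placeholder labels)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```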

Multimodality Course at University of Fribourg

I had the chance to give a course on multimodality at the University of Fribourg (Switzerland) in the framework of the Certificate of Advanced Studies in Interaction Science and Technology:

http://human-ist.unifr.ch/cas/courses/social-signal-and-multimodal-processing

It has been an intensive day during which I have been teaching for six hours to a highly interactive class that has posed a large number of interesting questions. After introducing the concept of multimodality in disciplines like psychology and biology, I have shown how Artificial Intelligence deals with the phenomenon. In particular, I have shown how early and late fusion (the two basic methodologies for the development of multimodal approaches) can be thought of as modifications of Bayes Decision Theory. To complete the day, I have shown how such methodologies have been applied to two interesting problems, namely the analysis of attachment in children and the inference of personality traits from simple vocalisations like “ehm” or “uhm”.
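For readers unfamiliar with the two schemes, the minimal sketch below illustrates the difference; the data, features and classifiers are hypothetical placeholders and not the material used in the course. Early fusion concatenates the feature vectors of all modalities and feeds them to a single classifier, while late fusion trains one classifier per modality and combines their posteriors, which approximates the Bayes decision under the assumption that the modalities are conditionally independent given the class.

```python
# Illustrative sketch: early vs. late fusion of two modalities (say, audio and
# video features). Data, dimensions and classifiers are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, size=n)                               # binary class labels
x_audio = y[:, None] + rng.normal(scale=1.0, size=(n, 4))    # modality 1 features
x_video = y[:, None] + rng.normal(scale=1.5, size=(n, 6))    # modality 2 features

# Early fusion: concatenate the feature vectors of all modalities and model
# p(class | x_audio, x_video) with a single classifier.
early = LogisticRegression().fit(np.hstack([x_audio, x_video]), y)

# Late fusion: one classifier per modality, then combine the posteriors.
# If the modalities are assumed conditionally independent given the class,
# Bayes' rule reduces (up to normalisation) to the product of the per-modality
# posteriors divided by the prior, so the argmax of the product gives the decision.
clf_a = LogisticRegression().fit(x_audio, y)
clf_v = LogisticRegression().fit(x_video, y)
posterior = clf_a.predict_proba(x_audio) * clf_v.predict_proba(x_video)
late_pred = posterior.argmax(axis=1)

print("early fusion accuracy:", early.score(np.hstack([x_audio, x_video]), y))
print("late fusion accuracy:", (late_pred == y).mean())
```

Both accuracies are computed on the training data, so the sketch only illustrates the structure of the two pipelines, not a proper evaluation.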