Interview for “The Week”

The magazine “The Week” has published an article featuring my comments on the latest figures released by the Office for National Statistics, according to which the number of jobs at risk of being replaced by AI technologies is decreasing:

https://www.theweek.co.uk/100408/automation-could-replace-15-million-uk-jobs

While my opinion is that the number of jobs considered at risk is decreasing because expectations about AI performance are being revised to become more realistic, the magazine suggests that the numbers might be decreasing because many jobs have already been replaced. The truth probably lies somewhere in the middle.

According to its publisher, the online version of the magazine reaches 2.1 million people per month in the UK.

 

Joining the Editorial Board of the IEEE Transactions on Affective Computing

I have been invited to join the Editorial Board of the IEEE Transactions on Affective Computing, the most important publication venue for any researcher investigating technologies dealing with social and affective signals:

https://www.computer.org/csdl/journal/ta/misc/14383?title=About&periodical=IEEE%20Transactions%20on%20Affective%20Computing

After publishing a large number of papers in the journal and benefiting from the great work of many Associate Editors, it is now my turn to contribute by taking on the difficult role of deciding which submissions deserve publication and which do not. The impact factor of the journal is 4.58, a value that reflects its reputation in the scientific community. To my great pleasure, many of the other members of the Editorial Board are very good friends.

Special Session at Interspeech 2019

Thanks to the great work of Anna Esposito, I have the pleasure of joining the organising committee of the “Special Session on Dynamics of Emotional Speech Exchanges in Multimodal Communication”, to be held at Interspeech 2019:

https://www.interspeech2019.org/program/special_sessions_and_challenges/

The topics covered in the special session can be described as follows: “Research devoted to understanding the relationship between verbal and nonverbal communication modes, and investigating the perceptual and cognitive processes involved in the coding/decoding of emotional states, is particularly relevant in the fields of Human-Human and Human-Computer Interaction.”

The special session has been made possible thanks to the H2020-funded project “Empathic” (http://www.empathic-project.eu/).

Appearance in “Forbes”

The business magazine Forbes has published an article about the 16 Centres for Doctoral Training announced by UKRI on February 21st:

https://www.forbes.com/sites/samshead/2019/02/20/uk-government-to-fund-ai-university-courses-with-115m/#4fdc239c430d

The article explains that the UK government aims to keep pace with the USA and China in the AI race: “AI is poised to become the most significant technology for a generation but there are only so many people that know how to develop the technology, which could have a huge impact on industries such as healthcare, energy, and automotive.”

 

New Centre for Doctoral Training

I have been awarded one of the 16 UKRI Centres for Doctoral Training in Artificial Intelligence:

https://www.ukri.org/news/200m-to-create-a-new-generation-of-artificial-intelligence-leaders/

It will be a major opportunity for me to collaborate with 30 world-leading colleagues and 15 major industrial partners in training 50 PhD students. Together, we will investigate the nature of social intelligence in humans and machines. The project takes place at the University of Glasgow and involves the School of Computing Science, the School of Psychology and the Institute of Neuroscience and Psychology.

 

Interview for Voices in AI

I have been interviewed for Voices in AI, a series of conversations between Byron Reese and experts in Artificial Intelligence:

https://voicesinai.com/episode/episode-78-a-conversation-with-alessandro-vinciarelli/

The interview focused on the interplay between human psychology and machine intelligence and, in particular, on how machines can learn to “read the minds” of their users. After outlining the main applications (and the many emerging companies active in the area), the conversation shifted to the significant ethical issues underlying the development of these technologies. The main point we made is that the danger does not come from the technology itself, but from people. Therefore, it is through societal choices and political regulation that socially intelligent Artificial Intelligence will benefit people. Many thanks to Neurodata Lab for creating the opportunity for this interview.

 

New Article on Speech Perception

My article “Machine-Based Decoding of Human Voices and Speech” has been published in “The Oxford Handbook of Voice Perception”, edited by S. Fruholz and P. Belin. The chapter provides a general introduction to the main approaches to speech recognition and to the inference of speech-based social perceptions. After showing that our very physiology is shaped around the perception of human voices, the chapter argues that speech is probably the most commonly studied and analysed signal in the technological literature. Furthermore, it introduces the main approaches adopted to automatically transcribe speech signals (a task called speech recognition) and to infer from them different types of traits and psychological phenomena (personality, emotions, etc.).