I have been mentioned in a recent article posted on the blog of Scientific American, where I was asked to give my opinion about the role of context in the recognition of emotions and, more generally, in the understanding of human behaviour:
The article provides a very extensive account of current opinions on the matter and it confirms how elusive the concept of context is. Anyone who has worked in technology for at least a decade has certainly lived through successive waves of “context-aware” or “context-sensitive” approaches, but the meaning of the word context remains overall uncertain. The Scientific American blog post tries to shed some light by asking a large number of experts what they think, and the result is highly instructive. From my point of view, the most important aspects are the interplay between the analysis of context and the application of multimodal technologies on one side and, on the other, the ever-growing amount of information that can be gathered through the pervasive network of sensors and technologies that constellates our lives.
On September 3rd, I co-chaired with Olga Perepelkina the first International Workshop on Social and Emotion AI for Industry, organised in collaboration with Neurodata Lab:
The workshop took place in Cambridge, in conjunction with the IEEE International Conference on Affective Computing and Intelligent Interaction and, according to the organisers of the conference, it attracted the largest number of participants among the workshops organised on the same day.
The overall goal of the workshop was to gather academic researchers and industry practitioners active in social and emotion AI, the domains aimed at developing technologies that deal with affective and interactional aspects of their users. The program included refereed papers (50% acceptance rate) as well as keynote presentations given by speakers active in both academia and industry.
We learned about the use of AI-driven technologies to help people affected by different neurological diseases thanks to Emteq and its CEO Charles Nduka, about the effectiveness of AI technologies for the affective analysis of audio thanks to Audeering and its CEO Dagmar Schuller, and about Microsoft’s work on empathic computing thanks to Daniel McDuff.
From an academic point of view, the workshop benefited from talks by Catherine Pelachaud (an introduction to impression management for virtual agents), Antonio Camurri (an introduction to his new project on next-generation motion capture systems) and Andrew McStay (an ethical perspective on AI).
Since I study nonverbal communication, I have always been fascinated by our innate tendency to leak information about ourselves through our behaviour. The new article “What an ‘Ehm’ Leaks about You: Mapping Fillers into Personality Traits with Quantum Evolutionary Feature Selection Algorithms“, just accepted for publication by the IEEE Transactions on Affective Computing, is a further step in that direction. The work shows that one of our most minute behavioural cues, the short vocalisations like “ehm” and “uhm” that we utter when we do not know what to say next, conveys information about our personality traits. The results have been obtained by analysing around 3,000 fillers uttered by 120 people.
The work has been done in collaboration with Mohammad Tayarani, a former postdoc of mine who recently obtained a fellowship at the University of Hertfordshire, and Anna Esposito, one of my very first mentors and a longtime friend and colleague. Mohammad developed a new feature selection approach that allows one to spot the features (physical measurements extracted from fillers) most likely to account for personality. Anna contributed her deep knowledge of speech processing and her extensive experience in interdisciplinary work between Computing Science and Psychology. The IEEE Transactions on Affective Computing is in the top 5% of the Scimago Ranking and has an impact factor of 6.28.
I have published a new article that analyses the preferences of roughly 5,000 prospective students attending one of the Open Days organised by the University of Glasgow:
The full citation of the paper is as follows: A. Vinciarelli, W. Riviera, F. Dalmasso, S. Raue, C. Abeyratna, “What Do Prospective Students Want? An Observational Study of Preferences About Subject of Study in Higher Education“, in “Innovations in Big Data Mining and Embedded Knowledge”, A. Esposito, A. M. Esposito, L. C. Jain (eds.), pp. 83-97.
The article has been written in collaboration with Bizvento, a start-up founded by a few students of the School of Computing Science of our University, and it was supported by The Data Lab. It is the very first time I have written a paper with a sociological slant (it was a chance for me to read some sociological literature about the relationship between family background and the education level people attain). In parallel, it was an interesting exercise in how much information can be obtained by crossing multiple publicly available repositories and datasets.
I am having the great pleasure of attending WIRN (the Italian Workshop on Neural Networks), where I have been given the chance to deliver a keynote talk. The workshop gathers experts on Artificial Intelligence from all over Italy (and the rest of the world) to discuss the latest advances in machine learning and related technologies. For me it is a chance to have interesting discussions with the members of a vibrant community and, most importantly, to meet a lot of talented young researchers taking their first steps in the friendly environment that WIRN provides. This year, my talk was on attachment and its relationship with Artificial Intelligence (the problem of automatically diagnosing the attachment condition of a child) and Human-Computer Interaction (the problem of finding traces of the attachment condition in the way children interact with software).
According to AMiner, a search engine supported by the Chinese Government, I am one of the top 100 researchers in Multimedia for the decade 2007-2017. According to the message they sent me (see below), this results from the analysis of 230 million documents collected across 368,402 venues.
Dear Alessandro Vinciarelli,
We are pleased to inform you that you have been recognized as a Most Influential Scholar for your outstanding and vibrant contributions to the field of Multimedia. Congratulations!
In 2018, the AMiner Most Influential Scholar List names the world’s top-cited research scholars from the fields of Artificial Intelligence (AI). The list is conferred in recognition of outstanding technical achievements with lasting contribution and impact to the research community. The 2018 winners are among the most-cited scholars from the top venues of their respective subject fields in recent ten years (between 2007 and 2017). Recipients are automatically determined by a computer algorithm deployed in the AMiner system that tracks and ranks scholars based on citation counts collected by top-venue publications.
AMiner (https://aminer.org) is a free online service for academic social network analysis and mining. As of 2018, the system has collected information on over 136 million researchers, 230 million publication papers, and 368,402 venues. The system has been in operation on the Internet since 2006 and has been visited by nearly 8.32 million independent IP accesses. It provides various search/mining services for publishers, NSFC, and research venues such as ACM/IEEE Transactions, ACM SIGKDD, ACM WSDM, and IEEE ICDM. Further details can be found online at the AMiner Wikipedia page: https://en.wikipedia.org/wiki/Arnetminer.
As part of the recognition, your research profile extracted from the AMiner database is being featured this month (March, 2019) on AMiner homepage. The full list of the most influential scholars can be found here: https://www.aminer.cn/ai10/multimedia. For your information, you can sign up for an AMiner account and keep your personal profile and publications updated (https://aminer.org/profile/5405e3cbdabfae450f3de5dd).
I have been interviewed by The Telegraph (https://www.telegraph.co.uk) about the findings of the Office for National Statistics, according to which the number of jobs at risk of being replaced by AI and technology has decreased compared to 2011:
It has been a good chance to state publicly that we are probably at the end of the hype cycle and that it is now time to be more realistic about the expectations we develop about AI and related technologies. Here are a few excerpts from the interview in which I am quoted more or less literally:
- “When something like technology becomes fashionable, there’s a rise in major expectations, we reach a peak and then it comes back down to a more realistic expectation,”
- “There have been major advancements in technology which have allowed us to do a lot of very good things but not as much as we were promised, so now people understand that all the promises about robots going to take over are not going to happen.”
It is perhaps against my interest to be honest about the technologies I investigate, but at the end of the day my job is to serve the truth, not my interests.
The magazine “The Week” has published an article featuring my comments about the latest release of the Office for National Statistics, according to which the number of jobs at risk of being replaced by AI technologies is decreasing:
While my opinion is that the number of jobs considered at risk is decreasing because expectations about AI performance are being revised to become more realistic, the magazine suggests that the numbers might be decreasing because many jobs have already been replaced. The truth probably lies somewhere in the middle.
According to its publisher, the online version of the magazine reaches 2.1 million people per month in the UK.
I have been invited to join the Editorial Board of the IEEE Transactions on Affective Computing, the most important publication venue for any researcher investigating technologies dealing with social and affective signals:
After publishing a large number of papers in the journal and benefitting from the great work of many Associate Editors, it is my turn to contribute in the difficult role of discriminating between the works that deserve publication and those that do not. The impact factor of the journal is 4.58, a value that reflects its reputation in the scientific community. To my great pleasure, I have a lot of very good friends among the other members of the Editorial Board.
Thanks to the great work of Anna Esposito, I have the pleasure of joining the organising committee of the “Special Session on Dynamics of Emotional Speech Exchanges in Multimodal Communication“, to be held at Interspeech 2019:
The topics covered in the special session can be described as follows: “Research devoted to understanding the relationship between verbal and nonverbal communication modes, and investigating the perceptual and cognitive processes involved in the coding/decoding of emotional states is particularly relevant in the fields of Human-Human and Human-Computer Interaction.“
The special session has been made possible thanks to the H2020-funded project “Empathic” (http://www.empathic-project.eu/).