Nick Heath from TechRepublic interviewed me about the challenges of automatic face recognition in real-world settings. I pointed out the difficulty of training AI-driven systems when it is impossible to know in advance what kind of variability will be encountered in operational conditions. Interestingly enough, Chris Bishop, interviewed in the same article, focused on exactly the same issue. During the interview, I noticed the reaction journalists often have when they learn that AI is not an almighty technology and that, in many cases, we are still far from being able to use it reliably in everyday conditions.
Here is the excerpt from the article where I am quoted:
In facial-recognition systems, accuracy can suffer when the images the system has been trained on aren’t sufficiently varied — in terms of factors like the individuals’ pose, lighting, shadows, obstructions, glasses, facial hair, and the resolution of the image.
“The learning process allows the machine to be robust to the variability that is well represented in the training material, but not to the variability that is not represented,” said Alessandro Vinciarelli, professor in the school of computing science at the University of Glasgow.
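The point in the quote can be illustrated with a toy sketch. The code below is not real face recognition: it uses synthetic two-dimensional "embeddings" for two identities and a simple nearest-centroid classifier, with a lighting change modelled as a hypothetical shift of the embeddings. The classifier performs well on the condition represented in its training data and degrades on the condition that is not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D "face embeddings" for two identities, lighting condition A.
n = 100
id0_train = rng.normal([0.0, 0.0], 0.5, size=(n, 2))
id1_train = rng.normal([3.0, 0.0], 0.5, size=(n, 2))

# Nearest-centroid classifier, trained only on condition A.
c0, c1 = id0_train.mean(axis=0), id1_train.mean(axis=0)

def classify(x):
    d0 = np.linalg.norm(x - c0, axis=1)
    d1 = np.linalg.norm(x - c1, axis=1)
    return (d1 < d0).astype(int)  # 0 -> identity 0, 1 -> identity 1

def accuracy(x0, x1):
    preds = np.concatenate([classify(x0), classify(x1)])
    labels = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])
    return float((preds == labels).mean())

# Test 1: same condition as training (variability well represented).
acc_seen = accuracy(rng.normal([0.0, 0.0], 0.5, size=(n, 2)),
                    rng.normal([3.0, 0.0], 0.5, size=(n, 2)))

# Test 2: a lighting change shifts every embedding by the same offset
# (variability NOT represented in the training material).
shift = np.array([2.5, 0.0])
acc_unseen = accuracy(rng.normal([0.0, 0.0], 0.5, size=(n, 2)) + shift,
                      rng.normal([3.0, 0.0], 0.5, size=(n, 2)) + shift)

print(f"accuracy, seen condition:   {acc_seen:.2f}")
print(f"accuracy, unseen condition: {acc_unseen:.2f}")
```

Under the seen condition the accuracy is near perfect, while under the unseen shift it collapses to roughly chance level: the classifier is robust only to the variability its training material exposed it to, which is exactly the point made above.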