Masking the Algorithm

How face masks are posing challenges to facial recognition, and what can be done

Facial recognition algorithms are facing a tough challenge during Covid-19, when everyone is forced to wear face masks. Masks break recognition algorithms in much the same way that the facial screen locks on our mobile phones refuse to unlock when we have a mask on. Some companies are reported to have updated their algorithms by photoshopping masks onto images from existing datasets, but this shortcut can introduce significant errors.

Facial recognition has become more widespread and accurate in recent years, as an Artificial Intelligence technology called deep learning made computers much better at interpreting images. Governments and private companies use facial recognition to identify people at workplaces, schools, and airports, among other places.

The US National Institute of Standards and Technology (NIST) has reported that face masks covering the mouth and nose cause error rates of between 5% and 50% in facial recognition algorithms. It also found that black masks were more likely to cause errors than blue ones, and that the more of the nose a mask covered, the harder it was for the algorithms to identify the face. Since virtually all facial recognition algorithms were developed before COVID-19, they now need to be reworked to handle masked faces.

Facial recognition algorithms, such as those tested by NIST, work by measuring the distances between features in a target’s face. Masks reduce the accuracy of these algorithms by hiding most of these features, although some remain visible. This is slightly different from how facial recognition works on iPhones, for example, which use depth sensors for extra security, ensuring that the algorithm can’t be fooled by showing the camera a flat 2D picture (a danger not present in the scenarios NIST is concerned with).
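The distance-measuring idea above can be sketched in a few lines. This is a deliberately simplified illustration, not any vendor's actual method: the landmark names, coordinates, and the flat list of pairwise distances are all assumptions made for the example. It shows why a mask hurts: hiding the nose and mouth discards most of the measurable distances.

```python
import math

# Hypothetical facial landmarks; real systems detect dozens of such
# points automatically from the image.
LANDMARKS = ["left_eye", "right_eye", "nose_tip", "mouth_left", "mouth_right"]

def feature_vector(points, visible):
    """All pairwise distances between the landmarks that are visible."""
    names = [n for n in LANDMARKS if n in visible]
    vec = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            (x1, y1), (x2, y2) = points[names[i]], points[names[j]]
            vec.append(math.hypot(x2 - x1, y2 - y1))
    return vec

face = {"left_eye": (30, 40), "right_eye": (70, 40), "nose_tip": (50, 60),
        "mouth_left": (38, 80), "mouth_right": (62, 80)}

full = feature_vector(face, set(LANDMARKS))               # all 5 landmarks
masked = feature_vector(face, {"left_eye", "right_eye"})  # mask hides the rest
print(len(full), len(masked))  # 10 distances shrink to just 1
```

With five landmarks there are ten pairwise distances to compare; once a mask leaves only the eyes visible, a single distance remains, which is far less discriminative.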

Although there is plenty of anecdotal evidence of face masks thwarting facial recognition, this NIST study is particularly definitive. NIST is the government agency tasked with assessing the accuracy of these algorithms (along with many other systems) for the US federal government, and its rankings of vendors are extremely influential.

Notably, NIST’s report tested only a single type of facial recognition, known as one-to-one matching. This is the procedure used at border crossings and passport control, where an algorithm checks whether the target’s face matches their ID. It is different from the kind of facial recognition used in mass surveillance, where a crowd is scanned to find matches against faces in a database, known as a one-to-many system.
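The difference between the two modes can be made concrete with a small sketch. The embeddings, threshold, and helper names here are invented for illustration; real systems compare high-dimensional feature vectors and tune the threshold on evaluation data.

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

THRESHOLD = 0.5  # illustrative value, not from any real system

def verify(probe, claimed_template):
    """One-to-one: does the probe match the single claimed identity?"""
    return distance(probe, claimed_template) < THRESHOLD

def identify(probe, database):
    """One-to-many: find the closest enrolled identity, if close enough."""
    name, template = min(database.items(),
                         key=lambda kv: distance(probe, kv[1]))
    return name if distance(probe, template) < THRESHOLD else None

db = {"alice": [0.1, 0.9], "bob": [0.8, 0.2]}  # toy enrolled templates
probe = [0.15, 0.85]                           # toy captured face
print(verify(probe, db["alice"]), identify(probe, db))  # True alice
```

One-to-one answers a yes/no question against one template, while one-to-many must beat every entry in the database, which is one reason it is the harder, more error-prone task.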

Although NIST’s report does not cover one-to-many systems, these are generally considered more error-prone than one-to-one algorithms. Picking out faces in a crowd is harder because you can’t control the angle or lighting on the face, and the resolution is generally lower. This suggests that if face masks are breaking one-to-one systems, they are likely breaking one-to-many algorithms at least as often, and probably more.

This is also consistent with reports circulating inside the US government, according to media accounts. An internal bulletin from the US Department of Homeland Security earlier this year said the agency was concerned about the “potential impacts that widespread use of protective masks could have on security operations that incorporate face recognition systems.”

For privacy advocates, this will be welcome news. Many have warned about the rush by governments around the world to embrace facial recognition systems, despite the chilling effects such technology has on civil liberties, and the widely-recognized racial and gender biases of these systems, which tend to perform worse on anyone who is not a white male.

Meanwhile, companies that develop and sell facial recognition technology have been rapidly adapting to this new world, designing algorithms that identify faces by focusing only on the area around the eyes. Leading vendors, such as the Russian firm NtechLab, which has deployed some 150,000 cameras in Moscow, say these new algorithms can identify faces even when they are hidden behind a balaclava, a close-fitting head covering that usually leaves only the eyes, nostrils, and lips exposed. Such claims have yet to be objectively verified, though: they rest mostly on internal reports, often used as showcase pieces to present flattering results.
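One way to picture the eye-region approach is as a preprocessing step that crops the image to a band around the eyes before extracting features, so a mask over the lower face simply falls outside the input. The function, row coordinates, and toy "image" below are all hypothetical, a minimal sketch of the idea rather than any vendor's pipeline.

```python
def crop_periocular(image, eye_row, band=10):
    """Keep only the horizontal band of pixel rows around the eyes.

    image: list of pixel rows; eye_row: detected row of the eyes
    (here assumed given; real systems locate it with a detector).
    """
    top = max(0, eye_row - band)
    return image[top:eye_row + band]

# Toy 40-row "image"; rows near the bottom would hold nose and mouth.
image = [[r * 100 + c for c in range(5)] for r in range(40)]
crop = crop_periocular(image, eye_row=15, band=5)
print(len(crop))  # only 10 rows survive; the maskable region is discarded
```

Because everything below the eye band is dropped for every face, masked and unmasked probes are compared on the same evidence, at the cost of having fewer features to distinguish people.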

NIST plans to test facial recognition algorithms specially tuned for mask wearers later this year, and also intends to evaluate how well one-to-many systems cope. Despite the problems caused by masks, the agency expects the technology to persevere. So do we, because in a post-pandemic world, face masks are here to stay.
