Facial recognition is here: The iPhone X is just the beginning

Clare Garvie
09/28/2017

With news that Apple's latest smartphone will use facial recognition technology, now is not the time to become complacent about privacy, writes Clare Garvie

Photo by Soroush Karimi on Unsplash

I have a confession to make. I’m a privacy lawyer who researches the risks of face recognition technology—and I will be buying the new iPhone.

Apple’s next-generation smartphone will use face recognition, thanks to infrared and 3D sensors in its front-facing camera. Reports indicate that the face scan and unlock will be almost instantaneous and require no button presses; the system is always “on” and ready to read your face. Android users can expect similar face-unlock features.

For the millions of people who will soon depend on face recognition to check their email, send a text, or make a call, it will be quick, easy to use, and, yes, pretty cool. But as we grow accustomed to fast and accurate face recognition, we cannot become complacent about the serious privacy risks it often poses, or assume that all its applications are alike.


Face recognition is already used around the world to examine, investigate, and monitor. In China, police use face recognition to identify and publicly shame people for the crime of jaywalking. In Russia, face recognition has been used to identify anti-corruption protesters, exposing them to intimidation or worse.

In the UK, face recognition was used at an annual West Indian cultural festival to identify revelers in real-time. In the United States, more than half of all American adults are in a face recognition database that can be used for criminal investigations, simply because they have a driver’s license.

Governments are not the only users of face recognition. Retailers use the technology in their stores to identify suspected shoplifters. Social media applications increasingly integrate face recognition into their user experience; one application in Russia allows strangers to find out who you are just by taking your photo.

Different uses of face recognition produce different levels of accuracy, too. The iPhone’s face recognition system may reliably ID us, while also rejecting strangers and nosy friends. We cannot assume the same from other systems.

The more variables in the photo taken—camera distance and angle, lighting, facial pose, photo resolution—the lower the accuracy will be. As a consequence, when used for surveillance, face recognition will perform far less accurately, and make more mistakes, than when used to unlock a smartphone. 

Law enforcement systems compensate for this by lowering the match threshold—how much two people have to look alike to be considered a “match”. At a recent biometrics industry conference, one company demonstrated its real-time face recognition surveillance solution, advertised as being able to pick a suspect out of a crowd, by scanning the face of everyone walking by its cameras. 

The system was designed so that people who looked 50 per cent or more similar to the wanted suspect were flagged as a possible match. This means that a vast number of “possible matches” will be completely innocent people. These are the face recognition systems where a mistake could mean you are investigated, if not arrested and charged, for a crime you didn’t commit.
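To see why a 50 per cent threshold sweeps in so many innocent people, consider a minimal sketch in Python. It uses invented random “embeddings” rather than any vendor’s real face recognition pipeline, and the threshold values are illustrative only: every face in the crowd is scored against the suspect, and anyone scoring at or above the threshold is flagged.

    import numpy as np

    rng = np.random.default_rng(0)

    def similarity(a, b):
        # Cosine similarity between two face embeddings, rescaled to [0, 1].
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return (cos + 1) / 2

    suspect = rng.normal(size=128)          # embedding of the wanted person
    crowd = rng.normal(size=(10_000, 128))  # embeddings of innocent passers-by

    for threshold in (0.9, 0.5):
        flagged = sum(similarity(suspect, face) >= threshold for face in crowd)
        print(f"threshold {threshold:.0%}: {flagged} of {len(crowd)} flagged")

Because the crowd here is unrelated to the suspect, their similarity scores cluster around the midpoint: a 90 per cent threshold flags almost no one, while a 50 per cent threshold flags roughly half the crowd. In a real system the scores are not random, but the direction of the trade-off is the same: every notch the threshold drops, more innocent faces become “possible matches”.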

At the festival in London late last month, the real-time face recognition system reportedly led to 35 misidentifications and only one “correct” match—an innocent person who was not wanted by the police after all. Officials at the New York Police Department have acknowledged at least five misidentifications by their face recognition system. 

Photo by Thomas William on Unsplash

If the iPhone’s new system makes this many mistakes, or unlocks for anyone who looks 50 per cent or more similar to the owner, no one will consider it acceptable security for our personal information. Fortunately, it almost surely won’t. But for many of the systems we are already subject to, deployments of face recognition we never had the choice to opt into, mistakes and misidentifications are unavoidable.

As the smartphone of record, the iPhone will almost inevitably pave the way for consumers to accept face recognition elsewhere, even if those other systems are inaccurate, privacy-invasive, or otherwise problematic. It’s easy to be comfortable with what’s familiar.

Think about all the places you show your face each day. Should retailers, law enforcement officers, or strangers have the ability to capture your photo, turn it into a biometric, and use it to identify, track, or surveil you? Are these uses of face recognition worth the erosion of our privacy and the persistent risks of misidentification? 

Even as we choose to explore the conveniences face recognition can offer, we must be vigilant about its risks. I will, like many people in the coming months, opt in to a face scan every time I want to check the weather, draft a tweet, or make a rare actual phone call.

But even as we embrace its conveniences, we should remain suspicious of the many ways face recognition is used today. Face recognition may well be inevitable. Its risks shouldn’t have to be.

Clare Garvie is an associate with the Center on Privacy & Technology at Georgetown Law and co-author of The Perpetual Line-Up: Unregulated Police Face Recognition in America.

This article was originally published in The Guardian and is reproduced with permission from the author.

