Critics are wrong to slam iPhone X's new face tech
Apple's new iPhone X reads faces. And privacy pundits are gnashing their teeth over it.
The phone's complex TrueDepth image system includes an infrared projector, which casts 30,000 invisible dots, and an infrared camera, which checks where in three-dimensional space those dots land. With a face in view, artificial intelligence on the phone figures out what's going on with that face by processing the locations of the dots.
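The geometry behind this kind of structured-light sensing can be sketched with simple stereo triangulation: the projector and camera form a fixed pair, so how far each dot appears to shift encodes its distance. This is an illustrative model only, not Apple's actual TrueDepth implementation, and the focal length and baseline values below are invented for the example:

```python
# Toy structured-light depth model. The projector and infrared camera
# sit a fixed distance apart, so a projected dot's apparent shift
# (disparity) in the camera image encodes the depth of the surface it
# landed on. Numbers are illustrative, not TrueDepth parameters.

FOCAL_LENGTH_PX = 600.0   # camera focal length in pixels (assumed)
BASELINE_M = 0.01         # projector-to-camera distance in meters (assumed)

def depth_from_disparity(disparity_px: float) -> float:
    """Classic triangulation: nearer surfaces shift dots more."""
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# A dot displaced 12 px corresponds to a surface about 0.5 m away.
print(round(depth_from_disparity(12.0), 3))  # 0.5
```

Repeating that calculation for all 30,000 dots yields the rough 3-D map of the face that the on-device A.I. then interprets.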
Biometrics in general and face recognition in particular are touchy subjects among privacy campaigners. Unlike a password, you can't change your fingerprints, or your face.
Out of the box, the iPhone X's face-reading system does three jobs: Face ID (security access), Animoji (avatars that mimic users' facial expressions), and also something you might call "eye contact," to figure out if the user is looking at the phone (to prevent sleep mode during active use).
A.I. looks at the iPhone X's projected infrared dots and, depending on the circumstances, can check: Is this the authorized user? Is the user smiling? Is the user looking at the phone?
Privacy advocates rightly applaud Apple because Face ID happens securely on the phone: face data isn't uploaded to the cloud, where it could be hacked and used for other purposes. And Animoji and "eye contact" don't involve face recognition.
Criticism is reserved for Apple's policy of granting face-data access to third-party developers, according to a Reuters piece published this week.
That data roughly includes where parts of the face are (the eyes, mouth, etc.), as well as rough changes in the state of those parts (eyebrows raised, eyes closed and others). Developers can program apps to use this data in real time, and also store the data on remote servers.
The controversy raises a new question in the world of biometric security: Do facial expression and movement constitute user data or personal information that should be protected in the same way that, say, location data or financial records should be?
I'll give you my answer below. But first, here's why it really matters.
The coming age of face recognition
The rise of machine learning and A.I. means that over time, face recognition, which is already very accurate, will become close to perfect. As a result, it will be used everywhere, possibly replacing passwords, fingerprints and even driver's licenses and passports as the way we determine or verify who's who.
That's why it's important that we start rejecting muddy thinking about face-detection technologies, and instead learn to think clearly about them.
Here's how to think clearly about face tech.
Face recognition is one way to identify exactly who somebody is.
As I detailed in this space, face recognition is potentially dangerous because people can be recognized at far distances and also online through posted photographs. That's a potentially privacy-violating combination: Take a picture of someone in public from 50 yards away, then run that photo through online face-recognition services to find out who they are and get their home address, phone number and a list of their relatives. It takes a couple of minutes, and anybody can do it free. This already exists.
Major Silicon Valley companies such as Facebook and Google routinely scan the faces in hundreds of billions of photos and allow any user to identify or "tag" family and friends without permission of the person tagged.
In general, people should be far more concerned about face-recognition technologies than any other kind.
It's important to understand that other technologies, processes or applications are almost always used in tandem with face recognition. And this is also true of Apple's iPhone X.
For example, Face ID won't unlock an iPhone unless the user's eyes are open. That's not because the system can't recognize a person whose eyes are closed. It can. The reason is that the A.I. capable of figuring out whether eyes are open or closed is separate from the system that matches the face of the authorized user with the face of the current user. Apple deliberately chose to disable Face ID unlocking when the eyes are closed to prevent unauthorized phone unlocking by somebody holding the phone in front of a sleeping or unconscious authorized user.
Apple also uses this eye detector to prevent sleep mode on the phone during active use, and that feature has nothing to do with recognizing the user (it will work for anyone using the phone).
In other words, the ability to authorize a user and the ability to know whether a person's eyes are open are completely separate and unrelated abilities that use the same hardware.
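Because the two abilities are independent signals, the unlock decision is just a policy that combines them. Here is a toy sketch of that idea; the function names and logic are illustrative, not Apple's code:

```python
# Illustrative only: "who is this" and "are the eyes open" come from
# separate systems, and the unlock policy requires both to pass.

def should_unlock(face_matches: bool, eyes_open: bool) -> bool:
    """A Face ID-style gate: a face match alone is not enough."""
    return face_matches and eyes_open

# A sleeping owner's face matches, but the phone stays locked.
print(should_unlock(face_matches=True, eyes_open=False))  # False
print(should_unlock(face_matches=True, eyes_open=True))   # True
```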
Which brings us back to the point of controversy: Is Apple allowing app developers to violate user privacy by sharing face data?
Critics lament Apple's policy of enabling third-party developers to receive face data harvested by the TrueDepth image sensors. They can gain that access in apps by using Apple's ARKit, and the specific new face-related tools therein.
The tools allow the building of apps that can know the position of the face, the direction of the lighting on the face and also facial expression.
The purpose of this policy is to allow developers to create apps that can place goofy glasses on a user's face (or fashionable glasses to try on at an online eyewear store's website), or any number of other apps that can react to head motion and facial expression. Characters in multiplayer games will appear to frown, smile and talk in an instant reflection of the players' actual facial activity. Smiling while texting may result in the option to post a smiley face emoji.
Apple's policies are restrictive. App developers can't use the face features without user permission, nor can they use them for advertising, marketing or making sales to third-party companies. They can't use face data to create user profiles that could identify otherwise anonymous users.
The facial expression data is pretty crude. It can't tell apps what the person looks like. For example, it can't tell the relative size and position of resting facial features such as eyes, eyebrows, noses and mouths. It can, however, tell changes in position. For example, if both eyebrows rise, it can send a crude, binary indication that, yes, both eyebrows went up.
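To see just how little such a binary signal carries, here's a hypothetical sketch of the kind of thresholding an app might do. The 0-to-1 coefficient scale and the threshold value are assumptions for illustration, not Apple's documented values:

```python
# Hypothetical illustration of how crude the shared signal is: the app
# never sees resting facial geometry, only a yes/no flag derived from
# an expression coefficient. Scale and threshold are assumed values.

def eyebrows_raised(brow_coefficient: float, threshold: float = 0.5) -> bool:
    """Collapse a 0-to-1 expression coefficient into a binary signal."""
    if not 0.0 <= brow_coefficient <= 1.0:
        raise ValueError("expression coefficients are normalized to [0, 1]")
    return brow_coefficient >= threshold

print(eyebrows_raised(0.8))  # True: eyebrows went up
print(eyebrows_raised(0.1))  # False: no meaningful change
```

Everything below the threshold vanishes; nothing about the face itself survives the conversion.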
The question to be answered here is: Does a change in the elevation of eyebrows constitute personal user data? For example, if an app developer leaks the fact that on Nov. 4, 2017, Mike Elgan raised his left eyebrow, has my privacy been violated? What if they added that the eyebrow raising was associated with a news headline I just read or a tweet by a politician?
That sounds like the beginning of a privacy violation. There's just one problem: They can't really know it's me. They just know that someone who claimed to have my name registered for their app, and that, later, a human face raised an eyebrow. I might have handed my phone to a nearby 5-year-old, for all they know. Also, they don't know what the eyebrow was reacting to. Was it something on screen? Or maybe somebody in the room said something to elicit that reaction.
The eyebrow data is not only useless, it's also unassociated with both an individual person and the source of the reaction. Oh, and it's boring. Nobody would care. It's junk data for anyone interested in profiling or exploiting the public.
Technopanic about leaked eyebrow-raising obscures the real threat of privacy violation by irresponsible or malicious face recognition.
That's why I come not to bury Apple, but to praise it.
Turn that frown upside down
Face recognition will prove massively useful and convenient for corporate security. The most obvious use is replacing keycard door access with face recognition. Instead of swiping a card, just saunter right in with even better security (keycards can be stolen and spoofed).
This security can be extended to vehicles, machinery and mobile devices as well as to individual apps or specific corporate datasets.
Best of all, the face recognition can be accompanied by peripheral A.I. applications that make it really work. For example, is a second, unauthorized person trying to come in when the door opens? Is the user under duress? Under the influence of drugs, or falling asleep?
I believe great, secure face recognition could be one answer to the BYOD security problem, which still hasn't been solved. Someday soon enterprises could forget about authorizing devices, and instead authorize users on an extremely granular basis (down to individual documents and applications).
Face recognition will benefit everyone, if done right. Or it will contribute to a world without privacy, if done wrong.
Apple is doing it right.
Apple's approach is to radically separate the parts of face scanning. Face ID deals not in "pictures," but in math. The face scan generates numbers, which are crunched by A.I. to determine whether the person now facing the camera is the same person who registered with Face ID. That's all it does.
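The "math, not pictures" idea can be illustrated with a toy numeric comparison: enrollment stores a vector of numbers, and unlocking is a distance check against that vector. Real systems use learned embeddings and secure hardware; the vectors and threshold below are made up for the sketch:

```python
# Conceptual sketch, not Apple's implementation: a face scan is reduced
# to a numeric vector, and matching is a distance test. No image is
# stored or compared, only numbers. Values here are toy data.
import math

def euclidean(a: list[float], b: list[float]) -> float:
    """Distance between two face vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(enrolled: list[float], candidate: list[float],
             threshold: float = 0.5) -> bool:
    """Accept only if the candidate vector is close to the enrolled one."""
    return euclidean(enrolled, candidate) < threshold

enrolled = [0.11, 0.52, 0.33]                   # stored at enrollment
print(is_match(enrolled, [0.12, 0.50, 0.31]))   # True: small variation
print(is_match(enrolled, [0.90, 0.10, 0.70]))   # False: different face
```

Crucially, the stored numbers are useful only for this one comparison; you can't reconstruct a recognizable picture of the owner from the match test alone.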
The scanning, the generation of numbers, the A.I. judgment of whether there's a match and all the rest happen on the phone itself, and the data is encrypted and locked on the phone.
It's not necessary to trust that Apple would prevent a government or hacker from using Face ID to identify a suspect or dissident or target. Apple is simply unable to do that.
Meanwhile, the features that detect changes in facial expression and whether the eyes are open are super useful, and users can enjoy apps that implement these features without fear of privacy violation.
Instead of slamming Apple for its new face tech, privacy advocates should be raising awareness about the risks we face with irresponsible face recognition.