Even the best facial recognition systems can be fooled by family members. Can that bug become a feature?
Facebook sent Benenson a notification recently, informing him that it had detected his face in a number of photos and asking if he would like to be tagged in them. When he clicked through the link, he was amused to find eleven black-and-white headshots of his mother from her college days. The social network’s DeepFace algorithms
detect and identify faces using a neural network with more than 120 million
parameters, an ever-improving machine learning system the company says is approaching human-level performance. Facebook says DeepFace can determine whether two photographs are of the same person 97.25 percent of the time, but plenty of images can still trip up even its most powerful machines, especially when it’s a family affair.
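Verification systems of this kind typically map each photo to a numeric embedding and then compare the two embeddings against a similarity threshold. The sketch below is purely illustrative — the vectors and the 0.9 threshold are invented for this example, not DeepFace's actual representation or values:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb_a, emb_b, threshold=0.9):
    """Declare a match when the embeddings are similar enough.

    The threshold trades false positives against false negatives;
    0.9 is an arbitrary illustrative value.
    """
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy embeddings: two photos of the same person should land close together.
photo_of_me = [0.9, 0.1, 0.4]
another_photo_of_me = [0.88, 0.12, 0.41]
photo_of_stranger = [0.1, 0.9, 0.2]

print(same_person(photo_of_me, another_photo_of_me))  # similar vectors match
print(same_person(photo_of_me, photo_of_stranger))    # dissimilar vectors do not
```

Because relatives share facial structure, a parent's embedding can land close enough to a child's to cross a threshold like this — which is the family confusion Tunnell describes below.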
"We all have genetic traits
that are translated into how we look. If a person closely resembles one of
their parents, a photo of the young mother or father could correlate and
confuse the algorithm," says David Tunnell, the chief technology officer
at NXT-ID,
which specializes in three-dimensional facial recognition. In Facebook's case,
confusing two family members was a bug, but one that points toward other
potential uses for facial recognition, as the process also works in reverse.
"We have done research that shows you can use facial recognition to
identify a person’s ethnicity, region of origin, and family affiliation,"
Tunnell says. With good data from my parents, siblings, or cousins, he says it
might be possible to identify me even if the system had no actual images of me
to work from.
"Here was a real life example of a false positive (when an algorithm incorrectly predicts an observation as being something it’s not — in this case it thought my mom was me) with personal implications," Benenson, who is a data scientist at Kickstarter, told The Verge. He says he generally isn’t too troubled by the idea that photographs of our faces are constantly being scanned and analyzed for personal identification.
"In general I think people tend to overstate the nefarious things that
Facebook is doing with this kind of technology, but it certainly opens the door
to larger and more difficult questions."
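The "false positive" Benenson describes has a standard definition in classification: the algorithm predicts a match where none exists. A minimal sketch, using hypothetical prediction data (the third entry stands in for the "mom tagged as me" case):

```python
def false_positives(predictions, truths):
    """Count cases where the algorithm said 'match' but the truth was 'no match'."""
    return sum(1 for p, t in zip(predictions, truths) if p and not t)

# Hypothetical tagging run over four photos.
predicted_match = [True, False, True, False]
actually_match  = [True, False, False, False]

print(false_positives(predicted_match, actually_match))  # → 1
```

At Facebook's scale, even a small false positive rate translates into a large absolute number of mistaken tags — which is what raises the stakes in the scenarios below.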
When it
comes to more serious applications, for example the way the Chicago Police are
now using facial recognition, Benenson sees the potential for trouble.
"What about the cases where this algorithm isn't used for fun photo
tagging?" he asked rhetorically. "What if another false positive
leads to someone being implicated in something they didn't do? Facebook is a
publicly traded company that uses petabytes of our personal data as their
business model — data that we offer to them, but at what cost?"