Written for Daily Hive by Chris Hobbs, Co-founder and President of Vancouver-based digital innovation company TTT Studios.
I am fortunate to work in a candy land environment where, like the Genie in Aladdin, we get to realize our clients’ technological wishes. A benefit of developing software using the latest tools and frameworks is having a head start in understanding the technologies that will be part of our near-future lexicon. Of the many Artificial Intelligence projects we are working on, the branch of AI that excites me the most is facial recognition. This is a nascent technology that can enrich our daily lives with more personalized experiences.
One could argue that the Internet was relatively mundane until you could log in to a site such as Yahoo News and receive media catered to your personal interests. I believe facial recognition has the opportunity to do the same in our daily offline lives. It brings us closer to a future in which a sales clerk already knows my shirt size and colour preferences before I reach the clothing rack, or in which I no longer need to print a boarding pass for my flight because my face is the ticket.
I am not naive. I understand that facial recognition comes with major challenges, such as privacy management and the potential for misuse. Law enforcement agencies in the US, including the FBI and ICE, have non-consensual access to driver’s license photos from DMV databases. Using facial recognition, they can comb these images to make arrests for misdemeanours and arrange deportations of undocumented immigrants based on suspect matches. In Canada, although Toronto police claim that their year-long pilot with facial recognition has been effective, there is no legal precedent holding them accountable for responsible use of the technology.
In this context, prevailing narratives in media frame facial recognition as discriminatory and invasive. San Francisco, Somerville, and Oakland have recently enacted city ordinances barring government agencies from using the technology in light of escalating public concern.
Given the way the technology has been used, this doesn’t come as a surprise to me. In attempts to find matches for suspects in their mugshot databases, police departments in the US have been feeding their facial recognition systems low-quality, modified, and, in many cases, outright incorrect input data. They have gone to the lengths of running pixelated surveillance camera footage, computer-generated facial features, composite sketches, and celebrity doppelgängers through their facial recognition software. Needless to say, facial recognition in policing has been abused and tested in scenarios that facial recognition models were never built for.
Public outcry over the inaccuracy and abuse of facial recognition has been intensified by concerns that it facilitates racial profiling. A study titled Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification evaluates gender and racial bias in facial analysis and confirms what many fear: commercial systems misclassify darker-skinned women at error rates of up to 34.7%, compared to a maximum error rate of 0.8% for lighter-skinned males.
But the crux of the problem isn’t with facial recognition itself; it’s with the data used to train it. What limits our ability to build effective facial recognition models is the gross underrepresentation of women and people of colour in open-source datasets. My hope is that concerns with facial recognition will drive, rather than suppress, R&D for more gender- and race-diverse datasets. Take IBM, for example, which recently released Diversity in Faces, an annotated dataset of one million images meant to encourage impartiality and accuracy in facial recognition. If researchers are intentional about diversity when evaluating the datasets and models they choose to implement, dramatically better versions of existing facial recognition systems are wholly feasible.
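In practice, being intentional about diversity starts with how a model is evaluated. Below is a minimal sketch, in Python, of the kind of disaggregated audit Gender Shades performs: reporting error rates per demographic subgroup rather than as a single aggregate figure. The field names and the predict() function are hypothetical placeholders, not a description of any particular vendor’s API.

```python
# Sketch: disaggregated evaluation of a face-analysis model.
# 'skin_type', 'gender', and predict() are illustrative assumptions;
# the point is per-subgroup error rates instead of one overall number.
from collections import defaultdict

def error_rates_by_group(samples, predict):
    """samples: iterable of dicts with 'image', 'label', 'skin_type', 'gender'."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for s in samples:
        group = (s["skin_type"], s["gender"])  # intersectional subgroup
        totals[group] += 1
        if predict(s["image"]) != s["label"]:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def flag_disparities(rates, tolerance=0.05):
    """Return subgroups whose error rate exceeds the best group's by more
    than the tolerance. A model that scores 95% overall can still fail
    badly on one subgroup, which is exactly what an aggregate hides."""
    best = min(rates.values())
    return {g: r for g, r in rates.items() if r - best > tolerance}
```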
Even with these improvements, however, facial recognition may never be an effective tool for policing, and it doesn’t have to be. Where facial recognition has the most potential to leave a positive impression on our lives isn’t in law enforcement, but in consumer use cases spanning many industries, including healthcare, automotive, events, and corporate spaces.
In 2017, researchers from the National Human Genome Research Institute successfully used facial recognition to deliver early diagnoses for genetic abnormalities including Down syndrome and DiGeorge syndrome. Because the phenotypic manifestations of DiGeorge syndrome vary across ethnic groups, healthcare providers often struggle to diagnose the condition, especially in non-Caucasian populations. Where sensitivity to facial variation across ethnic groups aggravates racial profiling in law enforcement, in medical diagnosis it becomes an asset. The technology is constantly improving: this year, American artificial intelligence company FDNA published a study outlining the performance of its facial analysis software, DeepGestalt, which delivers a correct diagnosis more than 20% more often than clinicians.
For TTT Studios, what inspired our exploration into facial recognition was the desire to create a system better than Apple’s Face ID, which couldn’t tell me apart from my identical twin brother, David. TTT Studios’ brainchild, AmandaAI, is our proprietary facial recognition platform. Not only does it pass the twin test, but it also achieves 99.8% accuracy on the public dataset Labeled Faces in the Wild. In addition, our technology has patented anti-spoofing capabilities that allow it to distinguish between real humans and digital impersonations.
In the events space, expect facial recognition to drastically improve efficiency. We’ve developed an event check-in tool that uses facial recognition to greet attendees and print their name badges as they enter a venue. If attendees choose to opt in by uploading a picture of themselves during event registration, checking in becomes a quick and paperless process (a simplified sketch of the matching step follows below). We piloted our software in April at the SingularityU Canada Summit and delivered a 1,200-person event with essentially no line-up.
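To make the flow concrete, here is a minimal sketch of what the matching step at the door can look like, assuming attendees’ registration photos have been converted to face embeddings. The registrant structure, the embedding source, and the 0.6 threshold are illustrative assumptions, not a description of AmandaAI’s internals.

```python
# Sketch of an opt-in face check-in flow. The embeddings stand in for
# the output of any face-embedding model (a network mapping a face crop
# to a vector); the names and threshold are illustrative assumptions.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_in(camera_embedding, registrants, threshold=0.6):
    """registrants: list of (name, embedding) for attendees who opted in
    by uploading a photo at registration. Returns a name to print on a
    badge, or None to fall back to the manual check-in line."""
    best_name, best_score = None, threshold
    for name, embedding in registrants:
        score = cosine_similarity(camera_embedding, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Anyone who doesn’t match above the threshold, or who never uploaded a photo, simply uses the regular check-in line, which keeps the system opt-in by design.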
During these pilot projects I witnessed a hard truth: apprehension about facial recognition is quickly trumped by the convenience it provides. When people stuck in the slow-moving line-up for traditional event registration saw no line for those who had registered with their photos, a large percentage of them were frantically on their mobile devices trying to associate their tickets with their faces.
As facial recognition matures, so will the regulations that govern it. As it stands, Canada’s law around data privacy, the Personal Information Protection and Electronic Documents Act (PIPEDA), doesn’t dictate how facial recognition data should be handled. Until legislation specific to the management of facial recognition data is passed, it’s the responsibility of individual firms distributing the software to hold themselves accountable to privacy standards that prioritize user consent and transparency.
At TTT Studios, we take inspiration from existing and upcoming legal frameworks such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) to create our own policies governing ethical facial recognition data management. For example, we build functionality into our software that lets clients delete data collected for standalone events after a specified timeframe. End users can edit or delete their data upon request, and they can rest assured that their data will never be used for any purpose other than the one for which they gave consent.
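As a rough illustration of what consent-scoped retention can look like in code, here is a minimal sketch. The record fields, the 30-day window, and the purge routine are hypothetical stand-ins, not AmandaAI’s actual implementation.

```python
# Sketch of consent-scoped retention. RETENTION_DAYS and the record
# fields are illustrative assumptions for a standalone-event deployment.
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION_DAYS = 30  # client-specified window for a standalone event

@dataclass
class FaceRecord:
    user_id: str
    event_id: str
    consent_purpose: str   # the single purpose the user consented to
    collected_at: datetime

def purge_expired(records, now=None):
    """Keep only records inside the retention window. A real system would
    also delete derived artifacts (embeddings, logs) and honour per-user
    edit and delete requests on demand, not just on a schedule."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r.collected_at >= cutoff]
```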
The potential of facial recognition is far-reaching, and unlocking it starts with embracing the challenges it poses. Increased dialogue around ethical use cases of facial recognition will help us empathize with our end users, iterate toward increasingly robust versions of our technology, and steer toward a future where facial recognition is used safely, and for the betterment of humanity.