On Wednesday, October 18, Corsight AI’s Chief Privacy Officer Tony Porter OBE QPM LLB spoke to attendees of the Security Matters Digital Conference on the topic of Regulating Facial Recognition Technology. Facial recognition is both a rapidly developing technology and a key strategy for preventing terrorist attacks across the globe, and it is a topic where governments, operators, and developers alike must tread carefully.
Mr. Porter was asked to share his insights as one of the UK’s foremost experts in the field: he established the National Surveillance Camera Strategy in his role as UK Surveillance Camera Commissioner at the Home Office from 2014 to 2021, having previously led the UK’s counter-terrorism investigations for the London 2012 Olympic Games. His focus — and one of his key contributions to the Corsight team — is guiding government agencies, AI developers, and operators to stay within surveillance standards for the lawful and ethical use of facial recognition systems.
Mr. Porter began his discussion with a brief hypothetical scenario in which a terrorist makes his way across Heathrow and is quickly identified by AI-driven facial recognition and subdued. While all would naturally applaud such a happy ending, the scenario raises numerous questions that are essential to a regulator’s analysis of a facial recognition deployment:
- How accurate is it? Might it trigger false positives, potentially frustrating or embarrassing innocent travelers?
- Is the software trained with a bias against specific races?
- Is there a human in the loop to verify the data before a physical response is launched?
- Can the software be explained to the public clearly and transparently? Has it been?
- Can we justify some level of invasion of privacy toward millions of people a year in order to prevent a handful of potential incidents?
For both technology developers and operators looking to purchase such a system, these are just some of the initial questions that must be answered before regulators will approve a deployment.
According to Porter, the question comes down to this: Is facial recognition in the public sector being used ethically to further safety and security, with full transparency? If so, regulators need to find a way to define the methodologies, apply guidelines, and allow these systems to be used.
The road to this goal, as he made clear, is not always straightforward; the case law is evolving, with policies varying dramatically based on region:
- The EU is looking to implement strict, conservative regulations, effectively preventing the use of these systems for now except under limited, extreme circumstances
- In countries around the world with lax human rights records, there has been a spike in usage as they disregard many of the issues being grappled with elsewhere
- In the US, many states are dropping their restrictions as it becomes clear that the technology has developed rapidly, and previous drawbacks and limitations no longer apply
While acknowledging the valid objections and questions of naysayers, Mr. Porter is confident that challenges of ethics, automation, social tracking, and civil liberties can all be addressed.
Mr. Porter’s pragmatic yet cautiously optimistic message was that these challenges can in many cases be solved technologically, and that this reality is increasingly recognized as regulations evolve. He compared the dynamic to the invention of the printing press, which triggered questions of copyright and libel, among others, none of which curtailed the use of the technology. Protecting airports, stadiums, critical national infrastructure, safe cities, and other locations should not be limited by outdated assumptions.
The solution, explained Porter, is to ensure that each implementation addresses, head-on, the initial questions brought up in the Heathrow example; instead of blanket prohibitions, governments must create guardrails and parameter definitions to permit ethical usage as a ‘value to society,’ allowing for the four strategies to ensure the safety of citizens: protect, prepare, prevent, and pursue.
Porter summarized by explaining that regulations ensuring compliance with civil rights principles can (and must) be written and applied to the reality of the technology they address: deployments can take a whole-system approach that incorporates human involvement, faces can be blurred, data quickly deleted, bias reduced to almost zero, and more. All these factors can be explained clearly and transparently, limiting the risk of abuse — and the public’s fear of it.
Tony Porter is Corsight AI’s Chief Privacy Officer and the former UK Surveillance Camera Commissioner.