In June 2023 the UK government announced that it would host an inaugural global summit on the subject of artificial intelligence (AI) safety. Fittingly, it will be held at Bletchley Park, where British codebreakers cracked the Enigma cipher during the Second World War. The event in November will draw leading minds from academia, industry and beyond to explore the safe and responsible development of AI around the world. Leaders of the G7 have expressed their shared view that the future of AI should be based on trust and shared democratic values.
The British Prime Minister has urged world leaders to treat AI with the same urgency as climate change. Meanwhile, the EU continues to refine the EU Artificial Intelligence Act, which aspires to become the first comprehensive legal framework for AI and is now in negotiations with member states. Even if adopted quickly, this ambitious legislation may not become applicable until 2025. The UK approach to harnessing AI has yet to coalesce into a coherent framework beyond a data protection-themed aspiration.
Meanwhile, the evolution of AI capabilities, and their application and integration into society, continues at pace. In the context of facial recognition technology (FRT), high-quality companies such as Corsight AI are delivering ethically produced AI whose accuracy and equitability far exceed anything on the market even a couple of years ago. These producers are well aware of the importance of integrity in the credentials of the technology they produce, particularly with regard to compliance with global and national standards of accuracy, equality, cyber and data security. It is not surprising, therefore, that FRT is becoming increasingly prevalent as confidence in the technology outgrows the sometimes distorted narratives of those voices opposing its use.
It is important that organizations that operate AI know how to do so properly and within the scope of relevant laws. An essential ingredient in that regard is the provision of relevant training. The 'human in the loop' is the key assessor and decision maker where FRT is operated, and should be properly trained for that role. High-quality AI requires high competence in human application.
It is not unreasonable to expect organizations that operate FRT to have suitable measures in place to ensure that the people using it are trained in such matters as applicable laws, regulatory guidance, organizational policies, and operational parameters. Equally, operators ought to be trained in how to use the technology safely: how to recognize when it is working well and when it is not; when risks arise and how to deal with them; how to identify a compromise or security breach and what action to take; how to recognize when proportionate application risks becoming disproportionate; and how to recognize a malfunction of the equipment. In today's rapidly advancing digital and regulatory landscape, keeping operator training up to date should be a fundamental undertaking of any organization.
The relationship between producer and user is important in that regard, as experiential learning is a two-way street. In recognition of this, Corsight AI has launched FaceComply, a unique collaborative offering developed to help our customers navigate the legal and regulatory maze when using our technology and to ensure that their people are suitably equipped with relevant knowledge. This service arises from our commitment to ensuring that the technology we provide is used as a force for good in a world that our AI helps shape. The use of FRT to deliver a safer society does not have to be an Enigma… nor should it be.
Tony Porter is Corsight AI’s Chief Privacy Officer and the former UK Surveillance Camera Commissioner.