Truth not absolute with AI deception technology
Artificial intelligence (AI) has quickly become a transformative technology, touching many aspects of our lives by augmenting processes and tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making and language translation. The technology allows machines to learn from experience, adapt to new data inputs and perform tasks with almost human-like responsiveness.
As companies increasingly rely on AI to solve their most complex and pressing business challenges, law enforcement has turned to the technology as a tool to help carry out the multifaceted mission of modern-day policing. However, for all the potential AI holds for law enforcement, we are still in the early stages of achieving fully viable and legally permissible options that meet law enforcement needs — particularly for capabilities such as video analytics and facial recognition.
Both capabilities have raised challenges related to accuracy and bias, generating skepticism among the public and, in some cases, legal action or outright bans by elected officials in pockets around the country. The latest development garnering attention at the intersection of AI and law enforcement is “deception analysis,” which uses AI to assess an individual’s truthfulness in criminal investigations and in judicial and administrative proceedings.