Since the dawn of humanity, people have compensated for their inherent physical and mental limitations by using tools to enhance their abilities. The applications of these tools depend on the intentions of the user and, as a result, tools have been employed in all kinds of disciplines and pursuits, both benevolent and malignant. This duality is typified in crime: just as tools can prevent and detect wrongdoing, they can also be used to enable it.
In today’s world, one of the most versatile and advanced human tools, AI, has enabled the replication and augmentation of human capabilities at unimaginable scales. While this has resulted in countless improvements to the processes that govern society, as the capabilities of AI-based technologies grow, so too does their potential for criminal exploitation.
How can AI be used for crime?
Historically, phishing attacks, cybercrimes in which counterfeit emails pose as legitimate institutions to lure targets into providing sensitive information, have been indiscriminate and generic. To overcome this, attackers developed spear-phishing: attacks tailored to exploit a target’s specific vulnerabilities. These are up to 4 times more effective than their generic counterparts but are labour-intensive and can only be performed at small scale. With technology such as DeepFish AI, however, systems can automatically learn from past phishing attacks, evade spam filters, and process huge data sets to find a target’s vulnerabilities, enabling cybercriminals to send out automated, personalised attacks at a grand scale.
AI can also be exploited to achieve greater efficiency, presence, and specificity in fake news. The potential for political manipulation and misinformation is alarming as several versions of the same content could be disseminated through different sources using the technology to
boost its visibility and credibility. Highly advanced forms of content manipulation, AI-generated realistic video, audio, and text known as deepfakes, have already proved effective and dangerous. In March 2019, thieves used voice-mimicking software to impersonate a CEO on a call to a subsidiary director, resulting in the transfer of $243,000 to a fraudulent account.
This is only a snapshot of a few of the ways AI can be exploited for illegal activity. For a more in-depth review, I direct you to a comprehensive study of AI and crime by Keith Hayward and Matthijs Maas.
How can AI be used against crime?
In what epitomizes the double-edged blade of AI and crime, the technology can be as much an obstruction to phishing as it has been an enabler of it. Take, for instance, AI like ‘Panacea’, which uses natural language processing to respond to fraudulent emails, engaging attackers in conversation to learn their true identity and waste their time. The evolutionary battle between efforts to enhance and hamper phishing can even be seen as a microcosm of a much larger issue: advancements in anti-crime technology incentivizing and pressuring the development of more effective and potentially harmful counter-mechanisms.
AI is also highly applicable to preventing and detecting financial crime. At the moment, only 2% of transactions flagged by current systems turn out to be malicious. AI can significantly reduce the number of false alerts and uncover cases that were missed by conventional rules. To put its effectiveness into perspective, even though banks now report 20 times more suspicious activity than in 2012, AI has enabled them to halve false alerts.
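The intuition behind reducing false alerts can be sketched with a toy example: a single-threshold rule flags every large transaction, while a scorer that combines several weak signals only flags transactions where the signals agree. Everything here (thresholds, weights, field names, data) is invented for illustration and does not reflect any real banking system.

```python
# Toy illustration, not a real fraud-detection system.
RULE_THRESHOLD = 1000  # naive rule: flag any transaction over this amount


def rule_flag(txn):
    """Single-threshold rule: flags purely on amount."""
    return txn["amount"] > RULE_THRESHOLD


def score_flag(txn):
    """Combine several weak signals; weights are illustrative only."""
    score = 0.0
    score += 0.4 if txn["amount"] > RULE_THRESHOLD else 0.0
    score += 0.3 if txn["foreign"] else 0.0
    score += 0.3 if txn["new_payee"] else 0.0
    return score >= 0.7  # flag only when multiple signals agree


# Hypothetical transactions, with ground-truth "fraud" labels for scoring.
transactions = [
    {"amount": 1500, "foreign": False, "new_payee": False, "fraud": False},
    {"amount": 2500, "foreign": True,  "new_payee": True,  "fraud": True},
    {"amount": 1200, "foreign": False, "new_payee": False, "fraud": False},
    {"amount": 300,  "foreign": True,  "new_payee": True,  "fraud": False},
]


def false_positives(flagger):
    """Count legitimate transactions that the flagger wrongly flags."""
    return sum(1 for t in transactions if flagger(t) and not t["fraud"])


print("rule false positives: ", false_positives(rule_flag))   # 2
print("score false positives:", false_positives(score_flag))  # 0
```

On this made-up data the single rule raises two false alerts while the combined score raises none and still catches the fraudulent transaction; real systems replace the hand-picked weights with models learned from historical data.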
Considering policing, AI can be leveraged to process large amounts of data and pick up subtle connections between cases, alerting officials to patterns that would otherwise go unnoticed. Machine learning in particular has proved useful for identifying evidential links by, for instance, recognizing a gun’s caliber and model from audio recordings of shots alone, or matching crime-scene gunshot residue with the characteristics of certain ammunition types.
In terms of surveillance, AI facial recognition can be layered on top of existing CCTV architectures, technologies like ‘speech2face’ can reconstruct facial images from audio alone, and the Pentagon’s Jetson laser can identify a person by their heartbeat at a distance of up to 200 metres. Although these kinds of technologies can discourage and suppress crime, they have evident downsides in the range of ethical concerns they create, including privacy invasion and the risks of misuse and hacking.
Downsides of using AI against crime
One of the most troubling issues with the use of AI to prevent and detect crime is that it can draw biased conclusions on factors like ethnicity, gender, and age under the notion of objectivity. As a result, it could inadvertently worsen discrimination and inequality in criminal justice.
Considering privacy concerns, companies can face heavy backlash from customers who worry about data misuse and exploitation by invasive surveillance. Recently, a European bank was forced to backtrack on a plan to monitor social media accounts for mortgage applications after an outcry over “Big Brother tactics”. Some have even argued that facial recognition is categorically different from other forms of surveillance and should be prohibited, while the ACLU has warned that such systems could be discriminatory and subject to abuse.
A case can also be made that, as AI-based means of detecting illegal activity grow more potent, criminals could resort to more extreme and violent measures to outmaneuver them, and customers may shift to less regulated markets. Moreover, if employees become overly reliant on crime-fighting tools, they may grow complacent and less motivated to perform the necessary checks themselves.
The Big Picture
One of the core issues with AI and crime is that many of the capabilities of AI that are exploited for crime are critical and highly beneficial in other disciplines. Consequently, it would seem to be more feasible to focus on the advancement of crime countermeasure technology rather than curbing the use of AI that has applications for both crime and valuable pursuits. However, as explored above, this too has several drawbacks since it can generate biases, be perceived as invasive, and encourage more extreme behaviour by both criminals and consumers. Thus, for businesses and consumers alike it will be critical to consider whether the advantages of AI-crime fighting solutions outweigh their costs.
Some high-profile debates over the use of algorithms to predict re-offence rates for pre-trial bail decisions have even contemplated a shift in conventional justice and policing practices: from detecting violations to predicting and preventing them entirely. In what is becoming eerily reminiscent of a certain sci-fi film (Minority Report, 2002), the advancement of crime-fighting technology is creating increasingly complicated ethical challenges. Can and should there be different treatment based on a crime that has not yet happened? How certain can we be that it will happen, and what should be the threshold of certainty past which it would be “right” to issue a penalty? As AI continues to reach new heights, we may well be on the cusp of witnessing a paradigm shift in the way crime is perceived, executed, and regulated.
Written by Miguel Larrucea Long