7 minute read 31 Jan 2020

How AI is establishing itself as the newest public safety officer

By George Atalla

EY Global Government & Public Sector Leader

Working with governments to address complex issues and build a better working world.


AI can help combat crime and make communities safer. It has the potential to save resources and lives – but only if agencies use it well.

Public safety encompasses a wide array of government responsibilities, from policing to responding to natural disasters.

AI technologies hold the promise of improving safety across this spectrum. Their potential to reduce, prevent and respond to crime, for example, creates a unique opportunity to establish safer communities: so far, AI has been used for facial and image recognition, managing crime scenes and detecting criminal behaviors.

As threats to public safety mutate faster and faster, AI holds the promise of levelling the playing field for governments.
Carl Ghattas
Managing Director, Government and Public Sector Cybersecurity Lead, EY

AI can also drastically streamline simple administrative tasks, while providing cutting-edge analysis far beyond human capabilities. As a result, it can save budgets, time and lives. Here, we focus on five areas where we’re seeing governments start to realize this immense potential.

Areas of public safety ripe for AI

1. Preventative policing

Governments around the world are using AI to predict and prevent crime. In the US, law enforcement agencies in city governments – including New Orleans and Los Angeles – have worked with the private sector to create predictive policing programs. These use data to map people’s ties to gang members, outline criminal histories and analyze social media posts. In one instance, a program could determine whether a crime was gang related by assessing four criteria: the neighborhood, the location of the crime, the number of suspects and the primary weapon used. Some AI developers assert their tools can also predict the likelihood of individuals committing violence or becoming victims of crime.
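The four-criteria idea can be illustrated with a toy scoring function. Everything below – the feature encodings, the weights and the bias term – is invented purely for illustration; the actual programs are trained on historical case data rather than hand-set rules.

```python
# Toy sketch: score a crime record on the four criteria named above
# (neighborhood, location, suspect count, primary weapon).
# All encodings and weights are hypothetical, chosen for illustration only.
import math

NEIGHBORHOOD_RISK = {"A": 0.8, "B": 0.2}   # assumed prior gang activity by area
WEAPON_WEIGHT = {"handgun": 0.7, "knife": 0.3, "none": 0.0}

def gang_related_score(neighborhood, location_risk, n_suspects, weapon):
    """Return a 0-1 score from a toy logistic model over the four criteria."""
    x = (1.5 * NEIGHBORHOOD_RISK.get(neighborhood, 0.5)
         + 1.0 * location_risk              # risk score of the specific location
         + 0.4 * min(n_suspects, 5)         # cap suspect count's influence
         + 1.2 * WEAPON_WEIGHT.get(weapon, 0.5)
         - 2.0)                             # bias term
    return 1 / (1 + math.exp(-x))           # squash to a 0-1 probability-like score

print(round(gang_related_score("A", 0.6, 3, "handgun"), 2))
```

A real system would learn these weights from labeled cases; the sketch only shows how four simple inputs can be combined into a single risk score.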

Another area that could benefit from predictive analysis is violent crime. The UK Government is developing a system called the National Data Analytics Solution (NDAS), which will use statistics and AI to assess the risk of someone committing, or becoming a victim of, violent crime, so agencies can intervene early.

2. Supporting criminal investigations

While AI is well positioned to predict and prevent compromises to public safety, it also has a role to play when a crime, accident or disaster does occur. South Africa has been able to leverage ShotSpotter technology to combat two very different types of crime: wildlife poaching in Kruger National Park and gun violence in two townships in Cape Town. In 2018, evidence gathered from ShotSpotter sensors led to a successful conviction in a gang-related shooting in Cape Town, after a ShotSpotter alert prompted police to review CCTV footage.

In addition to that 2018 conviction, the technology has also contributed to a five-fold increase in the recovery of illegal guns within the two townships where it has been deployed, due in part to its ability to provide accurate data on gun activity.

3. Combatting terrorist threats

Many technology companies are positioning artificial intelligence as a powerful ally in fighting terrorism. Governments are keen to explore the approach: in 2018, the UK Home Office revealed a new AI tool that can detect 94% of Daesh (ISIS) propaganda with over 99.9% accuracy. It uses machine learning to determine whether video material is terrorist propaganda. Integrated into the video upload process, it prevents extremist content from going online and deprives terrorist organizations of a valuable recruitment channel. Unfortunately, while the UK Government has invested heavily in the tool, tech firms have so far declined to adopt it, partly because no regulation requires them to, which makes adoption a low priority.
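A back-of-envelope calculation shows why both figures – the 94% detection rate and the 99.9% accuracy – matter at platform scale. The upload volume and propaganda prevalence below are illustrative assumptions, not Home Office data:

```python
# What "94% detection, 99.9% accuracy" could mean at scale.
# All volumes and rates here are illustrative assumptions.

uploads = 1_000_000          # assumed daily video uploads on a platform
propaganda_rate = 0.0001     # assume 1 in 10,000 uploads is propaganda
detection_rate = 0.94        # share of actual propaganda the tool flags
fp_rate = 0.001              # 99.9% accuracy on benign content -> 0.1% false flags

propaganda = uploads * propaganda_rate
caught = propaganda * detection_rate             # true positives
false_flags = (uploads - propaganda) * fp_rate   # benign videos flagged for review

print(f"caught: {caught:.0f} of {propaganda:.0f} propaganda videos")
print(f"benign videos flagged for human review: {false_flags:.0f}")
```

Under these assumed numbers, the tool catches 94 of 100 propaganda videos but also flags roughly 1,000 benign ones per day – a reminder that even very high accuracy produces substantial review workloads when the target content is rare.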

AI is also a valuable tool in identifying suspicious banking activities associated with terrorist financing. It can supplement existing systems for monitoring transactions with increasingly sophisticated machine analysis that spots anomalies in data and constantly updates known red flags. AI has the ability to mine data much more quickly, and along a greater number of alert criteria, than traditional auditing methods.
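As a minimal sketch of the anomaly-spotting idea – assuming a simple feed of transaction amounts, where real anti-money-laundering systems use far richer features and trained models – a robust, median-based rule can flag outlying transactions:

```python
# Minimal sketch: flag transactions whose modified z-score (median-based,
# so it is robust to the very outliers we are hunting) exceeds a threshold.
# Real AML monitoring uses many more features than the amount alone.
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Return indices of amounts that deviate sharply from the median."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)  # median absolute deviation
    if mad == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

txns = [120, 95, 130, 110, 105, 98, 50_000, 115]  # hypothetical amounts
print(flag_anomalies(txns))  # → [6]: the 50,000 transfer stands out
```

The median-based statistic is a deliberate design choice: a mean-and-standard-deviation rule would be dragged toward the outlier it is trying to detect, whereas the median barely moves.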

4. Responding to natural disasters

Thanks to factors such as climate change and increased urbanization, the number of natural disasters – and the number of people they affect – continues to grow.

Governments around the globe have used AI as a relatively cheap and impressively effective way of predicting:

  • Where natural disasters will occur
  • Which areas will be hit hardest
  • The mitigation systems that are most likely to fail
  • Which communities and demographics will be in the most danger
  • What actions are most likely to mitigate the impact of the disaster and its after-effects

On average, flooding in India causes economic losses of an estimated US$7.4 billion a year. After three months of flooding in 2018 left over 1,400 dead, the country’s Central Water Commission partnered with Google to create a flood warning system. The approach uses AI technologies, geospatial mapping and analysis of water data to warn when a flood is coming and where, so agencies can take action. CWC and Google sent out the first alert in September 2018, warning residents of Patna about heavy rain and likely flooding.

5. Crowd and traffic control

Whether they’re at sporting events, religious events or protests, crowds present logistical and safety challenges for public safety officers. Several governments have turned to AI technologies to reduce uncertainty and risk while moving crowds and responding to threats within them.

In Japan, the National Police Agency has begun experimenting with AI in the areas of terrorism and criminal investigations. This will include using AI to automatically spot abandoned objects and people behaving in an unusual way at large sporting contests, events and international conferences. The system will alert public safety officers, who can then assess the situation and determine a course of action. If successful, the Agency will apply the methods to the 2020 Olympics in Tokyo.

In 2019, Indian police turned to AI to manage crowds and ensure safety during the two months of Kumbh Mela, the world’s largest religious gathering. Over 1,000 CCTV cameras collected data from the vast festival site, while AI technology allowed the police to monitor crowd density, identify and track suspicious activity, and better manage the flow of traffic. AI was even used to optimize waste collection, with bin sensor and traffic data used to route collection vehicles. With over 150 million attendees, 2019’s Kumbh Mela avoided the fatal stampedes that have marred the festival in the past.

While AI can bring big benefits for society, it can also erode presently accepted standards of privacy and civil liberties.

Progress brings risk

While AI holds significant promise for improving public safety while reducing costs, citizens and governments also need to consider its potential risks.

At this point in AI’s development, governments and public safety departments can’t simply “turn on” the technology and take its findings as fact. While its analysis can avoid human error and biases, AI compounds any errors that are ingrained in system programming. Machine learning can absorb the biases of its designers and codify them through learned associations built on inaccurate foundational premises. Governments need to know that algorithms are fair and trustworthy before they apply AI.

In a recently released report, the American Civil Liberties Union put forward some worrying potential scenarios. For example, a sheriff might receive a daily list of citizens who appear to be under the influence in public, with the indicators being changes to gait, speech or other patterns caught on surveillance cameras.

So while AI can bring big benefits for society, it can also erode presently accepted standards of privacy and civil liberties.

Factors governments need to consider

To guard against erosions of civil liberties and citizen privacy, governments will need to apply a set of fundamental principles for the use of AI in public safety. These should include:

  • Building safeguards to protect privacy and prevent biases
  • Ensuring AI efforts are consistent with applicable legal principles
  • Creating trust by communicating with communities in an active and transparent way
  • Including human insights and judgment in the final analysis of AI activities

If governments adhere to these principles, they can more effectively manage the transformative impact of AI, positively affecting the lives of their citizens while protecting their privacy.


AI can play an important role in areas of public safety as far-ranging as crime prevention, criminal investigations, counter-terrorism, natural disaster response and crowd control. But as governments explore the promise AI offers, they must also consider its risks to civil liberties and privacy, as well as its mixed record on accuracy and bias.
