The details of the crime were uniquely specific: Wielding a hypodermic syringe as a weapon, a man in New York City attempted to steal a power drill from a Home Depot in the Bronx. After police arrested him, they quickly ascertained that he’d done the same thing before, a few weeks earlier at another Home Depot, seven miles away in Manhattan.
It wasn’t a detective who linked the two crimes. It was a new tool called Patternizr: machine-learning software that sifts through police data to find patterns and connect similar crimes. Developed by the New York Police Department, Patternizr is the first tool of its kind in the nation (that we know about). It has been in use by the NYPD since December 2016, but its existence was first disclosed by the department this month.
“The goal of all of this is to identify patterns of crime,” says Alex Chohlas-Wood, the former director of analytics for the NYPD and one of the researchers who worked on Patternizr. He is currently the deputy director of Stanford University’s Computational Policy Lab. “When we identify patterns more quickly, it helps us make arrests more quickly.”
Many privacy advocates, however, worry about the implications of deploying artificial intelligence to fight crimes, particularly the potential for it to reinforce existing racial and ethnic biases.
New York City has the largest police force in the country, with 77 precincts spread across five boroughs. The number of crime incidents is vast: In 2016, NYPD reported more than 13,000 burglaries, 15,000 robberies and 44,000 grand larcenies. Manually combing through arrest reports is laborious and time-consuming — and often fruitless.
“It’s difficult to identify patterns that happen across precinct boundaries or across boroughs,” says Evan Levine, NYPD’s assistant commissioner of data analytics.
Patternizr automates much of that process. The algorithm scours all complaint reports in the NYPD’s database, compares attributes such as method of entry, weapons used and the distance between incidents, and ranks candidate matches with a similarity score. A human data analyst then determines which complaints should be grouped together and presents those groupings to detectives to help winnow their investigations.
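The general idea can be sketched in a few lines of code. The sketch below is purely illustrative and is not the NYPD’s actual model: the feature names, weights, and distance-decay formula are all assumptions chosen for demonstration. It scores a pair of complaints by combining categorical matches (method of entry, weapon) with a term that decays as the incidents occur farther apart.

```python
# Illustrative sketch only: the fields, weights, and decay formula are
# assumptions for demonstration, not the NYPD's actual Patternizr model.
from math import exp, hypot
from typing import Dict

# Hypothetical structured fields extracted from a complaint report.
Complaint = Dict[str, object]

def similarity(a: Complaint, b: Complaint,
               distance_scale_km: float = 5.0) -> float:
    """Score how alike two complaints are, on a 0-to-1 scale."""
    score = 0.0
    # Categorical features: reward exact matches.
    score += 0.4 if a["method_of_entry"] == b["method_of_entry"] else 0.0
    score += 0.3 if a["weapon"] == b["weapon"] else 0.0
    # Geographic proximity: exponential decay with distance apart.
    dist_km = hypot(a["x_km"] - b["x_km"], a["y_km"] - b["y_km"])
    score += 0.3 * exp(-dist_km / distance_scale_km)
    return score

# Two robberies with matching method and weapon, roughly 11 km apart --
# loosely modeled on the Home Depot incidents described above.
c1 = {"method_of_entry": "front entrance", "weapon": "syringe",
      "x_km": 0.0, "y_km": 0.0}
c2 = {"method_of_entry": "front entrance", "weapon": "syringe",
      "x_km": 7.0, "y_km": 8.5}

print(round(similarity(c1, c2), 3))
```

In a real system the score would be learned from historical, confirmed crime patterns rather than hand-set weights; the analyst-review step described above then decides which high-scoring pairs actually belong to the same pattern.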
On average, more than 600 complaints per week are run through Patternizr. The program is not designed to track certain crimes, including rapes and homicides. In the short term, the department is using the technology to track petty larcenies.
The NYPD used 10 years of manually collected historical crime data to develop Patternizr and teach it to detect patterns. In 2017, the department hired 100 civilian analysts to use the software. While the technology was developed in-house, the software is not proprietary, and because the NYPD published the algorithm, “other police departments could take the information we’ve laid out and build their own tailored version of Patternizr,” says Levine.
Since the existence of the software was made public, some civil liberties advocates have voiced concerns that a machine-based tool may unintentionally reinforce biases in policing.
“The institution of policing in America is systemically biased against communities of color,” New York Civil Liberties Union legal director Christopher Dunn told Fast Company. “Any predictive policing platform runs the risks of perpetuating disparities because of the over-policing of communities of color that will inform their inputs. To ensure fairness, the NYPD should be transparent about the technologies it deploys and allow independent researchers to audit these systems before they are tested on New Yorkers.”
New York police point out that the software was designed to exclude race and gender from its algorithm. Based on internal testing, the NYPD told Fast Company, the software is no more likely to generate links to crimes committed by persons of a specific race than a random sampling of police reports.
Read the source article in Governing.