A San Francisco lawmaker is proposing what would be a nationwide first: a complete moratorium on local government use of facial-recognition technology. Introduced by San Francisco Supervisor Aaron Peskin, the Stop Secret Surveillance Ordinance would ban all city departments from using facial-recognition technology and require board approval before departments purchase new surveillance devices. The bill regulates only local use, not use by private companies: The face-unlock feature included on the latest iPhone model, for example, would still be legal.
In addition to banning facial-recognition technology, the ordinance stipulates that any department that wants to purchase new surveillance equipment—from CCTV cameras to social-media scanners and license-plate readers—must submit to the board of supervisors a “surveillance technology policy” laying out what information will be collected, how long it will be retained, with whom it can be shared, how members of the public can register complaints, and which uses are authorized and which are forbidden.
The proposal also bars city officials from using any data sourced from facial recognition. If police in a neighboring city wanted to share a list of suspects sourced from facial recognition, the San Francisco Police Department would be prohibited from using it, explains Lee Hepner, a legislative aide who helped draft the bill. He says this is only the first of many steps toward changing how the city balances policy and technology.
Regulating surveillance technology is difficult because data collected for one purpose can be used for another. That’s not always a bad thing. In New York, for example, city data tracking asbestos complaints were used to predict tenant harassment. In Sacramento, city officials tracked welfare recipients suspected of fraud using license-plate data that police collect to find stolen vehicles. Peskin’s proposal is something of a “good faith” ordinance: Departments must explain how they plan to use the technology, but they aren’t forbidden from using data in other ways.
This flexibility in how departments use data is balanced by the second part of the ordinance, which requires annual reviews. Each year, departments must submit a “surveillance impact report” for board review and public discussion, explaining how the technology was used and why. These impact reports include all uses of the data, costs (including personnel, maintenance, and equipment), and, crucially, where the technology was used and crime statistics for those locations.
If city officials want to keep using surveillance cameras in a park to prevent car break-ins, for example, each year they would have to submit evidence that the cameras reduced crime.
“We want there to be a justification for use of the technology in the location,” Hepner says. “If the ostensible benefits of any of these surveillance technologies is the prevention of crime, then it’s helpful for the board to be able to track that. Over time, is this technology having a positive impact?”
The ordinance applies retroactively to all surveillance systems and technologies already in use. Officials will have to submit annual impact reports and disclose the costs of surveillance tech already in operation, ideally revealing to the public how deeply embedded surveillance already is in their daily lives. And even after a technology is approved, that approval can be rescinded later on the basis of the annual reviews.
But Peskin’s bill regulates facial recognition as a tool for policing, not for commerce. Consumers use facial recognition to unlock phones, board flights, tag friends in wedding photos and children in summer-camp pictures, and even buy beer at baseball games.
Read the source article in The Atlantic.