SAN FRANCISCO, California — San Francisco recently voted to ban its government agencies from using facial recognition technology.
The city’s Board of Supervisors approved the proposal 8-1. Set to take effect in a month, the ordinance bars city agencies, including law enforcement, from using the tool.
The ordinance also requires city agencies to obtain board approval before deploying surveillance technology, and mandates audits of surveillance tools already in use.
Other cities have approved similar transparency measures.
In a 2018 presentation at the Women In Tech Conference, held at Folketshus in Stockholm, Joy Buolamwini, a researcher at the M.I.T. Media Lab, laid out the faults of facial recognition technology.
“Facial analysis technology is often unable to recognize dark skin tones,” Buolamwini said in her keynote. “This bias can lead to detrimental results.” She urged her colleagues to create more inclusive code.
“When the person in the photo is a white man, the software is right 99 percent of the time. But the darker the skin, the more errors arise — up to nearly 35 percent for images of darker-skinned women,” according to a study that breaks fresh ground by measuring how the technology works on people of different races and genders.
These disparate results, calculated by Buolamwini, show how biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition.
American civil liberties groups have expressed unease about the technology’s potential for abuse by government, fearing it could push the United States toward an overly oppressive surveillance state.
Written by Andrew Mitchell for Woodlawn Post Technology