by Robert J. Hansen
San Francisco, CA – The San Francisco Board of Supervisors approved on Tuesday a proposal authorizing the police department to use potentially lethal force with remote-controlled robots.
The 8-3 vote came after an hours-long debate and an amendment specifying that officers may deploy the robots only after de-escalation tactics or alternative force options have been tried and failed to subdue the suspect.
Supervisor Dean Preston told the Vanguard this week that he doesn’t think most people in San Francisco support this policy and believes that, if placed on the ballot, it would go down overwhelmingly, just as a 2018 initiative for unrestricted taser use was defeated, 62 percent to 38 percent.
“I’m getting tons of communications of people who are against what the board did (Tuesday),” Preston said.
Members of the public were not permitted to speak about the police equipment policy during public comment, and callers were specifically warned that they would not be allowed to address it.
Board President Shamann Walton questioned whether giving police this authority will make San Franciscans safer.
“I don’t see how a robot being armed with a weapon would save lives,” Walton said. “The last thing I want is a robot responding to a mass shooting at my children’s school or my grandchildren’s school.”
Supervisors who supported the policy said it is necessary to keep police officers and the city safe.
“These are the last-resort measures used after all other measures, to save lives,” Supervisor Catherine Stefani said. “I will be voting in support of this measure today, in our continued effort to keep the city safe.”
Stefani acknowledged that the city has never had to use this kind of weaponry but said it is necessary because police are “increasingly outgunned.”
“The militarization of our society exploded when assault rifles for citizens were allowed,” Stefani said. “Our police increasingly find themselves outgunned.”
The last SF police officer killed by gunfire was Officer Bryan Dennis Tuvera, on December 23, 2006.
In a press statement, San Francisco Police Chief William Scott said the use of robots in potentially deadly force situations is a last-resort option.
“We live in a time when unthinkable mass violence is becoming more commonplace. We need the option to be able to save lives in the event we have that type of tragedy in our city,” Chief Scott said.
A new state law, AB 481, requires police departments to seek approval from their governing body and is intended to give the public a forum and a voice in the acquisition and use of military-grade weapons, which, according to the legislation, historically have had a negative effect on communities.
Kat Scott, a Bay Area roboticist, wrote a letter to the Board of Supervisors protesting the draft policy.
“I have reviewed the robots currently in SFPD’s possession; they were not designed for lethal force. As such, any novel armaments attached to these robots will be, at best, jury-rigged, and pose a danger to both SFPD and the citizens of San Francisco. The current policy permits the use of technology that lacks sufficient regulation and could create dangerous legal precedents. More insidiously, the policy as it stands will create a financial incentive to create weaponized robots for domestic use against American citizens at a time when no meaningful laws exist to regulate their use,” Scott’s letter said.
Supervisor Connie Chan, a member of the committee that sent the proposal to the full board, could not be reached for comment on why the use of force by robots is needed.
Chan told the Associated Press that she understood concerns over the use of force but said that according to state law she is “required to approve the use of these types of equipment.”
Other concerns about the equipment policy were raised by the American Friends Service Committee (AFSC), which asked that all of the SFPD’s assault rifles be included in the policy.
Currently, 375 rifles are being omitted because Chief Scott defines them as “standard-issue service weapons.”
Their omission could mean less oversight over how the weapons are used, according to the AFSC.
SF police have 12 fully functional robots at their disposal, according to SFPD spokesperson Robert Rueca.
Rueca said the robots have never been used to attack anyone.
“The robots are remote-controlled and are typically used to investigate and defuse potential bombs or to surveil areas too awkward or dangerous for officers to access,” Rueca said.
Throughout the U.S. there have been very few situations where lethal force was used with a robot. One was in 2016, when Dallas police strapped plastic explosives to a robot to blow up a sharpshooter who had killed five police officers.
One of the SFPD’s robots, the Remotec F5A, is the same model as the one used by the Dallas police.
“It is ridiculous to give the police killer robots,” an anonymous caller said before he was swiftly cut off from the meeting.
There’s always Asimov’s Three Laws of Robotics to start with:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
But I see this going more like RoboCop. He started with four directives:
“Serve the public trust”
“Protect the innocent”
“Uphold the law”
“Any attempt to arrest a senior officer of OCP (or insert Salesforce, Twitter, Google) results in shutdown” (Listed as [Classified] in the initial activation)
But then the progressives in San Francisco would add to the robot’s (or cyborg’s) programming, with directives like these:
233. “Restrain hostile feelings”
234. “Promote positive attitude”
235. “Suppress aggressiveness”
236. “Promote pro-social values”
238. “Avoid destructive behavior”
239. “Be accessible”
240. “Participate in group activities”
241. “Avoid interpersonal conflicts”
242. “Avoid premature value judgements”
243. “Pool opinions before expressing yourself”
244. “Discourage feelings of negativity and hostility”
245. “If you haven’t got anything nice to say don’t talk”
246. “Don’t rush traffic lights”
247. “Don’t run through puddles and splash pedestrians or other cars”
248. “Don’t say that you are always prompt when you are not”
249. “Don’t be over-sensitive to the hostility and negativity of others”
250. “Don’t walk across a ball room floor swinging your arms”
254. “Encourage awareness”
256. “Discourage harsh language”
258. “Commend sincere efforts”
261. “Talk things out”
262. “Avoid Orion meetings”
266. “Smile”
267. “Keep an open mind”
268. “Encourage participation”
273. “Avoid stereotyping”
278. “Seek non-violent solutions”
These new directives would make the robot or cyborg police officer ineffective at its duties.