Unless it's completely clear that it's not a gun, the reviewer is essentially always going to pull the alarm. The risk of a false alarm is going to be seen as minimal, while the risk of a false negative is catastrophic.
A false alarm makes the news for now because it's novel; we all go "What the hell, guys?" and life goes on.
Nobody wants to end up sitting in front of a prosecutor, the media, etc., explaining why they chose not to pull the alarm when the AI _clearly_ identified the gun, and instead chose to let all those kids die.
>The only way this gets fixed is if there are consequences at every level for false positives.
Do we really want consequences for false positives? If a kid is smoking a cigarette in the bathroom and the smoke detector goes off, the school should evacuate. The smoke alarm went off. No principal is going to sign off on the assumption that "Timmy is smoking, it's not a real fire". The principal shouldn't be punished for responding to the alarm. Timmy... probably should get reprimanded, but that feels off-metaphor.
In the example we are given, Timmy did nothing wrong. Having a clarinet is not contraband, and he should not be punished. The admin who called a lockdown did nothing wrong, as they were responding to the system in the way they were trained to use it. This is all in the name of safety, where things are done out of 'an abundance of caution'.
>"It's not my fault the cops shot the kid, the system said it was a gun."
No, it's the cop's fault. The cop hasn't been trained to use the AI security system, and is instead given their own SOP for assessing threats.
That sounds good on paper, but is really impossible to implement in any practical way.
In this case, the kid was holding the clarinet like a weapon, and though we have not seen the actual video, the descriptions of it make it sound like the overall resolution was poor.
The alternative to the false positive here is to not report anything that you cannot be 110% certain of, which means that you're likely to miss some true positives.
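To make that tradeoff concrete, here is a minimal sketch (Python, with made-up confidence numbers and a hypothetical detector output format, not any real system's API) of what moving the alert threshold does:

    # Hypothetical detector outputs: (confidence the object is a gun, ground truth).
    # Numbers are illustrative only, not taken from any real system.
    detections = [
        (0.95, True),   # clearly a gun
        (0.70, True),   # partially occluded gun
        (0.65, False),  # clarinet held like a weapon
        (0.30, False),  # umbrella
    ]

    def alarm_outcomes(threshold):
        false_alarms = sum(1 for conf, is_gun in detections if conf >= threshold and not is_gun)
        missed_guns = sum(1 for conf, is_gun in detections if conf < threshold and is_gun)
        return false_alarms, missed_guns

    for t in (0.5, 0.8):
        fa, missed = alarm_outcomes(t)
        print(f"threshold={t}: false alarms={fa}, missed guns={missed}")
    # threshold=0.5: false alarms=1, missed guns=0
    # threshold=0.8: false alarms=0, missed guns=1

Raise the threshold and the clarinet case goes away, but the partially occluded gun does too; there is no setting that eliminates both error types, which is exactly why a human reviewer ends up erring toward pulling the alarm.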
Overall this situation mostly reads like everything worked as intended, and the press turned it into more than it needed to be. School shootings are a real thing; there is plenty of evidence of that. Weapons detection has become a necessary component of a school safety strategy. For many reasons, it is not practical to have personnel at the school, or within the district, act as the first-pass reviewer of AI detections of weapons.
Don't be defeatist. The situation under consideration here is probably monitored by security cameras and body cams end to end. Everyone not following correct protocol did so on camera. Punishing willful ignorance and incompetence is certainly possible.
One approach for this is that the person who makes the call needs to be on-site and in front of the situation. Similarly, for a judge signing off on a no-knock warrant: the judge should at least be present, and should be required to walk through the building/home/apartment after the warrant is served. If it's not important/severe enough for a judge to do this, then I would argue that there's no need for the "no knock" aspect.