A pilot program testing AI-powered weapons scanners inside some New York City subway stations this summer did not detect any passengers with firearms — but falsely alerted more than 100 times.
In total, there were 118 false positives — a rate of 4.29%.
Earlier this year, investors filed a class-action lawsuit, accusing company executives of overstating the devices’ capabilities and claiming that “Evolv does not reliably detect knives or guns.”
I mean, in terms of performance, I’d be more concerned about the false positive rate than the false negative rate, given the context. Like, if you miss a gun, whatever. That’s at worst just the status quo, which has been working; some money gets wasted on the machine. But if you’re incorrectly stopping more than 1 in 25 New Yorkers from getting on their train, and you apply that rate across all subway riders, that sounds like a monumental mess.
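The back-of-envelope math checks out, and it's worth making explicit. A quick sketch below works backward from the article's two figures (118 false positives, 4.29% rate) to the implied number of scans, then extrapolates to a hypothetical ~3 million rides per weekday (my rough assumption, not a figure from the article):

```python
# Sanity-check the article's numbers and the "1 in 25" claim.
false_positives = 118
fp_rate = 0.0429  # 4.29%, as reported

# Implied number of scans during the pilot:
total_scans = false_positives / fp_rate
print(round(total_scans))  # ≈ 2751 scans

# "More than 1 in 25": 1 / 0.0429 ≈ 23.3, i.e. roughly 1 in 23 riders flagged.
print(round(1 / fp_rate, 1))  # ≈ 23.3

# Hypothetical citywide extrapolation, assuming ~3 million subway rides
# per weekday (an assumed figure for illustration only):
daily_rides = 3_000_000
print(round(daily_rides * fp_rate))  # ≈ 128,700 riders stopped per day
```

At that scale, even a "small" false positive rate translates into six figures of wrongly stopped riders daily, which is the monumental mess in question.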