School surveillance will never protect children from shootings


If we are to trust the vendors of school monitoring systems, K-12 schools will soon operate like an agglomeration of Minority Report, Person of Interest, and Robocop. “Military grade” systems would siphon up student data, picking up on even the hint of harmful ideas, and dispatch officers before would-be perpetrators could carry out their dastardly deeds. In the unlikely event that someone managed to evade the predictive systems, they would inevitably be stopped by next-generation weapons detection systems and biometric sensors that interpret a person’s gait or tone of voice, alerting authorities to impending danger. The final layer could be the most technologically advanced: some form of drone, or perhaps even a robot dog, that could disarm, distract, or incapacitate a dangerous person before any real damage is done. If we invest in these systems, the thinking goes, our children will finally be safe.

Not only is this not our present, it will never be our future—no matter how expansive and intricate surveillance systems become.

Over the past few years, a host of companies have sprung up promising various technological interventions that will reduce or even eliminate the risk of school shootings. Proposed “solutions” range from tools that use machine learning and human tracking to predict violent behavior, to artificial intelligence paired with cameras that determine people’s intent from their body language, to microphones that identify the potential for violence based on tone of voice. Many of them use the specter of dead children to hawk their technology. Surveillance company AnyVision, for example, uses images from the Parkland and Sandy Hook shootings in presentations touting its facial and gun recognition technology. In the immediate aftermath of the Uvalde shooting last month, Axon announced plans for a Taser-equipped drone as a means of dealing with school shooters. (The company later put the plan on hold after members of its ethics board resigned.) The list goes on, and each company would have us believe that it alone has the solution to this problem.

The failure here lies not only in the systems themselves (Uvalde, for example, appears to have had at least one of these “safeguards” in place), but also in the way people perceive them. Much like policing itself, any failure of a surveillance or security system usually results in calls for more extensive surveillance. If a threat is not predicted and prevented, companies often cite the need for more data to fix the flaws in their systems, and governments and schools often accept this. In New York, despite the numerous failures of surveillance mechanisms to prevent the recent subway shooting (or even catch the shooter), the city’s mayor has decided to double down on the need for even more surveillance technology. Meanwhile, city schools are reportedly ignoring a moratorium on facial recognition technology. The New York Times reports that US schools spent $3.1 billion on security products and services in 2021 alone. And the recently passed gun legislation includes another $300 million to increase school security.

But at their root, what many of these predictive systems promise is a measure of certainty in situations where none can exist. Tech companies consistently posit the notion of complete data, and therefore perfect systems, as something just over the next ridge: an environment so thoroughly monitored that any antisocial behavior can be predicted and violence prevented. But a comprehensive data set of ongoing human behavior is like the horizon: it can be conceptualized but never actually reached.

Currently, companies engage in various bizarre techniques to train these systems: Some stage fake attacks; others use action movies such as John Wick, hardly good indicators of real life. At some point, as creepy as it sounds, it’s possible these companies will train their systems on real-world data. However, even if footage of actual incidents became available (and in the large quantities these systems require), the models would still fail to accurately predict the next tragedy based on previous ones. Uvalde was different from Parkland, which was different from Sandy Hook, which was different from Columbine.

Technologies that offer predictions about intentions or motivations make a statistical bet on the likelihood of a given future based on data that will always be incomplete and decontextualized, regardless of its source. The basic assumption behind any machine learning model is that there is a pattern to be identified; in this case, that there is some “normal” behavior that shooters exhibit at the scene of a crime. But finding such a pattern is unlikely. This is especially true given the near-constant shifts in teenagers’ slang and habits. Arguably more than many other segments of the population, young people change the way they speak, dress, write, and present themselves, often explicitly to evade the watchful eye of adults. Developing a consistently accurate model of that behavior is next to impossible.


