More than a year ago, a tech-savvy Ph.D. wrote an interesting article about whether we can somehow reduce the violence caused by school shooters.
Eric Haseltine, a one-time Disney executive who later worked for the CIA and National Security Agency, wrote incisively in a Psychology Today blog post (May 19, 2018) titled “Can AI Reduce Violence from School Shooters?”
He cited a Secret Service study finding that a typical shooter had informed someone, at least in some implied way, of their plans. It may have been a peer, or school staff may have been aware that the student was troubled. Sociologist Katherine Newman noted that school shooters “let out hints many months in advance.”
So much of what Haseltine wrote is on point. People are unlikely to warn authorities because they may lack confidence or feel helpless, or because they fear creating a false alarm. Peers may simply not want to be “telling on someone.”
And what about the authorities? The more we encourage peers, teachers, school administrators, and others to come forward, the greater the number of “false positives.” So Haseltine proposed in his article an idea that my colleagues and I had already developed at the time: the use of artificial intelligence and machine learning to find what I term “persons of interest” who are, or at least may be, a danger to themselves and to others. This includes, of course, potential school shooters.
Haseltine correctly pointed out that “AI algorithms are not subject to social pressures.” He even suggested combining a search of social media with privacy technologies to “protect civil liberties.”
Right now, I’m not sure we need to intrude on private communications on social media, and my colleagues and I have developed a way to monitor public social media and, using AI, to identify those persons of interest. It would not be up to us, or to the service we can now provide, to intercept private communications; that more intrusive step would be a matter for the authorities.
What we can do is provide an alert system, in real time, based solely on our monitoring and analysis of public social media using artificial intelligence.
Haseltine has it right, without even knowing what we can do: he notes that AI can analyze not just the text itself but the emotions behind it, the level of frustration and anger.
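To make that idea concrete, here is a minimal sketch of what emotion scoring on a public post might look like. It is not our proprietary system; it assumes an off-the-shelf, publicly available emotion classifier (the model name and the alert threshold below are illustrative choices, nothing more).

```python
# Minimal sketch (not the proprietary system described in this article):
# scoring a public post for anger/frustration with an off-the-shelf
# emotion classifier from the Hugging Face "transformers" library.
from transformers import pipeline

emotion_classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",  # example model
    top_k=None,  # return a score for every emotion label
)

ALERT_THRESHOLD = 0.85  # illustrative cutoff, not a validated value


def score_post(text: str) -> dict:
    """Return a {label: score} map for one public post."""
    scores = emotion_classifier([text])[0]
    return {item["label"]: item["score"] for item in scores}


def needs_review(text: str) -> bool:
    """Flag a post whose anger score exceeds the illustrative threshold."""
    return score_post(text).get("anger", 0.0) >= ALERT_THRESHOLD
```

A real system would of course go far beyond a single threshold on a single emotion, but the sketch shows the basic step: turning raw public text into emotion scores that can drive a real-time alert.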
He mentions other data, information that can be combined with what we do. For example, responsible authorities might draw on registries of gun ownership, school surveillance camera footage, or files of school discipline problems. There also is demographic data; he notes that most shooters are unathletic white students with above-average grades in rural areas.
A very perceptive and responsible expert in his field, Haseltine notes that emerging technologies can use encryption to protect private data when it is collected, so that the data is not tied to a person’s name. Then, and only then, when a red flag is triggered (and the system my colleagues and I have developed has such a red flag), could the authorities be notified of the student by name.
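The privacy pattern Haseltine describes can be sketched in a few lines. This is a hypothetical illustration, not our system: the name is encrypted at collection time, only the ciphertext travels with the analysis record, and the key (held by the responsible authority) unlocks the name only after a red flag fires. It assumes the Python “cryptography” package.

```python
# Hypothetical sketch of "encrypt identity, reveal only on a red flag."
from cryptography.fernet import Fernet


class PseudonymizedRecord:
    """Analysis data keyed to an encrypted identity token, not a name."""

    def __init__(self, authority_key: bytes, name: str):
        # Encrypt the name at collection time; only the ciphertext is kept.
        self.identity_token = Fernet(authority_key).encrypt(name.encode("utf-8"))
        self.red_flag = False

    def reveal_identity(self, authority_key: bytes) -> str:
        """Decrypt the name, but only after a red flag has been raised."""
        if not self.red_flag:
            raise PermissionError("Identity is released only on a red flag.")
        return Fernet(authority_key).decrypt(self.identity_token).decode("utf-8")


# Illustrative usage: the key stays with the responsible authority.
authority_key = Fernet.generate_key()
record = PseudonymizedRecord(authority_key, "Student Name")
record.red_flag = True                        # e.g., set by the alert system
print(record.reveal_identity(authority_key))  # "Student Name"
```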
In our approach, we do not even have access to, or utilize, private social media communications, only public social media. Nor do we have access to the other information that a school district or authorities might hold. That would be up to the responsible authorities to secure and correlate.
But what we provide, and no one else does, is the use of sophisticated and advanced proprietary AI to produce warnings in real time. School shooters are a priority, for sure. But there are many other uses for this new AI/social media technology.