YouTube decided to remove videos that are “hateful” and “racist.”
Now, I’m not going to address the issue of who determines which videos meet that test. Do you hate young people, or seniors? Do you hate Catholics, or just those who are doctrinal? Do you hate Jews, or just Zionists? If you are “pro-choice” on abortion, does that mean you hate babies? If you note that African-Americans are more likely to have higher blood pressure, are you a racist? If you note that Jews are more likely to carry Tay-Sachs disease, are you anti-Semitic? If you note crime statistics about black-on-black crime, are you a racist?
I could go on and on. The point is, we have to be careful about inhibiting robust discussion, especially if certain censors would apply their own prejudices. And sometimes it’s best not to drive underground obnoxious and noxious views that we find offensive. Keep them in the open. Confront them. (Indeed, I’ve helped develop technology that can use that information on social media to identify people who might be a danger to themselves and others.)
But in this blog, my concern is not the inherent challenge of censorship that I just described. It is something more specific.
Rather, I note that in trying to purge videos its censors find to be, for example, “white supremacist,” YouTube did not remove only the videos and channels it says advocate white supremacy; it also removed videos that oppose racism and white supremacy.
For example, it eliminated a video from a university center that studies hate and extremism. Not only did groups opposing racism find their videos taken off, but bloggers who oppose white supremacy were also purged.
The YouTube blunder is hardly isolated. Previously, Facebook decided it had to restrict “anti-gay posts”; in doing so, it also censored posts by “LGBT users.” How did this happen? Well, “pro-gay” users used terms like “queer.”
One self-proclaimed advocate against “hate” said that tech companies need human moderators rather than algorithms. Before I continue, ask yourself whether YouTube and Facebook should rely on these outside organizations, which have their own agendas, to decide what is hate, and who the “human moderators” would be. So far, the “human intervention” has eliminated many YouTube videos that take controversial positions but are not hateful. So “human beings” are not the answer.
I would argue that there actually is a danger in human intervention, in general, because these humans may be politically correct, or rely on consultants or self-interested persons who are politically correct. If someone says, “I don’t believe in same-sex marriage,” is that an opinion, or is it hate? Well, if you rely on a particular “gay rights” group, it may be deemed hate. If you ask someone who is gay, the response may be different; the gay person may want to know more before condemning the person as a hater.
But the real issue here is technology and technique. I have been involved for several years in developing smart artificial intelligence. We can help YouTube and Facebook not only to differentiate between “hate” and “controversial opinion” but to overcome the absurd mistakes they make: first, classifying opinion as hate, and second, as YouTube did, purging material because it mentions words that the “haters” use.
I’ve worked on systems that recognize the difference between discussing an issue and taking a position, and, if a position is taken, which position it is. This is analogous to public opinion polling, where you ask someone what the number one issue is. The person says, for example, capital punishment. But without further probing, you don’t know whether the person’s number one concern is that we need capital punishment and it should be enforced, or that it is wrong or unfair and should be ended.
So, when it comes to social media, I have helped develop an approach where we would not indiscriminately purge something as hateful without using machine learning/artificial intelligence to determine what the sentiment is. We can monitor and analyze public social media, but if we do what YouTube has done, well, that’s amateur hour. It’s like concluding, in that survey question, that everyone who mentioned capital punishment favors it, or that everyone who mentioned it opposes it.
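The distinction above can be illustrated with a toy sketch. This is not the system I worked on; it is a deliberately crude example, with made-up cue-word lists, showing why matching keywords alone (YouTube’s mistake) cannot separate a text that mentions a topic from one that supports or opposes it, and how even a simple stance step does better:

```python
# Toy illustration: keyword matching vs. a crude stance check.
# The cue-word lists below are invented for this example; a real
# system would use trained machine-learning models, not word lists.

SUPPORT_CUES = {"support", "favor", "need", "defend", "endorse"}
OPPOSE_CUES = {"oppose", "condemn", "end", "against", "wrong", "fight"}

def mentions(text: str, topic: str) -> bool:
    """Keyword matching: fires whenever the topic appears at all."""
    return topic.lower() in text.lower()

def stance(text: str, topic: str) -> str:
    """Crude stance detection: count cue words in a mentioning text."""
    if not mentions(text, topic):
        return "no mention"
    words = set(text.lower().split())
    support = len(words & SUPPORT_CUES)
    oppose = len(words & OPPOSE_CUES)
    if support > oppose:
        return "supports"
    if oppose > support:
        return "opposes"
    return "discusses"

pro = "We need capital punishment and it should be enforced"
con = "Capital punishment is wrong and we must end it"

# Keyword matching flags both texts identically...
assert mentions(pro, "capital punishment")
assert mentions(con, "capital punishment")
# ...while even this crude stance step tells them apart.
print(stance(pro, "capital punishment"))  # supports
print(stance(con, "capital punishment"))  # opposes
```

A filter that purges on `mentions` alone would remove the opponent of capital punishment along with the supporter; that is exactly the pattern by which anti-racism channels were swept up in a purge of racist ones.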
We can have smart social media, but not without a smart way to monitor social media.