One person’s “fake news” is another person’s “real news.” That, then, is the challenge. How do we distinguish someone dismissing information he doesn’t want to hear as “fake news” from information that is objectively misleading, fabricated, or false? Is it the headline that is “fake news,” or is it the article? How much of it is “fake” — is the “fake” interspersed with the “factual” to mix things up?
And how can you identify “fake news” if you are not an expert? You can try to find corroboration, but wouldn’t you prefer to rely on “trusted sources”? Or can you rely on them? Who can you trust?
All of this is in the context of the changing definitions of “news” and “breaking news” and the seemingly subjective term “fake news” — often used by one partisan to discredit a claim from a competing partisan. For decades, American jurisprudence has defined “defamation” and “libel” and “slander” in certain ways, subject to qualification; and through laws, court decisions, and precedents, the courts have set standards that applied mainly (but not solely) to publication — the printed word. But television and radio were hardly exempt, just as someone speaking at a public event not covered by any media might defame someone.
But now we are in uncharted territory. How do we apply the “accepted standards” of print journalism to social media? Do we differentiate, and how, between standalone websites and websites affiliated with print or broadcast media, or both? What about the individual who “shares” a defamatory article with others? Should that person be immune because she acted in good faith, or without malice? We could go on and on with hypotheticals.
Major newspapers have online versions. But the standards they applied to print editions may not be the same, or may not be applied consistently. They feel the pressure of immediacy. Look at the case of the Catholic high school boy who instituted lawsuits against CNN, The Washington Post, and other media. In effect, he is saying they did not do their due diligence before reporting things about him that were untrue and hurtful. Should they be held to higher standards than an individual who shares what turns out to be erroneous information? I think so.
In my own work in this field, I believe we need to preserve the ability of the individual user of social media to say what is on his or her mind, even if some others are offended or insulted. All this poses questions for another discussion – such as how we can preserve courtesy, politeness, and civility.
Social media has become the new public forum. Discussions can be heated. People may exchange epithets. But an individual can unfriend another individual. And one person can contest “news” as “fake” – and challenge another person on social media who posted or shared something. Where we get in trouble is what some people are calling for – more aggressive social media policing by the companies or, perhaps worse, by the government. In either case, the proverbial “cure might be worse than the disease.”
Instead, I propose that social media companies engage a third party to look for systematic fake news – that is, interference in the system, not spontaneous exchange between individuals. It is not the responsibility of social media to be politically correct or to enforce conformity. But it is their responsibility to be on alert for organized and systematic abuse – say, the Russian interference in social media. And it is also desirable to look for people who may be a danger to themselves (suicide) or to others (violence).
In a sense, the social media conglomerates need to do this, but at arm’s length. They need to engage a third party or third parties – and provide the ability for an independent third party to pursue this in an ethical manner. In other words, even if the media conglomerate foots the bill, it remains a degree removed. An additional safeguard is setting fail-safe criteria for action, so that social media conglomerate bureaucrats or, worse, robots, do NOT make unilateral decisions without being held accountable.