The age of information is not what we all hoped it would be. We successfully digitized the majority of human knowledge, and we even made it freely accessible to most. Now the problem is different: we have too much information. Answers to most questions can be found in thousands of distinct places online, and the new problem is “whose information can we trust?”
What Platforms Think They Should Do About Fake News
Twitter and Facebook have recently been under scrutiny for their censorship of coronavirus-related misinformation. For example, a video claiming hydroxychloroquine is a COVID-19 cure recently went viral on Facebook, and the video keeps getting taken down. The video contains some wild assertions, made by Stella Immanuel, who also believes that gynecological problems are the result of spiritual relationships.
By removing content they believe to be dubious, Twitter and Facebook have made themselves arbiters of truth. Anecdotally, every post I’ve seen them remove has contained misinformation, but the fact remains… these platforms have become self-appointed authorities on the veracity of our information.
This is a problem.
So We Can’t Censor?
We certainly can, and we certainly should in some cases. Let’s get some obvious ones out of the way:
- Child Pornography
- Death Threats
There may be some other clear examples where censoring is unquestionably the right choice, though I doubt there are many. Let’s look at some more controversial examples:
- Hate Speech
I would posit that here the answer is contingent on who is doing the censoring. While hate speech and misinformation are disgusting, I don’t want a government deciding what is hate speech, or deciding what is truth.
That said, I certainly want an online system where hate speech and misinformation are effectively filtered out of the conversation. Ideally, every online participant is a virtuous, educated, and concerned conversationalist. If this were the case, posts of an undesirable nature would effectively be ignored due to not receiving the likes, shares, upvotes, and comments they need to spread.
In reality, we can’t take such a passive approach. We need to protect our gardens.
Misinformation – What Should Platforms Do?
It starts here. All online platforms are responsible for the tools they provide for moderation, if not for the moderation itself.

Platforms should:
- Remove dangerous content such as doxxing, threats, child trafficking, etc.
- Provide tools for users to mark content as harmful or misleading
- Mark content as dubious
Platforms should not:
- Remove misleading content
By removing misleading content, platforms run the risk of fueling an argumentum ad martyrdom mentality. Removing information can backfire, causing people to suspect the platform has a nefarious reason for removing it.
But the fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown. – Carl Sagan, Probably
What Should Users Do About Misinformation and Fake News?

Users should:
- Read entire articles before liking, sharing, or commenting
- Deploy extra skepticism to information with a clear political or monetary agenda
- Be self-aware about their preconceived notions and confirmation biases
- Look for the primary source of information
- Ensure information is up-to-date
Users should not:
- Reward clickbait titles with engagement
- Exclusively follow, subscribe, or search for content that aligns with their current beliefs
- Assume their position is valid because people are trying to remove their content
- Trust articles and posts coming from sites that appear unsafe
Thanks for reading, now take a course!
Interested in a high-paying job in tech? Land interviews and pass them with flying colors after taking my hands-on coding courses.
Subscribe to my newsletter for more coding articles delivered straight to your inbox.