Who and what do we believe in this new world of Fake News, when algorithms work against us, we can’t see who is behind a news story or viral video, and even our politicians spread misinformation?
For transparency, and because of the nature of this topic, all resources for this page are linked directly in the text for quick and easy access. The same resources can also be found on the citations page.
Let’s Begin with Fake News & the FCC Fairness Doctrine
In 1929, the Federal Radio Commission, the predecessor to the FCC, in its Great Lakes Broadcasting Co. Decision stated that the “public interest requires ample play for the free and fair competition of opposing views, and the Commission believes that the principle applies to all discussions of issues of importance to the public.” This meant that broadcasters were not only obligated to cover a topic fairly but could not express their own views. This was to “ensure that broadcasters did not use their stations simply as advocates of a single perspective.” (First Amendment Encyclopedia)
By 1940, the restriction on broadcasters expressing personal views was lifted, and time was allotted for discussing topics after both sides were presented. Journalists later pushed back against the personal-attack rules, arguing that they, not the FCC, should decide how to balance the fairness of a story. The Fairness Doctrine was overturned and abolished entirely in 1987.
Today we have several things working against us. We have news agencies that have taken advantage of the abolition of the FCC Fairness Doctrine and do exactly what the original ruling was meant to prevent: their stations advocate a single perspective. They also decide for themselves what counts as a fair and balanced news story.
We need to start asking the question: should a random user posting on social media (or elsewhere) be given the same amount of airtime on a topic as a professional in the field? Doing so suggests that the two have the same authority on the subject, but is that true? If we think so, why should they be given the same authority? What value does professional expertise hold if any random person can now be treated as an authority on any topic? Who should we be listening to?
When major media outlets flood the news stream with their own perspective, determine for themselves what counts as free and fair competition of opposing views, and can ultimately bring in anyone, authority or not, on a subject, where do we really turn for an unbiased source? What does unbiased even look like?
More information about the FCC Fairness Doctrine can be found at the First Amendment Encyclopedia.
FAKE NEWS is divided into three categories:
News that is made up or invented to discredit others or make money (objective).
News that has basis in fact but is spun to fit a particular agenda (objective).
News that people don’t feel comfortable hearing or don’t agree with (subjective).
This is done for many reasons including:
It’s cheaper to make.
It’s difficult and costly for people reading the information to tell the difference between what is and what isn’t accurate.
People read it because it confirms their beliefs/bias.
So what is real or quality news?
People are able to get information from a variety of sources they deem reputable: teachers, clergy, politicians, parents, friends and family, people they know and trust. What if that information is incorrect or false? What happens when those we consider reputable sources are no longer the source of good information? How would we know?
Here are some resources to help understand what real or quality news should look like:
“What do we Mean by Quality News” by Aviv Ovadya
“Here’s what Non-Fake News Looks Like” by Columbia Journalism Review
“Fake News vs. Real News: Tips for Evaluating Information” by Northwest Arkansas Community College
This and more information can be found in the Study for the “Assessment of the Implementation of the Code of Practice on Disinformation.”
Identification
This doesn’t mean not to trust anyone; it means to proceed with caution. We also need to be careful with a word like “facts”: data can change, and we must proceed with the best data available and stay diligent. The following is taken from the Fact Disinformation Guide. For more information on these categories and others, click through to the guide.
Check the Source
Check the URL; many sites spoof well-established media outlets.
Does the URL look fake? Are there misspellings, or strange domains like .xyz, .ir, .ph?
Does their website have an about page? Sometimes the information about the company will not match the website.
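The URL checks above can even be partially automated. The sketch below is a minimal, hypothetical illustration (the outlet list and suspicious-TLD list are examples chosen for this page, not an authoritative blocklist): it flags unusual top-level domains and uses fuzzy string matching to catch domains that look like misspellings of well-known outlets.

```python
from urllib.parse import urlparse
from difflib import SequenceMatcher

# Hypothetical examples for illustration only.
KNOWN_OUTLETS = {"nytimes.com", "bbc.com", "reuters.com", "apnews.com"}
SUSPICIOUS_TLDS = {".xyz", ".ir", ".ph"}  # examples from the guide above

def url_red_flags(url: str) -> list[str]:
    """Return a list of heuristic warnings for a URL (hints, not a verdict)."""
    flags = []
    host = urlparse(url).hostname or ""
    # Strip a leading "www." so comparisons use the bare domain.
    domain = host[4:] if host.startswith("www.") else host

    # Unusual top-level domains are a red flag per the guide.
    if any(domain.endswith(tld) for tld in SUSPICIOUS_TLDS):
        flags.append(f"unusual top-level domain: {domain}")

    # Very similar but not identical to a known outlet often means spoofing.
    for known in KNOWN_OUTLETS:
        ratio = SequenceMatcher(None, domain, known).ratio()
        if 0.8 <= ratio < 1.0:
            flags.append(f"looks like a misspelling of {known}: {domain}")
    return flags
```

A real tool would need a much larger outlet list and smarter matching, but even this simple heuristic catches classic spoofs like “nytirnes.com” (rn in place of m).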
Does the Story Provide Context?
Does the story have a provocative headline or title?
Is the title just click-bait and not supported by the content of the article?
Check the date the story was written; make sure the outlet isn’t recycling old stories.
Look for Exaggerated, Sensationalist Language
Does it spark strong emotions as a way of focusing on emotions rather than facts?
Exaggerated language should be treated as a red flag for potential disinformation or biased information.
Does the Outlet Openly Reveal Their Information Resources?
Watch for vague attributions like “American researchers concluded” or “Scientists agree that.” If an outlet is unable or unwilling to reveal its sources, that is another thing to look out for.
Poor Grammar & Spelling Mistakes
Are there spelling and grammatical errors?
Is there awkward use of language? A lot of disinformation is created by non-native speakers and run through online translators; this is another red flag.
Use Fact Checkers
These are just a few…
Resource Links: Fact Disinformation Guide. This guide also includes information on how to spot bots & trolls online, and how to check whether photos & videos are real or fake.
Misinformation, Disinformation, & Propaganda
Two words that became buzzwords after the Cambridge Analytica scandal are misinformation and disinformation. Both refer to information that is false or misleading, so they sound similar, but one major component separates them: the source. Think of it this way:
A regular user posting, sharing, or retweeting anti-vaxx content is spreading misinformation.
Controlled, concerted efforts to spread specific untrue information are disinformation: deliberately distorted information leaked into the communication stream.
It may seem like a small difference, but for this part of the conversation it makes a HUGE difference. Misinformation is spread every day; disinformation is Information Warfare.
The word propaganda has been thrown around a lot more recently along with terms like Fake News. It’s easy to think that we know what people mean but what is the difference between Disinformation and Propaganda?
Propaganda tries to convince us to believe something.
Disinformation is a highly organized attempt to deceive us into believing something.
This is an oversimplification, but for the purposes of this page, and the following information, this is the easiest way to think of it. Some of the disinformation that we see is propaganda, some of it is just intended to confuse us, to flood us with information with two aims:
To create fatigue: generating so much information, false or otherwise, that we become too tired to do anything about it.
To generate so much information that we stop knowing who or what to believe.
Active Measures
The last term you should be familiar with before moving forward is Active Measures. The term was coined in the Soviet Union during the Cold War of the 1950s to describe "covert and overt techniques for influencing events and behaviors in foreign countries" (LSE Consulting Final Report). Disinformation was one major element of these operations; others included front organizations, agents of influence, fake stories in non-Soviet media, and forgeries.
The goal of Active Measures is to create distrust in the government, the media, and each other, so that, regardless of the abundance of information available, people are unable to draw sensible conclusions about anything. It is intended to erode trust in government and government institutions and to pit groups (any groups) against one another: racial groups, gender groups, age groups, anything that helps sow discord in the community.
What does it all mean?
Because of our technological landscape, information can spread in ways, at speeds, and to audiences it never reached in the past. This also means misinformation, disinformation, and bad actors can easily take advantage of it. How do we protect ourselves and others when we find ourselves in a post-truth landscape? What happens when truth no longer matters and algorithms are set up in ways that help bad information spread faster?
There are hallmarks, or tell-tale signs, we can watch for when reading information on the internet. Until companies like Google, Facebook, and other Big Data firms are held accountable for their part in the spread of mis- and disinformation, regulations are passed, or we as private citizens reclaim some ownership of our personal data, we need not only to understand what we’re looking at but also to take responsibility for what we share online.
The following should be thought of as a road map. Here are the things to think about, things to look for, and examples of disinformation and Active Measures.
This is a list compiled from various sources, which you can find in the Mis/Disinfo section of the citations page, as well as in the direct links at the bottom of the “Hallmarks” section.
Hallmarks of Disinformation Campaigns
Division
Getting groups of people to distrust one another, whether racial groups, gender groups, or even age groups. Instead of trying to make things better, these stories try to make things worse.
Enablers
Also known as the “useful idiot”: find someone willing to take the message and spread it to more people. The more credentials or online/media presence someone has, the better. In some cases, credentials are falsified.
Big Bold Lies
Creating a lie so outrageous that it couldn’t possibly be believed, or that would be damaging if people could be made to believe it. The lie needs a tiny bit of truth in it so that, over time, it becomes easier to believe.
Deny Everything
When someone does present facts: deny everything. Deny, deny, deny.
Concealment
Hide the origin of the story, or make it so that no one cares or asks where the story started. If people do look for the source, make the story appear to have come from somewhere else, for example by using fake websites.
Repetition
Repeat the lie and deny as much as possible. Tell a lie enough times and someone, or even thousands will believe it.
Working Example
1983
This news story about the origin of the AIDS virus was a fake news story released by the Soviet Union. It was published in a small newspaper (the Patriot) in New Delhi that was later revealed to have Soviet funding.
1987
Within four years, a seemingly small story, fueled by the AIDS crisis, was on the nightly news and being reported by major news outlets. There are still echoes of this story online today.
2020
By spring 2020, stories began emerging that COVID-19 (coronavirus) had started in a lab in the US. This time, however, it wasn’t the Soviet Union; it was China, in a war of disinformation with the US.
How it worked:
The Soviet Union concealed the original story by publishing it in the Patriot. By using a local newspaper with global reach, they were able to hide where the story came from. The same is done today with proxy websites like Global Research and fake social media accounts that “friend” locals in communities (it’s easier to believe “friends” than strangers), anything that puts distance between the real funding source and the consumer (you) reading the information.
They found an enabler, a “doctor” willing to spread the message. People listen to doctors as authorities, and this is often exploited by using people with fake credentials (as they did in this case) or seemingly sympathetic ones. Many times, people in this role do not know they are working as a mouthpiece for foreign governments. Today many of these people push news stories from Russia Today (RT) and Sputnik, known Russian propaganda and disinformation sites.
They created a big bold lie; when confronted about it, they denied it and repeated the lie in different media outlets until it was finally being repeated in the US.
This is done today by many different bad actors, and with the use of technology it is easier to conceal and to spread information at alarming rates. Algorithms as currently designed help bad information spread, and it is imperative we quickly learn to separate the good information from the bad.
The Following are Examples of Past & Current Disinformation and Active Measures Campaigns
(Click on photos for more information)
On the Horizon
The last thing is a relatively new technology that is making headlines: Deep Fakes. Deep Fakes use a form of AI called deep learning to create images, videos, and sound recordings of people that are entirely fake. The technology used to make these videos is still a ways off from being a significant threat; today it is mostly used to swap celebrities’ faces and voices, as shown in the video below. However, the Tom Cruise TikTok example from February 2021 shows how much the technology has improved since deep fakes first appeared in 2017.
Celebrity Deep Fakes
Retrieved from YouTube in March 2021
@yashar Tweet about Tom Cruise Deep Fake