Intentionally deceptive “news” isn’t new. According to Thomas Jefferson, writing on June 11, 1807 to John Norvell:
“To your request of my opinion of the manner in which a newspaper should be conducted so as to be most useful, I should answer ‘by restraining it to true facts & sound principles only.’ yet I fear such a paper would find few subscribers.
[D]efamation is becoming a necessary of life: insomuch that a dish of tea, in the morning or evening, cannot be digested without this stimulant. even those who do not believe these abominations, still read them with complacence to their auditors, and, instead of the abhorrence & indignation which should fill a virtuous mind, betray a secret pleasure in the possibility that some may believe them…”
Our renewed focus on this age-old problem comes courtesy of the 2016 presidential election, when politics-focused fake news stories pervaded social media and the election resulted in a surprise victory for Donald Trump. Would people have voted differently had they not been exposed to false stories claiming, for example, that Pope Francis endorsed Donald Trump (he did not) or that an F.B.I. agent investigating Hillary Clinton’s email server was involved in a murder-suicide (also untrue)?
It’s certainly possible that deceptive stories containing false facts could manufacture enthusiasm or vitriol for a candidate, turning an undecided voter into a committed one. At a minimum, unfettered fake news makes it harder for voters to make informed decisions.
But what, if anything, can we do to reduce its spread or its impact?
In the summer of 2017, Pew Research Center and Elon University’s Imagining the Internet Center canvassed scholars, technologists, and other experts, asking them:
“In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas?”
In all, 1,116 people responded, with 51% saying that the information environment will not improve and 49% saying it will.
Those results don’t inspire confidence in the future.
There were mixed opinions on whether technology-based tools like verification systems or filters could alleviate the problem. For example:
“Christian H. Huitema, former president of the Internet Architecture Board, commented, ‘The quality of information will not improve in the coming years, because technology can’t improve human nature all that much.’”
“A number of respondents challenged the idea that any individuals, groups or technology systems could or should “rate” information as credible, factual, true or not.”
“John Markoff, retired journalist and former technology reporter for The New York Times, said, ‘I am extremely skeptical about improvements related to verification without a solution to the challenge of anonymity on the internet. I also don’t believe there will be a solution to the anonymity problem in the near future.’”*
“Emmanuel Edet, head of legal services at the National Information Technology Development Agency of Nigeria, observed, ‘The information environment will improve but at a cost to privacy.’”
If we can’t rely on advances in technology to manage fake news effectively, then what about the law?
Some of the respondents were optimistic about the law as a tool for regulating fake news, but I don’t agree with them. Intentionally misleading news stories are often protected speech under the First Amendment. Any governmental attempt to restrict content-based political speech would have to survive strict scrutiny, a legal standard that very few laws or regulations can meet. According to the U.S. Supreme Court, which assumes that consumers of speech can separate fact from fiction (and want to), the remedy for false speech is true speech, not government regulation. See United States v. Alvarez, 132 S. Ct. 2537 (2012) (plurality opinion).
However, the First Amendment does not protect false speech that defames a person’s reputation. In those situations, the defamed person may sue the author and publisher of the false story, but is often out of luck when the host or disseminator of the false speech is a platform like Facebook: Section 230 of the Communications Decency Act immunizes such sites from liability for content posted by their users. Social media platforms are free to self-regulate fake news, but the law does not require them to do it.
Simply put, we cannot rely on advances in technology or the legal system to solve this age-old, but ever-evolving problem of fake news.
So what’s the solution?
I’m no expert in this area — I’m just a concerned citizen, consumer of news, and voting rights advocate — but I’m inclined to agree with one of the themes that emerged from the Pew/Elon survey:
“Tech can’t win the battle. The public must fund and support the production of objective, accurate information. It must also elevate information literacy to be a primary goal of education.”
Since the election, I’ve been more cautious about unreliable news, and I’ve been spending more time teaching my children how to detect it. Recently, I was pleased to find a children’s book to help me with this effort: Nancy Clancy Late-Breaking News by Jane O’Connor (Illustrated by Robin Preiss Glasser). Zayla, my six-year-old, borrowed it from the library.
In this book, Nancy and her classmates staff the Third Grade Gazette, a publication that covers the news most relevant to their young lives: recaps of school trips, cafeteria menu changes, advice by kids for kids, and historical background about their school. As the kids search for the next big story, they learn the importance of reading critically, checking sources, and verifying facts.
If only more adults had these skills (or cared to use them) in 2016. Would it have made a difference?
*For my perspective on anonymity on the internet, see Anonymity Doesn’t Only Protect The Trolls (It Protects Nice People Too).