There’s a lot of discussion about what constitutes fake news, what impact it has, and whether blocking it is the bigger threat.
I’d like to instead talk about the perception of truth. How do we change the norms of publishing so that a mistake is detected rather than amplified? How do we broaden the context that an article lives within and dynamically update it as time passes and others expand on the original article? In what context are writers presenting their reporting and opinions?
I started reading Neil Postman’s “Amusing Ourselves to Death” the other day. It was written in 1985, but it feels very relevant today. I’m not that far into it, but the second chapter delves into the interplay between truth and the medium we communicate in, and how that medium shapes what we take to be true.
As a culture moves from orality to writing to printing to televising, its ideas of truth move with it. … Truth, like time itself, is a product of a conversation man has with himself about and through the techniques of communication he has invented.
[The idea] that there is a content called “the news of the day” was entirely created by the telegraph (and since amplified by newer media)… The news of the day is a figment of our technological imagination. … Cultures without speed-of-light media—let us say, cultures in which smoke signals are the most efficient space-conquering tool available—do not have news of the day.
It is not really a deep insight to say that the news of the minute, the “trending” news, is something we have created with the systems we have built. A lot of attention has (somewhat rightly) focused on Facebook and Google, but the Open Web is much larger than that.
We’ve lowered the barrier to publishing. We’ve changed the medium through which we express truth, but we haven’t really changed the norms or means by which we enable readers to judge truth.
Let’s compare how truth is perceived in some different mediums:
1. Newspaper journalistic standards in the 1980s (for instance) relied on “balance” and “unbiased” language and kept opinion in separate sections. All of this was published by some “reputable” source: a publisher with a long history that was known to the reader. And typically there was not much competition within any particular geographic region (though this hasn’t always been true), so it was pretty easy to know the range of writers and publishers.
2. Scientific journals rest on citations and peer review. Ostensibly the data/methods are all available so someone could reproduce the experiments. But doing so is often not easy. The perceived truth is determined both by the reputation of the journal (and by proxy, who reviewed it) as well as explicitly referencing other papers that may disagree. A paper that ignores existing literature is unlikely to get past reviewers.
3. For an article on the Web we borrow a lot of our norms from (1) and (2). We have also added a social validation of truth by displaying a count of likes, the number of comments, or the number of people who have shared an article. Rarely is anything shown about the people providing the social validation other than a simple count. Sometimes it is also where the content ranks in a search, or perhaps what the post is linking to. But the text of a link and the content itself are entirely within the writer’s control.
Obviously, for all of these cases I am ignoring lots of background on all the work that people do in research, investigation, validation, and writing. The point here is not what went into publishing an article, but what a new reader sees. Why does a reader trust it? Of course the reader can judge the written evidence for themselves, but we know of a huge list of cognitive biases that the reader must contend with. Additionally, it takes a lot of time to research the truth of an article.
Both (1) and (2) do nothing to handle the case where the author has simply made a mistake. They have mechanisms for referring to articles published before the current one, but the web is capable of dynamically updating to refer to things published afterwards. An article does not need to be static.
How do we take the best of (1) and (2) and adapt them into a world where the reader can better judge truth in (3)?
I don’t really have an answer. I’m not sure these are even the right questions, but let’s try an idea, right here and now, since this post is currently asking you to determine its validity. Don’t judge just my words. Judge my words against the words of others in the world. You’ve read this far, so let’s add some related content to this article and see how it affects your personal search for the truth. Then let’s reconvene below and discuss some more…
Onward…
How has your perspective of this post changed? Did you open any of the links? Did you get lost in them? Were you overwhelmed by them? Do you feel that you can more accurately judge truth? Do you find my ideas more valid or less valid? Getting back to my click-bait headline: Is this post in accordance with fact or reality (true)?
I haven’t actually read all of those posts. I looked at a few. The Stanford Web Credibility Project from 2002 was very interesting and relevant. Its top recommendation is to:
1. Make it easy to verify the accuracy of the information on your site.
What if, in order for a writer’s opinion or reporting to be considered part of humanity’s search for truth, that writer were expected to publish it alongside others’ content? What if publishers were expected to send traffic to others in order for their own words to have any weight? Why should I care what you have to say if you’re afraid to algorithmically link me to other people who are providing other opinions/insights? What if readers learned to instantly dismiss any article that is not willing to automatically link to others?
Displaying related content is a fundamental part of the web now, but so far we have mostly only used it to keep users on our own sites, or to make money with advertisements. Maybe there is another use case? Yes, the user could go to a search engine, but maybe we can improve the truth seeker’s user experience beyond that.
This related content would not need to be static. As other posts link to yours, they could be weighted more heavily and surface on your post. So your post does not only link backward in time; it becomes a living document that also links forward to how others have built on your work.
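To make that idea concrete, here is a minimal sketch (Python, with invented names and a deliberately naive scoring rule) of how a post’s related-content list might be re-ranked as new inbound links appear. It is one possible weighting, not a description of any existing system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RelatedLink:
    url: str
    title: str
    inbound_links: int = 0  # how many later posts now point at this one
    first_seen: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def score(link: RelatedLink) -> float:
    # Naive weighting: every post that links back adds weight, so newer
    # responses can rise above the references the author chose at publish time.
    return 1.0 + link.inbound_links

def related_content(links: list[RelatedLink], limit: int = 5) -> list[RelatedLink]:
    # Re-rank on every page view so the list stays "living"
    # rather than being frozen when the article was published.
    return sorted(links, key=score, reverse=True)[:limit]

# Example: a widely cited rebuttal outranks an older reference.
links = [
    RelatedLink("https://example.org/original-source", "Original source", inbound_links=1),
    RelatedLink("https://example.org/rebuttal", "A widely cited rebuttal", inbound_links=7),
]
print([link.title for link in related_content(links)])
```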
Some systems already try to do this. WordPress has pingbacks, where a link from some other site generates a comment on the post it links to. It is an attempt to keep an old post connected to the conversation, but there is no weighting of one pingback relative to another. It doesn’t really scale for a very popular post. And a post that is two hops back in the chain is not necessarily considered.
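For reference, a pingback itself is just a tiny XML-RPC call, which is part of why the mechanism carries no notion of weight or of links further back in the chain. Here is a rough sketch of what the linking site sends (Python, with made-up URLs; WordPress serves the endpoint at /xmlrpc.php):

```python
import xmlrpc.client

# The linking site discovers the target's pingback endpoint (via the
# X-Pingback header or a <link rel="pingback"> tag), then announces
# "source links to target". That single fact is all the receiving
# post ever learns about the new link.
endpoint = xmlrpc.client.ServerProxy("https://example-blog.com/xmlrpc.php")
result = endpoint.pingback.ping(
    "https://my-site.example/my-response-post",    # source: the new post
    "https://example-blog.com/2017/original-post", # target: the post it links to
)
print(result)
```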
Of course a related-content system still has potential problems of bias due to who controls the algorithms. Open sourcing the algorithms would help, as would having a standard mechanism through which multiple providers could offer the service in a compatible way. Building trust here would be hard. Getting publishers to trust content from competing publishers being inserted into their pages would be especially difficult.
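One way to imagine that standard mechanism, building on the sketch above: a small provider interface that any publisher could accept and any vendor could implement, so results from several independent algorithms get merged before the reader sees them. Again, this is a hypothetical shape, not an existing standard.

```python
from typing import Protocol

class RelatedContentProvider(Protocol):
    """Hypothetical interface a publisher could accept from any provider."""

    def related(self, url: str, limit: int) -> list[RelatedLink]:
        """Return links relevant to the article at `url`, best first."""
        ...

def render_related(url: str, providers: list[RelatedContentProvider]) -> list[RelatedLink]:
    # Merge results from several independent providers so no single
    # algorithm (or publisher) decides what the reader is shown,
    # then re-rank with the same naive weighting as above.
    merged: dict[str, RelatedLink] = {}
    for provider in providers:
        for link in provider.related(url, limit=5):
            merged.setdefault(link.url, link)
    return related_content(list(merged.values()))
```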
But maybe by changing the norms through which we judge truth we can get back to seeking truth together. Or maybe there are other ideas for how we should be presenting our articles to help humanity find truth.
Greg,
Thanks for sharing your thoughts on this.
I think another related challenge that makes this even harder is that you also have to make it easier for readers to reason about the fullness of the context provided. X might be true (perhaps “John pulled out a gun and shot Sam”). But the meaning of that could change dramatically based on the context (“Sam was armed and breaking into John’s house, and John shot him” vs “Sam was protecting an innocent who John was robbing, and John shot him”).
Since the challenge isn’t only to verify truthfulness but also the adequacy of the context in which to reason about truthful statements, I like your idea of linking to other articles. More discussion should correlate with more context. Publishers should also have an incentive to correct any grossly misleading context (since that would likely result in more traffic for them).
Still, it seems like you might still need a filter in some cases. In the search for truth about climate change, for example, is it actually responsible to make sure deniers are represented in the discussion? Does that help readers determine the truth, or does it make it even harder to ascertain the truth amid all the noise? I’m not sure, but I suspect it’s the latter.
Hey Evan
In many ways this is an extreme example because there has been so much written and debated. But my first thought is “yes,” we should have those opinions represented in the discussion, but also, the scientific community’s evidence should be presented on the sites of those who deny climate change as well. If they aren’t willing to also include links to other points of view, then I guess they don’t feel their arguments are very compelling.
Rather than debating them, just point out that they aren’t willing to even link their readers to the opposing points of view.
Hi Greg. I like the idea of aggregating related content, if only for the added context. I feel like Google News uses a proto-version of that aggregation when it lumps together headlines that are ostensibly about the same topic. And the added context is useful when wacky headlines sometimes make it past the filter, because it’s relatively easy to pick up that they’re outliers when compared to the other related headlines.