Much of what academia teaches about epistemology could be considered individual inquiry, meaning that philosophers sit back, close their eyes, and contemplate ideas using nothing but the power of their reason to discover some great truth about reality. I am no exception, sitting in a recliner, sipping coffee, and writing this post. More recently, however, philosophers have been climbing out of their armchairs and studying epistemology as a social inquiry.(1) Instead of using reason to formulate individual beliefs, they look at how larger groups develop beliefs. As one might expect, things tend to get a wee bit messy when comparing personal reasoning with social belief. If you want a first-hand experience of just how dirty things can get, I dare you to post anything on Facebook or Twitter about abortion, voting rights, or how to make a PB&J properly. The responses can be brutal. But this post is not about how individual inquiry differs from social reasoning. That is far too lengthy a topic. Instead, I want to shed some light on how reasoned information (and misinformation) spreads through social networks like a contagion and what might be done about it.
None of us are perfect in our ability to reason. When engaged in individual inquiry, reaching a false conclusion is simply our fault, or is it? One can only reason about things within one's experience. It is doubtful I could invent something anyone else would care about without some experience to prod a thought into my thick skull, and the ebb and flow of our thoughts does not pop into existence of its own volition. Our thoughts are the result of some outside interaction. When contemplating an idea heard from a friend or read in a blog, on TV, in the newspaper, and so on, we are often subject to the bias of whatever medium pushed the idea into our brain in the first place. That bias might prejudice our beliefs and lead us to a wrong conclusion, even if we reason clearly. This holds doubly true in a social setting. Individually, we can sometimes fight off the biases that creep into our reasoning. In a group environment, it is much more difficult. The question is, what can be done about it?
According to a theory called “conciliationism,” when confronted by an epistemic peer (semi-technical jargon for someone we believe can reason about as well as we can), we should adjust our beliefs in the direction of testimony that conflicts with our own intuition. Even though we might have good reason not to believe our peers, the theory tries to control the inherent biases and emotional investment that tend to form around the righteousness of our own claims. People do not like to be wrong, so when challenged, the natural reaction is to defend one’s beliefs, even in the face of contradictory information. There are good arguments against conciliationism, but they all lean towards some methodology for correcting self-doubt. They all dig deeper into the individual issues: whether the doubt is rational, whether there is authority behind the belief, how the belief is justified, and how to calibrate for evidence versus guessing. These methods are undoubtedly helpful but simply impractical to apply to every piece of information that enters our brains. As a result, society tends to outsource much of its belief formation to more general categories related to how much credence is lent to the believability of a claim rather than the scrutiny of the claim itself.
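To make the idea concrete, here is a minimal sketch of the simplest "equal weight" reading of conciliationism, where peers split the difference between their levels of confidence. This is my own toy illustration, not a formula from the literature, and the numbers are made up:

```python
def conciliate(my_credence, peer_credence, peer_weight=0.5):
    """Move my credence toward an epistemic peer's conflicting credence.

    Credences are probabilities in [0, 1]. With peer_weight=0.5
    (the "equal weight" view), the two parties split the difference.
    """
    return my_credence + peer_weight * (peer_credence - my_credence)

# I am fairly confident (0.8) in a claim; a peer I respect sits at 0.2.
# Splitting the difference lands me at 0.5: genuine uncertainty.
revised = conciliate(0.8, 0.2)
```

Lowering `peer_weight` models the anti-conciliationist intuition that our own reasoning deserves extra standing; at `peer_weight=0` we simply stand our ground.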
How much credence a belief attains can come from many places. “Credence” is a philosophically loaded term for the process of assigning probabilities to propositions. In the epistemology of credence, there are again methodologies we can use to determine how much probability we should assign to the sources of our information. In that sense, credence probabilism is a normative study. With proper consideration and correct logic, some philosophical calculus can be produced that tells us how we ought to weigh our sources. But again, to quote the meme, “ain’t nobody got time for that.” Instead of considering probabilities, credence is often less about calculus and more about emotion, especially in a social setting. As one might guess, the emotions of a large group do not always lend themselves to rational thought.
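For the curious, that "philosophical calculus" is usually Bayesian. Here is a toy sketch of how a credence ought to move when a source of known reliability asserts a claim; the illustration and its reliability numbers are my own, not from the post's sources:

```python
def update_credence(prior, p_assert_if_true, p_assert_if_false):
    """Bayes' rule: posterior credence in a claim after a source asserts it.

    prior             -- credence in the claim before the testimony
    p_assert_if_true  -- chance the source would assert it if it were true
    p_assert_if_false -- chance the source would assert it if it were false
    """
    numerator = p_assert_if_true * prior
    return numerator / (numerator + p_assert_if_false * (1 - prior))

# Mildly skeptical (0.3) of a claim; a source that is right 80% of the
# time asserts it. Credence rises to roughly 0.63, not to certainty.
posterior = update_credence(0.3, 0.8, 0.2)
```

Notice that the update depends entirely on how reliable we take the source to be, which is exactly the judgment that, in practice, gets made by emotion rather than arithmetic.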
I shan’t go into all the dynamics that might skew how we lend credence individually. Nor do I think there is room in a blog post to consider all the ins and outs of credence probabilism. However, conformity bias, where individual agents are more likely to espouse the beliefs of a social group even if they secretly disagree, plays a significant role. But so do the perceived authority of sources of testimony, the consideration of different types of evidence, and even evolutionary forces like agenticity (a theory of belief-dependent realism in which what we believe determines our reality instead of what is).(2) In all of this, I think it is fair to say that when emotions, and not some more rigorous methodology, rule the way we judge information, we tend to get things wrong … a lot. And when we lend credence to the wrong source of information, we mislead ourselves, formulate erroneous beliefs, invest emotionally in the rightness of those beliefs, and unintentionally mislead our peers … a lot. How do we know this? Using network-based epistemology models skewed for credence, it is well established that our beliefs spread more rapidly based on the credence they are given than on their veracity.(1)
Ok, now comes the damnable thing of it all … one might think that the way to battle lousy information in an epidemic of conformity-biased, credence-skewed, non-normative, evolutionarily influenced, emotion-driven beliefs would be to flood a social network with reasoned information disproving the belief. Of course, you would be wrong, but not for the reason you might think. Using our epistemology models of contagion, we know that when false information is lent credence in an extensive network of people, it spreads more quickly than truthful information can overtake it. As a result, social groups are likely to internalize those false beliefs more quickly. Especially in large networks like social media, trying to combat inaccurate beliefs with a flood of well-reasoned beliefs is futile. This is because in a static contagion model, meaning one without skewing for credence, there is little hope people will internalize the well-reasoned information before becoming emotionally invested in their false belief. In fact, according to the “Zollman effect,” the more groups with vast network connections communicate on any given topic, the less likely they are to reach a correct consensus.(3)
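To see why the flood of good information loses the race, here is a deliberately crude contagion sketch. It is my own toy model, not the network epistemology models cited above: a belief spreads through a fully mixed population, and its adoption rate depends only on the credence lent to it; truth never enters the model at all:

```python
def spread(n_agents, credence, steps=10):
    """Deterministic toy contagion. One agent starts with a belief; each
    step, every non-believer adopts it with probability proportional to
    credence * (share of believers). Returns expected believers per step.
    Note that the belief's truth appears nowhere in the dynamics."""
    believers = 1.0
    history = [believers]
    for _ in range(steps):
        believers += (n_agents - believers) * credence * believers / n_agents
        history.append(believers)
    return history

# Same population, same one-agent head start: a high-credence falsehood
# races past a low-credence truth and saturates the network first.
false_high = spread(100, credence=0.9)
true_low = spread(100, credence=0.3)
```

After ten steps the high-credence belief has captured nearly the whole population while the low-credence one is still in the teens, which is the dynamic behind "it spreads more quickly than truthful information can overtake it."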
Things look bleak, but there is hope! Understanding how the contagion of false beliefs spreads, the way to combat inaccurate information is to flood the network with data that skews the credence factors of the network, not to challenge the data itself. This sounds counterintuitive, but bear with me … the factors that cause people to lend credence to incorrect information are the same factors that affect how people judge well-reasoned information. When a methodology is used to scrutinize a belief, the data supporting false propositions crumbles, reducing the supporting credence. Happily, well-reasoned beliefs enjoy a bonus of increased credence, as their propositions tend to survive scrutiny. When applied to credence-skewed epistemic models, the spread of high-credence data has a doubling effect. Not only do skeptical recipients of well-reasoned data shift their credence scores towards well-supported premises, but the agents offering highly credible data also raise the credence of their own propositions, making them more confident in their own beliefs, which in turn more intensely affects the next skeptic they encounter. As the contagion of high-credence agents in a network skews towards good information, it skews the entire network towards more reasonable beliefs. Put bluntly, instead of challenging someone who holds a false belief with contradictory facts, it is more effective to challenge the credence of their belief by demonstrating that an alternative view is more righteously justified. Shake the confidence in their belief system, and maybe, just maybe, a contagion of credence will take hold, allowing more reasonable beliefs to thrive.
1. Goldman A, O’Connor C. Social Epistemology. In: Zalta EN, editor. The Stanford Encyclopedia of Philosophy [Internet]. Spring 2021. Metaphysics Research Lab, Stanford University; 2021 [cited 2021 Aug 1]. Available from: https://plato.stanford.edu/archives/spr2021/entries/epistemology-social/
2. Grayling AC. Psychology: How we form beliefs. Nature. 2011 Jun;474(7352):446–7.
3. What is the Zollman effect? [Internet]. [cited 2021 Aug 1]. Available from: https://jarche.com/2020/11/what-is-the-zollman-effect/